Gaussian stationary processes: adaptive wavelet decompositions, discrete approximations and their convergence


Gaussian stationary processes: adaptive wavelet decompositions, discrete approximations and their convergence

Gustavo Didier and Vladas Pipiras
University of North Carolina at Chapel Hill

August 28, 2007

Abstract

We establish particular wavelet-based decompositions of Gaussian stationary processes in continuous time. These decompositions have a multiscale structure, independent Gaussian random variables in high-frequency terms, and the random coefficients of low-frequency terms approximating the Gaussian stationary process itself. They can also be viewed as extensions of the earlier wavelet-based decompositions of Zhang and Walter (1994) for stationary processes, and of Meyer, Sellan and Taqqu (1999) for fractional Brownian motion. Several examples of Gaussian stationary processes are considered, such as processes with rational spectral densities. An application to simulation is presented, where an associated Fast Wavelet Transform-like algorithm plays a key role.

1 Introduction

Consider a real-valued Gaussian stationary process X = {X(t)}_{t∈ℝ} having the integral representation

    X(t) = ∫_ℝ g(t − u) dB(u) = ∫_ℝ e^{itx} ĝ(x) dB̃(x),   (1.1)

where g ∈ L²(ℝ) is a real-valued function, called a kernel function, ĝ ∈ L²(ℝ) is its Fourier transform defined by the convention ĝ(x) = ∫_ℝ e^{−ixu} g(u) du, {B(u)}_{u∈ℝ} is a standard Brownian motion and {B̃(x)}_{x∈ℝ} = {B₁(x) + iB₂(x)}_{x∈ℝ} is a complex-valued Brownian motion satisfying B₁(−x) = B₁(x), B₂(−x) = −B₂(x), x ≥ 0, with two independent Brownian motions {B₁(x)}_{x≥0} and {B₂(x)}_{x≥0} such that E B₁(1)² = E B₂(1)² = (4π)⁻¹. The latter conditions on B̃(x) ensure that the second integral in (1.1) is real-valued and has the same covariance structure as the first integral in (1.1). Many Gaussian stationary processes, especially those of practical interest, can be represented by (1.1). See, for example, Rozanov (1967)

(The second author was supported in part by the NSF grant DMS. AMS Subject classification. Primary: 60G10, 42C40, 60G15.)
Keywords and phrases: Gaussian stationary processes, wavelets, simulation, Ornstein-Uhlenbeck process, processes with rational spectral densities, ARMA time series, Riesz basis.

and others. The covariance function r(t) = E X(t)X(0) of X and its Fourier transform r̂ are given by

    r(t) = (g ⋆ g̃)(t) = (2π)⁻¹ (|ĝ|²)ˇ(t),   r̂(x) = |ĝ(x)|²,   (1.2)

where g̃(u) = g(−u) is the time-reversion operation and ⋆ stands for convolution. The Fourier transform r̂(x) is also known as the spectral density of X. Note, however, that the two rightmost expressions in (1.2) are not meaningful for general g ∈ L²(ℝ) because the function r may be in neither L² nor L¹.

Under mild assumptions on g, and specialized to the Gaussian case, Theorem 1 of Zhang and Walter (1994) states that a Gaussian process X in (1.1) has a wavelet-based expansion

    X(t) = Σ_n a_{J,n} θ_J(t − 2^{−J}n) + Σ_{j=J}^∞ Σ_n d_{j,n} Ψ_j(t − 2^{−j}n),   (1.3)

for any J ∈ ℤ, with convergence in the L²(Ω)-sense for each t. Here, a_J = {a_{J,n}}_{n∈ℤ} and d_j = {d_{j,n}}_{j≥J, n∈ℤ} are independent N(0,1) random variables. The functions θ_j and Ψ_j are defined through their Fourier transforms as

    θ̂_j(x) = ĝ(x) 2^{−j/2} φ̂(2^{−j}x),   Ψ̂_j(x) = ĝ(x) 2^{−j/2} ψ̂(2^{−j}x),   (1.4)

where φ and ψ are scaling and wavelet functions, respectively, associated with a suitable orthogonal Multiresolution Analysis (MRA, in short). For more information on scaling functions, wavelets and MRAs, see for example Mallat (1998), Daubechies (1992), or many others. Moreover, the coefficients a_{j,n} and d_{j,n} in (1.3) can be expressed as

    a_{j,n} = ∫_ℝ X(t) θ̃_j(t − 2^{−j}n) dt,   d_{j,n} = ∫_ℝ X(t) Ψ̃_j(t − 2^{−j}n) dt,   (1.5)

with the functions θ̃_j and Ψ̃_j, dual to θ_j and Ψ_j, defined through

    (θ̃_j)^∧(x) = ĝ(x)⁻¹ 2^{−j/2} φ̂(2^{−j}x),   (Ψ̃_j)^∧(x) = ĝ(x)⁻¹ 2^{−j/2} ψ̂(2^{−j}x).   (1.6)

Zhang and Walter (1994) call (1.3) a Karhunen-Loève-like (KL-like, in short) wavelet-based expansion. It is discussed in several textbooks, for example, Walter and Shen (2001) and Vidakovic. The sum Σ_n a_{J,n} θ_J(t − 2^{−J}n) in (1.3) is interpreted as an approximation term at scale 2^{−J}, and the sums Σ_n d_{j,n} Ψ_j(t − 2^{−j}n), j ≥ J, are interpreted as detail terms at the finer scales 2^{−j}, j ≥ J. The KL-like expansion is related to the wavelet-vaguelette expansions of Donoho (1995), the expansions of Benassi and Jaffard (1994), and others, where J = −∞ in (1.3) and hence the first approximation term in (1.3) is absent.
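The identity r = g ⋆ g̃, with r̂(x) = |ĝ(x)|², in (1.2) is easy to probe numerically. The following is a minimal illustrative sketch (not part of the paper): it takes the exponential kernel g(u) = e^{−u} 1_{u≥0}, which is the OU kernel of Section 4 with σ = λ = 1 and for which r(t) = e^{−|t|}/2, and checks r(t) = (g ⋆ g̃)(t) by a Riemann sum.

```python
import numpy as np

# Illustrative kernel g(u) = exp(-u) for u >= 0 (the OU kernel of
# Example 4.1 with sigma = lambda = 1); its autocovariance is
# r(t) = exp(-|t|) / 2.
def g(u):
    return np.where(u >= 0.0, np.exp(-np.maximum(u, 0.0)), 0.0)

# r(t) = int g(u) g(u - t) du, approximated by a Riemann sum; g is
# supported on [0, infinity), and the tail beyond u = 50 is negligible.
du = 0.001
u = np.arange(0.0, 50.0, du)

def r(t):
    return float(np.sum(g(u) * g(u - t)) * du)

for t in [0.0, 0.5, 1.0, 2.0]:
    exact = np.exp(-abs(t)) / 2
    assert abs(r(t) - exact) < 5e-3
```

The same computation with the kernels of Section 4 recovers their rational spectral densities via r̂ = |ĝ|².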
Though the approximation term n a J,nθ J t 2 J n in 1.3 involves independent N 0,1 random variables a J,n which are convenient to deal with in theory, the term is also unnatural in one important respect. It is customary with wavelet bases that not only an approximation term but also the respective approximation coefficients, the sequence a J,n in this case, approximate the signal at hand. The sequence a J,n does not have this property because it consists of independent random variables and hence cannot approximate a typically dependent stationary process Xt. In this work, we modify the approximation terms as a J,n θ J t 2 J n = 2 X J,n Φ J t 2 J n 1.7

so that the new approximation coefficients X_J = {X_{J,n}}_{n∈ℤ} now have this property, namely,

    2^{−J/2} X_{J,[2^J t]} ≈ X(t)   (1.8)

in a suitable sense, as J → ∞, where [x] denotes the integer part of x. In the relation (1.7) above,

    Φ̂_J(x) = (ĝ(x)/ĝ_J(2^{−J}x)) 2^{−J/2} φ̂(2^{−J}x) = θ̂_J(x)/ĝ_J(2^{−J}x),   (1.9)

with the discrete Fourier transform ĝ_J(y) of the sequence g_J = {g_{J,n}}. The discrete Fourier transform of a sequence g = {g_n} is defined by ĝ(x) = Σ_n g_n e^{−inx}, x ∈ ℝ, and is periodic with period 2π. The random sequence X_J = {X_{J,n}} in (1.7) is defined as

    (X_J)^∧(x) = ĝ_J(x) â_J(x)   (1.10)

in the frequency domain. Moreover, we expect that

    X_{J,n} = ∫_ℝ X(t) Φ̃_J(t − 2^{−J}n) dt,   (1.11)

where

    (Φ̃_J)^∧(x) = (ĝ_J(2^{−J}x)/ĝ(x)) 2^{J/2} φ̂(2^{−J}x).   (1.12)

The relation (1.7) can be informally and easily verified by taking the Fourier transforms of its two sides. It is well known (e.g. Daubechies 1992) in the deterministic context that (1.8) is a property of the corresponding wavelet basis functions. When

    G_J(2^{−J}x) := ĝ_J(2^{−J}x)/ĝ(x) ≈ 1,   (1.13)

we have (Φ̃_J)^∧(x) ≈ 2^{J/2} φ̂(2^{−J}x) ≈ 2^{J/2} for large J (typically, φ̂(0) = ∫_ℝ φ(t) dt = 1) and hence, by (1.11), we expect that, informally,

    2^{−J/2} X_{J,n} = (2^{−J/2}/2π) ∫_ℝ X̂(x) e^{ix2^{−J}n} (Φ̃_J)^∧(x) dx ≈ (1/2π) ∫_ℝ X̂(x) e^{ix2^{−J}n} dx = X(2^{−J}n).

The conditions for (1.7) and (1.8) will thus involve the function G_J given in (1.13). Though the modification (1.7) appears small, it is fundamental and important in several ways, and surprisingly leads to many research questions. For example, the modification allows for several applications such as simulation, considered in Section 8 below. The wavelet-based decomposition (1.3) with (1.7) can also be viewed as a generalization, to the wide framework of stationary Gaussian processes, of a particular wavelet decomposition of fractional Brownian motion established in Sellan (1995) and Meyer, Sellan and Taqqu (1999). It thus shows that self-similarity

of fractional Brownian motion, for instance, is not a necessary condition for constructing wavelet decompositions in that fashion. See also Pipiras (2004), who explored a similar decomposition for a non-Gaussian self-similar process called the Rosenblatt process. Didier and Pipiras (2006) study analogous wavelet decompositions for stationary time series in discrete time. The decomposition (1.3) with (1.7) can be viewed as being more general than (1.3), becoming (1.3) when X_j = a_j are independent N(0,1) random variables. For this reason, both decompositions should be viewed under one framework. This is the view taken in the following definition and in the parallel paper Didier and Pipiras (2006).

Definition 1.1 Decompositions (1.3), and (1.3) with (1.7), will be called adaptive wavelet decompositions.

Adaptiveness refers to the fact that the basis functions are chosen based on the dependence structure of the underlying stationary Gaussian process. The rest of the paper is organized as follows. In Section 2, we briefly introduce a wavelet basis to be used in wavelet-based decompositions. In Section 3, we state the assumptions on the discrete deterministic approximations g_J and the functions g. In Section 4, we consider several examples of Gaussian stationary processes and their discrete approximations. The KL-like wavelet decomposition (1.3) and its modification (1.7) are proved in Section 5. In particular, we reprove the decomposition (1.3) because inaccurate assumptions were used in Zhang and Walter (1994). We show that there is a FWT-like algorithm relating {X_{j,n}} across different scales in Section 6. In Sections 7 and 8, we examine convergence of the discrete random approximations X_J and illustrate simulation in practice. Finally, in Appendix A, we consider integration of stationary Gaussian processes.

2 Wavelet bases of L²(ℝ)

We specify here a scaling function φ and a wavelet ψ which will be used below. There are many choices for these functions.
We shall work with particular Meyer wavelets (Meyer 1992, Mallat 1998) because of their nice theoretical properties. The results of this paper and their proofs rely on specific properties of the selected Meyer wavelets. Other wavelet bases could be taken, e.g. the celebrated Daubechies wavelets, and are currently being investigated. Meyer wavelets are also used in Zhang and Walter (1994), Meyer et al. (1999) and others. Let S(ℝ) be the Schwartz class of C^∞ functions f that decay faster than any polynomial at infinity and so do their derivatives, that is,

    lim_{|t|→∞} |t|^m |d^n f(t)/dt^n| = 0, for any m, n ≥ 1.

We can choose a scaling function φ ∈ S(ℝ) satisfying: φ̂(x) ∈ [0,1], φ̂(x) = φ̂(−x),

    φ̂(x) = 1 for |x| ≤ 2π/3,   φ̂(x) = 0 for |x| > 4π/3,

and φ̂(x) decreases on [0, ∞). The corresponding conjugate mirror filter (CMF, in short) u has the discrete Fourier transform

    û(x) = √2 φ̂(2x) for |x| ≤ 2π/3,   û(x) = 0 for 2π/3 < |x| ≤ π.
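The preceding specification can be made numerically concrete. The sketch below is an illustration only: it uses the common degree-7 polynomial taper, which yields a C³ Meyer-type φ̂ rather than the Schwartz-class choice above, but has the same support and monotonicity pattern, and it checks the orthonormality identity Σ_k |φ̂(x + 2πk)|² = 1 together with the support of the associated wavelet.

```python
import numpy as np

# A concrete Meyer-type scaling function in the Fourier domain:
# phi_hat = 1 for |x| <= 2pi/3, a smooth taper on 2pi/3 < |x| <= 4pi/3,
# and 0 beyond.  nu is the usual degree-7 polynomial taper; the paper's
# phi is a smoother Schwartz-class variant, so this only illustrates
# the support and decay pattern.
def nu(t):
    t = np.clip(t, 0.0, 1.0)
    return t**4 * (35 - 84*t + 70*t**2 - 20*t**3)   # nu(t) + nu(1-t) = 1

def phi_hat(x):
    ax = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(ax)
    out[ax <= 2*np.pi/3] = 1.0
    mid = (ax > 2*np.pi/3) & (ax <= 4*np.pi/3)
    out[mid] = np.cos(np.pi/2 * nu(3*ax[mid]/(2*np.pi) - 1))
    return out

# Orthonormality of the integer translates of phi is equivalent to
# sum_k |phi_hat(x + 2pi k)|^2 = 1.
x = np.linspace(-np.pi, np.pi, 1001)
total = sum(phi_hat(x + 2*np.pi*k)**2 for k in range(-3, 4))
assert np.allclose(total, 1.0)

# The Meyer wavelet modulus of (2.2), |psi_hat(x)| =
# sqrt(phi_hat(x/2)^2 - phi_hat(x)^2), vanishes for |x| <= 2pi/3
# and |x| >= 8pi/3.
z = np.linspace(-4*np.pi, 4*np.pi, 2001)
psi_abs = np.sqrt(np.maximum(phi_hat(z/2)**2 - phi_hat(z)**2, 0.0))
assert np.all(psi_abs[np.abs(z) <= 2*np.pi/3 - 1e-9] == 0.0)
assert np.all(psi_abs[np.abs(z) >= 8*np.pi/3 + 1e-9] == 0.0)
```

Any taper with ν(t) + ν(1 − t) = 1 produces the partition-of-unity identity above; smoother tapers give faster decay of φ in the time domain.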

The wavelet function ψ associated with φ is such that ψ ∈ S(ℝ) and

    ψ̂(x) = 2^{−1/2} v̂(x/2) φ̂(x/2), with v̂(x) = e^{−ix} û(x + π),   (2.1)

where v is the other CMF. One can verify that, for the Meyer wavelets,

    ψ̂(x) = e^{−ix/2} ( φ̂(x/2)² − φ̂(x)² )^{1/2}.   (2.2)

In particular, ψ̂(x) = 0 for |x| ≤ 2π/3 and |x| ≥ 8π/3. The collection of functions φ(t − k), 2^{j/2} ψ(2^j t − k), k ∈ ℤ, j ≥ 0, makes an orthonormal basis of L²(ℝ).

3 Basis functions and discrete approximations

Let g ∈ L²(ℝ) be a kernel function appearing in (1.1), and let g_J = {g_{J,n}}_{n∈ℤ}, J ∈ ℤ, be sequences of real numbers such that g_J ∈ l²(ℤ). Following Section 1 (see, in particular, (1.13)), we shall think of g_J as a discrete deterministic approximation of g at scale 2^{−J}. A discrete approximation g_J ∈ l²(ℤ) induces a discrete random approximation X_J = {X_{J,n}} defined by (1.10), that is,

    X_{J,n} = Σ_{k=−∞}^∞ g_{J,k} a_{J,n−k}   (3.1)

in the time domain, or symbolically

    (X_J)^∧(x) = ĝ_J(x) â_J(x)   (3.2)

in the frequency domain, where a_J = {a_{J,n}} are independent N(0,1) random variables (Gaussian white noise). As J → ∞, we expect that 2^{−J/2} X_{J,[2^J t]} approximates X(t) defined by (1.1). Conversely, we may think that a random discrete approximation X_J of X given by (1.1) can be represented by (3.1) with a sequence g_J. Hence, X_J also induces a deterministic discrete approximation g_J of g. We will make some of the following assumptions on g and g_J. Let L^p_loc(ℝ) consist of functions which are in L^p on any compact interval of ℝ. Set also

    G_J(x) = ĝ_J(x)/ĝ(2^J x), x ∈ ℝ.   (3.3)

Note that, with the notation (3.3), the expressions (1.9) and (1.12) become

    Φ̂_J(x) = G_J(2^{−J}x)^{−1} 2^{−J/2} φ̂(2^{−J}x),   (Φ̃_J)^∧(x) = G_J(2^{−J}x) 2^{J/2} φ̂(2^{−J}x).   (3.4)

Assumption 1: Suppose that

    ĝ^{−1} ∈ L²_loc(ℝ).   (3.5)

Assumption 2: Suppose that, for any J ∈ ℤ,

    G_J, G_J^{−1} ∈ L²_loc(ℝ).   (3.6)

6 Assumption 3: Suppose that, for any J 0 Z, max max sup sup p= 1,1 k=0,1,2 J J 0 x 4π/3 k G J x p x k <. 3.7 Assumption 4: Suppose that, for large x, k ĝx x k const, x k+1 k = 0, 1, Assumption 5: Assume that, for large J, G J 0 1 const 2 J. 3.9 As explained below, Assumptions 1 and 2 ensure that the basis functions used in decompositions are well-defined. Assumptions 3, 4 and 5 will be used to establish the modification 1.7 and to show that X J is an approximation sequence for X in the sense of 1.8. Observe that the functions θ j and Ψ j in 1.4 are well-defined pointwise through the inverse Fourier transform since θ j, Ψ j L 1 for ĝ L 2. By using Assumptions 1 and 2, the functions θ j and Ψ j in 1.6, Φ j in 1.9 and Φ j in 1.12 see also 3.4 are well-defined pointwise through the inverse Fourier transform as well. Moreover, θ j, Ψ j, Φ j, Φ j are in L 2 because their Fourier transforms are in L 2. Appendix A contains some results on defining integrals Xtftdt. See, in particular, the definition of a related function space L 2 g in A.6 of integrands ft. Since θ j, Ψ j L 2 g, the coefficients a j,n and d j,n in 1.5 are well-defined. Using properties of integrals developed in Appendix A, it is easy to see that a j,n and d j,n are independent N 0, 1 random variables. Since Φ j L 2 g, the integral in 1.11 is well-defined as well. Another consequence of the above assumptions are useful bounds on the functions Φ j, Ψ j. We will use these bounds several times below. Lemma 3.1 Under Assumptions 3 and 4 above, we have 2 j/2 Φ j 2 j u, 2 j/2 Φ j 2 j u where a constant C does not depend on j j 0, for fixed j 0. C, u, u 2 Ψ j 2 j u C2 j/2, u, u 2 Proof: By definition of Φ j in 1.9 see also 3.4 and after a change of variables, observe that 2 j/2 Φ j 2 j u = 1 e iux G j x 1 φxdx, u Since supp{ φ} { x 4π/3}, we obtain by Assumption 3 that 2 j/2 Φ j 2 j u C, u,

for a constant C which does not depend on j ≥ j₀, for fixed j₀. Using integration by parts in (3.12) twice and Assumption 3, we have

    2^{−j/2} Φ_j(2^{−j}u) = −(1/(2π u²)) ∫_ℝ e^{iux} ∂²_x ( G_j(x)^{−1} φ̂(x) ) dx, u ∈ ℝ.

By Assumption 3 and the properties of φ̂, for any j ≥ j₀,

    |2^{−j/2} Φ_j(2^{−j}u)| ≤ (C/u²) ∫_{|x|≤4π/3} ( |∂²_x G_j(x)^{−1}| + |∂_x G_j(x)^{−1}| + |G_j(x)^{−1}| ) dx ≤ C′/u², u ∈ ℝ.

The bound (3.10) for Φ_j follows from (3.13) and the last bound. The case of Φ̃_j is proved similarly. To show the bound (3.11), observe from (1.4) that

    Ψ_j(2^{−j}u) = (2^{j/2}/2π) ∫_ℝ e^{iux} ĝ(2^j x) ψ̂(x) dx, u ∈ ℝ.   (3.14)

Since supp{ψ̂} ⊂ {2π/3 ≤ |x| ≤ 8π/3}, we obtain by Assumption 4 that

    |Ψ_j(2^{−j}u)| ≤ C 2^{j/2} ∫_{2π/3}^{8π/3} dx/(1 + 2^j x) ≤ C′ 2^{−j/2}, u ∈ ℝ,   (3.15)

for constants C, C′ which do not depend on j ≥ j₀, for fixed j₀. Using integration by parts in (3.14) and Assumption 4, we have

    Ψ_j(2^{−j}u) = −(2^{j/2}/(2π u²)) ∫_ℝ e^{iux} ∂²_x ( ĝ(2^j x) ψ̂(x) ) dx, u ∈ ℝ.

Hence, by using the properties of ψ̂ and Assumption 4, for j ≥ j₀,

    |Ψ_j(2^{−j}u)| ≤ (C 2^{j/2}/u²) ∫_{2π/3≤|x|≤8π/3} ( 2^{2j} |ĝ″(2^j x)| + 2^j |ĝ′(2^j x)| + |ĝ(2^j x)| ) dx ≤ (C′ 2^{j/2}/u²) ∫_{2π/3}^{8π/3} ( 2^{2j}/(2^j x)³ + 2^j/(2^j x)² + 1/(2^j x) ) dx ≤ C″ 2^{−j/2}/u², u ∈ ℝ.

The bound (3.11) follows from (3.15) and the last bound.

4 Examples

We consider here several examples of Gaussian stationary processes together with their possible discrete approximations.

Example 4.1 The Ornstein-Uhlenbeck (OU) process X is perhaps the best-known Gaussian stationary process. It is the only Gaussian stationary process which is also a Markov process. The OU process can be represented by (1.1) with

    g(t) = σ e^{−λt} 1_{{t≥0}},   ĝ(x) = σ/(λ + ix),   (4.1)

for some λ > 0 and σ > 0. At this point, one can approximate either g or X. We do so for the process X because it has a well-known discrete approximation. Observe from (1.1) and (4.1) that, for J, n ∈ ℤ,

    X(2^{−J}(n+1)) = e^{−λ2^{−J}} X(2^{−J}n) + σ √( (1 − e^{−2λ2^{−J}})/(2λ) ) a_{J,n+1},

where {a_{J,n}}_{n∈ℤ} is a Gaussian white noise. Therefore, since we expect 2^{−J/2} X_{J,[2^J t]} ≈ X(t), it appears natural to consider the discrete approximation

    X_{J,n} = 2^{J/2} σ √( (1 − e^{−2λ2^{−J}})/(2λ) ) ( I − e^{−λ2^{−J}} B )^{−1} a_{J,n},   (4.2)

where B denotes the backshift operator (not to be confused with the Brownian motion B) and I = B⁰. In other words, X_J is an AR(1) time series (see Brockwell and Davis). In view of (4.1) and (4.2), the deterministic discrete approximations g_J have the discrete Fourier transforms

    ĝ_J(x) = 2^{J/2} σ √( (1 − e^{−2λ2^{−J}})/(2λ) ) ( 1 − e^{−λ2^{−J}} e^{−ix} )^{−1}.   (4.3)

Furthermore, ĝ and ĝ_J satisfy Assumptions 1 to 5. Indeed, Assumptions 1 and 2 hold because, for every J ∈ ℤ,

    ĝ(x)^{−1} = (λ + ix)/σ,   G_J(x) = 2^{J/2} √( (1 − e^{−2λ2^{−J}})/(2λ) ) (λ + i2^J x) ( 1 − e^{−λ2^{−J}} e^{−ix} )^{−1}   (4.4)

and G_J^{−1} are continuous functions on ℝ, and thus square-integrable on compact sets. To show Assumption 3, consider the domain

    D_{J₀} = { z ∈ ℂ : 0 ≤ Re z ≤ 2^{−J₀}λ, |Im z| ≤ 4π/3 }.

The functions

    F(z) = z/(1 − e^{−z}), z ∈ ℂ \ {2kπi, k ∈ ℤ \ {0}}, with F(0) = 1,

and F(z)^{−1} are holomorphic and different from zero on the open set D_{J₀}^ε = { w ∈ ℂ : inf_{z∈D_{J₀}} |z − w| < ε } ⊃ D_{J₀}, for small enough ε. By setting z = 2^{−J}(λ + ix) ∈ D_{J₀}, we have G_J(x) = C_J F(z) for all J ≥ J₀ and |x| ≤ 4π/3, where 0 < c₁ ≤ C_J ≤ c₂ < +∞ for some c₁, c₂. Hence, Assumption 3 must hold. Assumption 4 follows from the relation

    ∂^k ĝ(x)/∂x^k = σ (−i)^k k!/(λ + ix)^{k+1}, k = 0, 1, 2, ...

Finally, Assumption 5 is also satisfied because

    |G_J(0) − 1| = | √( (λ2^{−J}/(1 − e^{−λ2^{−J}})) (1 + e^{−λ2^{−J}})/2 ) − 1 | ≤ C₁ | (λ2^{−J}/(1 − e^{−λ2^{−J}})) (1 + e^{−λ2^{−J}})/2 − 1 | ≤ C₂ 2^{−J}

for constants C₁, C₂ > 0.
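The AR(1) recursion above yields an exact simulation scheme for the OU process on the grid {2^{−J}n}. A short sketch follows; λ, σ, the step Δ = 2^{−J} and the sample size are arbitrary illustrative choices.

```python
import numpy as np

# Exact discretization of the OU process on the grid {n*Delta}:
#   X((n+1)Delta) = exp(-lam*Delta) X(n*Delta)
#                   + sigma * sqrt((1 - exp(-2*lam*Delta)) / (2*lam)) a_{n+1}.
rng = np.random.default_rng(0)
lam, sigma, Delta, n = 1.0, np.sqrt(2.0), 0.1, 200_000

rho = np.exp(-lam * Delta)
scale = sigma * np.sqrt((1 - np.exp(-2 * lam * Delta)) / (2 * lam))

x = np.empty(n)
x[0] = rng.normal(scale=sigma / np.sqrt(2 * lam))   # stationary start
for i in range(1, n):
    x[i] = rho * x[i - 1] + scale * rng.normal()

# Stationary variance is sigma^2 / (2*lam) = 1 here, and the lag-1
# autocorrelation is exp(-lam*Delta).
assert abs(x.var() - 1.0) < 0.1
assert abs(np.corrcoef(x[:-1], x[1:])[0, 1] - rho) < 0.02
```

Up to the 2^{J/2} normalization, the simulated path is the sequence 2^{−J/2} X_{J,n} of (4.2) with Δ = 2^{−J}.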

Example 4.2 Consider a Gaussian stationary process (1.1) with a kernel function g having the Fourier transform

    ĝ(x) = f(x)/h(x).   (4.5)

Here,

    f(x) = Π_{k∈P₁} p(a_k, b_k; x) p(−a_k, b_k; x) Π_{m∈P₂} p(0, c_m; x),   (4.6)

    h(x) = Π_{k∈Q₁} p(d_k, e_k; x) p(−d_k, e_k; x) Π_{m∈Q₂} p(0, f_m; x),   (4.7)

with

    p(a, b; x) = ix + ia + b,   (4.8)

where P₁, P₂, Q₁ and Q₂ are finite sets of indices. It is assumed that the polynomials f(x) and h(x) have no common roots, and also that e_k ≠ 0 for k ∈ Q₁, and f_m ≠ 0 for m ∈ Q₂. Note that the polynomials f and h are Hermitian symmetric. Hence, ĝ is also Hermitian symmetric and thus g is real-valued. Kernel functions ĝ as in (4.5) correspond to rational spectral densities (Rozanov 1967). To define a discrete approximation ĝ_J of ĝ, consider first p(a, b; x), which is a building block of f in (4.6) and h in (4.7). Define a discrete approximation of p(a, b; x) as

    p_J(a, b; x) = 2^J ( 1 − e^{−2^{−J}b − 2^{−J}ia − ix} ),   (4.9)

and also, in analogy to (3.3), set

    P_J(x) = p_J(a, b; x)/p(a, b; 2^J x) = ( 1 − e^{−2^{−J}b − 2^{−J}ia − ix} ) / ( ix + 2^{−J}ia + 2^{−J}b ).   (4.10)

The form (4.9) ensures that p_J(a, b; x) p_J(−a, b; x) and p_J(0, b; x) are Hermitian symmetric functions. Define now a discrete approximation ĝ_J of ĝ by (4.5), where the p's in (4.6) and (4.7) are replaced by the p_J's. The function G_J is then given by (4.5), where the p's in (4.6) and (4.7) are replaced by the P_J's. We shall now verify that ĝ and G_J satisfy Assumptions 1-5. Assumptions 1 and 2 are satisfied because the building blocks p^{−1}, P_J and P_J^{−1} for ĝ^{−1}, G_J and G_J^{−1} are continuous functions on the real line. To show Assumption 3, it is enough to prove (3.7) for the function P_J. Similarly to the case of the Ornstein-Uhlenbeck process, we are interested in the behavior of F(z) = (1 − e^{−z})/z and F(z)^{−1} for z = ix + 2^{−J}(ia + b), where |x| ≤ 4π/3 and J ≥ J₀. So, define the set

    D_{J₀} = { z ∈ ℂ : 0 ≤ Re z ≤ 2^{−J₀}b, |Im z| ≤ 4π/3 + (Re z)|a|/b },

and note that z = ix + 2^{−J}(ia + b) ∈ D_{J₀} when |x| ≤ 4π/3 and J ≥ J₀. Also, consider the set D_{J₀}^ε = { w ∈ ℂ : inf_{z∈D_{J₀}} |z − w| < ε }. The functions F and F^{−1} are holomorphic on D_{J₀}^ε ⊃ D_{J₀} for small enough ε, and thus Assumption 3 holds.
Consider now Assumption 4. The condition (3.8) is satisfied for k = 0 by the definition of ĝ and the implicit assumption ĝ ∈ L²(ℝ) (that is, the polynomial h has a higher degree than the polynomial f). When k = 1, note that

    ∂ĝ(x)/∂x = ( f′(x)h(x) − f(x)h′(x) ) / h(x)²

and the condition (3.8) follows since the differences between the degrees of f′(x) and h(x), and between those of f(x)h′(x) and h(x)², have increased by 1. The case k = 2 can be argued in a similar way. To show (3.9) in Assumption 5, it is enough to prove it for

    P_J(0) = ( 1 − e^{−2^{−J}b − 2^{−J}ia} ) / ( 2^{−J}ia + 2^{−J}b ).

This can be done by using standard properties of exponentials and their Taylor expansions. Finally, let us note that the discrete approximations g_J based on (4.9) correspond to ARMA time series X_J (Brockwell and Davis).

5 Adaptive wavelet decompositions

We first reestablish the decomposition (1.3) of Zhang and Walter (1994) by providing a more rigorous proof.

Theorem 5.1 (Zhang and Walter 1994) Let X be a Gaussian stationary process given by (1.1). Suppose that Assumptions 1 and 2 of Section 3 hold. Then, with the notation of Section 1, the process X admits the following wavelet-based decomposition: for any J ∈ ℤ,

    X(t) = Σ_n a_{J,n} θ_J(t − 2^{−J}n) + Σ_{j=J}^∞ Σ_n d_{j,n} Ψ_j(t − 2^{−j}n)   (5.1)
         = Σ_{j=−∞}^∞ Σ_n d_{j,n} Ψ_j(t − 2^{−j}n),   (5.2)

with the convergence in the L²(Ω)-sense for each t, and independent N(0,1) random variables a_{J,n}, d_{j,n} that are expressed through (1.5).

Proof: Under Assumptions 1 and 2, the basis functions θ_J and Ψ_j in (5.1) and (5.2) are well-defined pointwise (Section 3). The coefficients a_{j,n}, d_{j,n} are well-defined, independent N(0,1) random variables (Section 3). Except for more rigor, the rest of the proof follows that of Zhang and Walter (1994). Since the proof is short, we provide it for the reader's convenience. Observe that

    E( X(t) − Σ_{n=−N₁}^{N₂} a_{J,n} θ_J(t − 2^{−J}n) − Σ_{j=J}^K Σ_{n=−M₁}^{M₂} d_{j,n} Ψ_j(t − 2^{−j}n) )²
    = E X(t)² − 2 Σ_{n=−N₁}^{N₂} E(X(t)a_{J,n}) θ_J(t − 2^{−J}n) − 2 Σ_{j=J}^K Σ_{n=−M₁}^{M₂} E(X(t)d_{j,n}) Ψ_j(t − 2^{−j}n)
      + Σ_{n=−N₁}^{N₂} θ_J(t − 2^{−J}n)² + Σ_{j=J}^K Σ_{n=−M₁}^{M₂} Ψ_j(t − 2^{−j}n)².   (5.3)

By using Appendix A and the definition of the function θ̃_J (Sections 1 and 3), we have

    E(X(t)a_{J,n}) = E( X(t) ∫_ℝ X(s) θ̃_J(s − 2^{−J}n) ds ) = (1/2π) ∫_ℝ e^{i(t−2^{−J}n)x} |ĝ(x)|² ĝ(x)^{−1} 2^{−J/2} φ̂(2^{−J}x) dx = (1/2π) ∫_ℝ e^{i(t−2^{−J}n)x} ĝ(x) 2^{−J/2} φ̂(2^{−J}x) dx = θ_J(t − 2^{−J}n).   (5.4)

11 Similarly, we have EXtd j,n = Ψ j t 2 j n. 5.5 Using 5.4, 5.5 and independence of a J,n, d j,n, relation 5.3 becomes 0 N 2 Observe from the definition of θ J that = 1 and similarly n= N 1 θ J t 2 J n 2 θ J t 2 J n = 1 K j=j M 2 ĝx2 J/2 φ2 J + t nxdx = 1 n= M 1 Ψ j t 2 j n e it 2 J nx 2 J/2 ĝx φ J 2 J xdx gu2 J/2 φn 2 J u + tdu, 5.7 Ψ j t 2 j n = 1 gu2 j/2 ψn 2 j u + tdu. 5.8 Since the collection of functions 2 J/2 φn 2 J u + t, 2 j/2 φn 2 j u + t, j J, n Z, makes an orthonormal basis of L 2 for any t, and since 0 = gt 2 dt, we obtain from 5.7 and 5.8 that relation 5.6 converges to 0 as N i, M i i = 1, 2 and K approach infinity. In the next result, we modify the approximation term in the decomposition 5.1 according to 1.7. Theorem 5.2 Let X be a Gaussian stationary process given by 1.1. Suppose that Assumptions 1 and 2 of Section 3 hold. Then, with the notation of Section 1, the process X admits the following wavelet-based decomposition: for any J Z, Xt = X J,n Φ J t 2 J n + j=j d j,n Ψ j t 2 j n. 5.9 The convergence in 5.9 is in the L 2 Ω-sense for each t under Assumption 3, and it is almost sure, uniform over compact intervals of t under Assumptions 3 and 4. The sequence X J = {X J,n } n Z is defined by either 1.11 or 3.1. Proof: We first argue that the definitions 1.11 and 3.1 of X j are equivalent. By using Appendix A, observe that, for X J,n defined by 1.11 and a J,n defined by 1.5, E X J,n N 2 k= N 1 g J,k a J,n k = 1 2 = E ĝ J 2 J x Xt Φ J t 2 J n N 2 k= N 1 g J,k θ J t 2 J n k N 2 g J,k e ix2 J k 2 2 J φ2 J x 2 dx 0, k= N 1 as N i i = 1, 2, since k g J,ke ixk converges to ĝ J x in L 2 π, π and φ has a compact support. dt 2 11

To show (5.9), we start with (5.1) and modify its first sum as in (1.7), that is,

    Σ_n a_{J,n} θ_J(t − 2^{−J}n) = Σ_n X_{J,n} Φ_J(t − 2^{−J}n).   (5.10)

We first show that, under Assumption 3, the r.h.s. converges in the L²(Ω)-sense for fixed t and, under Assumptions 3 and 4, almost surely, uniformly over compacts of t. Observe that, by Lemma 3.1, |Φ_J(t − 2^{−J}n)| ≤ C/(1 + (t − 2^{−J}n)²) and, by Lemma 3 in Meyer et al. (1999), |X_{J,n}| ≤ A log(2 + |n|) a.s., where the random variable A does not depend on n. The almost sure convergence, uniformly on compacts t ∈ K, follows since

    sup_{t∈K} Σ_n |X_{J,n}| |Φ_J(t − 2^{−J}n)| ≤ A sup_{t∈K} Σ_n log(2 + |n|)/(1 + (t − 2^{−J}n)²) < ∞ a.s.

For the convergence in L²(Ω), observe that, for fixed t,

    E( Σ_{|n|>N} X_{J,n} Φ_J(t − 2^{−J}n) )² ≤ C E( Σ_{|n|>N} |X_{J,n}|/(1 + (t − 2^{−J}n)²) )² → 0, as N → ∞.

We shall now prove the equality in (5.10). Observe that, for each u,

    θ_J(u) = Σ_{k=−∞}^∞ g_{J,k} Φ_J(u − 2^{−J}k).   (5.11)

Indeed, arguing as above,

    F_m(u) = Σ_{k=−m}^m g_{J,k} Φ_J(u − 2^{−J}k) → F(u) = Σ_{k=−∞}^∞ g_{J,k} Φ_J(u − 2^{−J}k)

pointwise, and

    F̂_m(x) = ( Σ_{k=−m}^m g_{J,k} e^{−i2^{−J}kx} ) ( ĝ(x)/ĝ_J(2^{−J}x) ) 2^{−J/2} φ̂(2^{−J}x) → θ̂_J(x)   (5.13)

in L²(ℝ), since Σ_{k=−m}^m g_{J,k} e^{−ikx} converges to ĝ_J(x) in L²(−π, π), and ĝ(x)/ĝ_J(2^{−J}x) is bounded, by Assumption 3, on the compact support of φ̂(2^{−J}x). Hence, F_m → θ_J in L²(ℝ) and θ_J = F a.e. Since both F and θ_J are continuous, we obtain (5.11). Set now, for m ≥ 1,

    a^m_{J,n} = a_{J,n} if |n| ≤ m, a^m_{J,n} = 0 if |n| > m;   X^m_{J,n} = Σ_{k=−∞}^∞ g_{J,k} a^m_{J,n−k}.

By using (5.11), we obtain that

    Σ_n a^m_{J,n} θ_J(t − 2^{−J}n) = Σ_n X^m_{J,n} Φ_J(t − 2^{−J}n).   (5.14)

The L.H.S. of (5.14) converges in L²(Ω) to the L.H.S. of (5.10) and, in fact, also almost surely by the Three Series Theorem. Let us show that the r.h.s. of (5.14) converges to the r.h.s. of (5.10). We want to argue next that

    sup_{m≥1} |X^m_{J,n}| ≤ A log(2 + |n|) a.s.   (5.15)

for a random variable A which only depends on J. By using the Lévy-Octaviani inequality (see Kwapień and Woyczyński 1992), we have

    P( sup_{m=1,...,M} |X^m_{J,n}| > a ) ≤ 2 P( |X^M_{J,n}| > a )   (5.16)

for any a > 0 and M ≥ 1. By the Three Series Theorem, X^M_{J,n} → X_{J,n} almost surely, as M → ∞. Hence, passing to the limit with M in (5.16), we have

    P( sup_{m≥1} |X^m_{J,n}| > a ) ≤ 2 P( |X_{J,n}| > a ).

The bound (5.15) now follows as in the proof of Lemma 3 in Meyer et al. (1999). By using Lemma 3.1 and the bound (5.15), the r.h.s. of (5.14) converges a.s. to the r.h.s. of (5.10). It is left to show that the second term in (5.9) converges almost surely and uniformly on compacts. By Lemma 3 in Meyer et al. (1999), |d_{j,n}| ≤ A log(2 + j) log(2 + |n|) a.s., where the random variable A does not depend on j, n. By Lemma 3.1, we have

    |Ψ_j(t − 2^{−j}n)| = |Ψ_j(2^{−j}(2^j t − n))| ≤ C 2^{−j/2}/(1 + (2^j t − n)²), for j ≥ J.

Then, as in the proof of Theorem 2 in Meyer et al. (1999),

    Σ_{j=J}^∞ Σ_n |d_{j,n}| |Ψ_j(t − 2^{−j}n)| ≤ A Σ_{j=J}^∞ 2^{−j/2} log(2 + j) Σ_n log(2 + |n|)/(1 + (2^j t − n)²) ≤ A′ Σ_{j=J}^∞ 2^{−j/2} log(2 + j) log(2 + 2^j |t|) < ∞

a.s. uniformly over compact intervals of t.

6 FWT-like algorithm

We show here that the discrete approximation sequences X_j are related across different scales by a FWT-like algorithm.

Proposition 6.1 Let X_j and d_j be the sequences appearing in (5.9), and let u and v denote the CMFs associated with the orthogonal Meyer MRA. Then, under Assumptions 1-4 of Section 3:

(i) Reconstruction step:

    X_{j+1} = u_j ⋆ (↑2 X_j) + v_j ⋆ (↑2 d_j),   (6.1)

14 where the filters u j and v j are defined through their discrete Fourier transforms ii Decomposition step û j x = ĝj+1x ĝ j 2x ûx, v jx = ĝ j+1 x vx; 6.2 X j = 2 u d j X j+1, d j = 2 v d j X j+1, 6.3 where x stand for the time reversal of a sequence x, and the filters u d j and vd j are defined through their discrete Fourier transforms by û d ĝ j 2x j x = ûx, v d 1 j x = vx. 6.4 ĝ j+1 x ĝ j+1 x The convergence in 6.1 and 6.3 is in the L 2 Ω-sense, and also absolute almost surely. Proof: Observe first that the filters u j, v j, u d j, vd j are well-defined since û j, v j, û d j, vd j L2 π, π. The latter follows by writing 1ûx, û j x = G j+1 x G j 2x vj x = G j+1 xĝ2 j+1 x vx, û d j x = G j+1 x 1 G j 2xûx, v d j x = G j+1 x 1 ĝ2 j+1 x 1 vx 6.5 see 3.3, and using Assumptions 1 and 3. i To show 6.1, we need to prove X j+1,n = k= X j,k u j,n 2k + k= d j,k v j,n 2k. 6.6 We first prove the convergence in 6.6 in the L 2 Ω-sense. Observe that, by using 1.11, 1.5 and Appendix A, K 2 E X j+1,n X j,k u j,n 2k + d j,k v j,n 2k = E Xt Φ j+1 t 2 j 1 n = 1 k= K K k= K ĝx 2 e i2 j 1 nx Φj+1 x Φ j x = 1 Φ j t 2 j ku j,n 2k K k= K K k= K e i2 j kx u j,n 2k Ψ j x Ψ j t 2 j kv j,n 2k dt K k= K 2 e i2 j kx v j,n 2k 2 dx e i2 j 1 nxĝ j+1 2 j 1 x2 j+1/2 φ2 j 1 x ĝ j 2 j x2 j/2 φ2 j x K k= K Hence, it is sufficient to prove that ĝ j 2 j x2 j/2 φ2 j x e i2 j kx u j,n 2k 2 j/2 ψ2 j x k= K k= K e i2 j kx u j,n 2k + 2 j/2 ψ2 j x 14 e i2 j kx v j,n 2k 2 dx. k= e i2 j kx v j,n 2k

15 = e i2 j 1 nxĝ j+1 2 j 1 x2 j+1/2 φ2 j 1 x 6.7 with the convergence in L 2. We only consider the case n = 2p the case n = 2p + 1 may be treated in an analogous fashion. Then, relation 6.7 becomes ĝ j 2 j x 2 j/2 φ2 j x m= e i2 j mx u j,2m + 2 j/2 ψ2 j x m= e i2 j mx v j,2m The L.H.S. of 6.8 is = ĝ j+1 2 j 1 x 2 j+1/2 φ2 j 1 x. 6.8 ĝ j 2 j x 2 j/2 φ2 j xûj2 j 1 x + û j 2 j 1 x + π +2 j/2 ψ2 j x v j2 j 1 x + v j 2 j 1 x + π 2 2 = j/2 φ2 j x ĝ j+1 2 j 1 x û2 j 1 x + ĝ j+1 2 j 1 x + π û2 j 1 x + π j/2 ψ2 j x ĝ j+1 2 j 1 x v2 j 1 x + ĝ j+1 2 j 1 x + π v2 j 1 x + π = ĝ j+1 2 j 1 x 2 j/2 φ2 j x + û2 j 1 x + π xû2 j j/2 ψ2 j x v2 j 1 x + v2 j 1 x + π j/2 φ2 j xû2 j 1 x + π + ψ2 j x v2 j 1 x + π ĝ j+1 2 j 1 x + π ĝ j+1 2 j 1 x. This is also.h.s. of 6.8 since 2 j/2 φ2 j xû2 j 1 x + û2 j 1 x + π j/2 ψ2 j x v2 j 1 x + v2 j 1 x + π 2 = 2 j+1/2 φ2 j 1 x this is the Fourier transform of the last relation in the proof of Theorem 7.7 in Mallat 1998 and, with y = 2 j 1 x, φ2 j xû2 j 1 x + π + ψ2 j x v2 j 1 x + π = φ2yûy + π + ψ2y vy + π = 2 1/2 φy ûyûy + π + vy vy + π = 0, where we used the facts φ2y = 2 1/2 φyûy 7.30 in Mallat 1998, ψ2y = 2 1/2 φy vy 7.57 in Mallat 1998 and ûyûy + π + vy vy + π = 0 Theorem 7.8 in Mallat We now show that the convergence in 6.6 is also absolute almost surely. By using Assumptions 3, 4, and integration by parts twice, we may conclude that u j,k, v j,k C1 + k 2 1, k Z. By Lemma 3 in Meyer et al. 1999, X j,k, d j,k A log2 + k a.s., where a random variable A does not depend on k. The absolute convergence a.s. now follows. ii The proof of 6.3 follows by similar arguments. We need to prove that X j,n = k= X j+1,k u d j,k 2n

and

    d_{j,n} = Σ_{k=−∞}^∞ X_{j+1,k} v^d_{j,k−2n}.   (6.10)

To show (6.9) with convergence in the L²(Ω)-sense, it suffices to prove that

    e^{−i2^{−j}nx} ĝ_j(2^{−j}x) 2^{−j/2} φ̂(2^{−j}x) = ĝ_{j+1}(2^{−j−1}x) 2^{−(j+1)/2} φ̂(2^{−j−1}x) Σ_{k=−∞}^∞ e^{−ik2^{−j−1}x} u^d_{j,k−2n}.   (6.11)

But the r.h.s. of (6.11) is

    ĝ_{j+1}(2^{−j−1}x) 2^{−(j+1)/2} φ̂(2^{−j−1}x) Σ_{m=−∞}^∞ e^{−i(2n+m)2^{−j−1}x} u^d_{j,m} = ĝ_{j+1}(2^{−j−1}x) 2^{−(j+1)/2} φ̂(2^{−j−1}x) e^{−in2^{−j}x} û^d_j(2^{−j−1}x) = 2^{−(j+1)/2} φ̂(2^{−j−1}x) e^{−in2^{−j}x} ĝ_j(2^{−j}x) û(2^{−j−1}x),   (6.12)

which is also the L.H.S. of (6.11) by using φ̂(2y) = 2^{−1/2} φ̂(y) û(y). The proof of the equality (6.10) in the L²(Ω)-sense is similar. The absolute almost sure convergence of (6.9) and (6.10) may be deduced by arguments analogous to those for the absolute almost sure convergence of (6.6).

7 Convergence of random discrete approximations

We will also assume the following:

Assumption 6: Suppose that there are β ∈ ℕ ∪ {0} and α ∈ (0, 1] such that, for any compact interval K,

    | X(t) − X(s) − X′(s)(t − s) − ... − X^{(β)}(s)(t − s)^β/β! | ≤ A |t − s|^{β+α} for all t, s ∈ K, a.s.,   (7.1)

where a random variable A depends only on K. As usual, f^{(k)} denotes the kth derivative of f. Note that (7.1) implies, for some random variable B,

    |X(t) − X(s)| ≤ B |t − s|^γ for all t, s ∈ K, with γ = 1 ∧ (β + α).   (7.2)

Condition (7.1) in Assumption 6 is satisfied by many Gaussian stationary processes. It follows, in particular, from the two conditions:

    X^{(β)} is α-Hölder a.s.   (7.3)

and

    |X(t)| ≤ C (1 + |t|)^{β+α} a.s.   (7.4)

By a theorem and a discussion in Cramér and Leadbetter (1967), (7.3) follows from

    ∫_0^∞ x^{2β+2α} log(1 + x) |ĝ(x)|² dx < ∞.   (7.5)
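As a worked instance of (7.5), take the OU process of Example 4.1, for which |ĝ(x)|² = σ²/(λ² + x²). With β = 0, (7.5) becomes

```latex
\int_0^\infty x^{2\alpha} \log(1+x)\, \frac{\sigma^2}{\lambda^2 + x^2}\, dx < \infty ,
```

and the integrand behaves like x^{2α−2} log x at infinity, so the condition holds exactly when 2α − 2 < −1, that is, α < 1/2. Hence OU sample paths are α-Hölder for every α < 1/2, and Assumption 6 holds with β = 0 and any such α, consistent with γ = 1 ∧ (β + α) in (7.2).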

17 There is also an equivalent condition in terms of the autocovariance function of a stationary Gaussian process. Condition 7.4 is always satisfied for stationary Gaussian processes that are bounded on compact intervals, such as for those satisfying 7.3. In fact, a stronger condition holds: Xt C log2 + t a.s., 7.6 where C is a random variable. To see this, note that the discrete-time sequence X k = sup t [k,k+1 Xt is stationary. Moreover, by Theorem 2 in Lifshits 1995, p. 142, for some m and σ > 0, P X 0 m + τ 21 Φτ/σ, τ > 0, where Φ is the distribution function of standard normal law. In other words, the right tail of the distribution function of X n decays at least as fast as that of the distribution function of standard normal law. The bound 7.6 can then be obtained as 3.15 in Lemma 3 of Meyer et al We shall need some assumptions stronger than parts of Assumptions 3 and 5. Assumption 3*: Suppose that, for any J 0 Z, max sup k=0,1,...,β+[α]+2 Assumption 5*: Assume that, for large J, sup J J 0 x 4π/3 k G J x x k <. 7.7 G J 0 1 const 2 β+1j. 7.8 As in Lemma 3.1, under Assumption 3*, we have 2 j/2 Φ j 2 j u where a constant C does not depend on j j 0, for fixed j 0. In addition, we will suppose the following: Assumption 7: If β 1 in Assumption 6, suppose that G n J C, u, u β+[α]+2 0 = n G J 0 = 0, n = 1,..., β xn The next result establishes convergence of random discrete approximations. Proposition 7.1 Under Assumptions 2,3,5 of Section 3 and Assumption 6 above, we have sup 2 J/2 X J,[2 J t] Xt A 1 2 Jγ a.s., 7.11 t K where K is a compact interval and A 1 is a random variable that does not depend on J. If, in addition, Assumptions 3*,5* and 7 above hold, then sup 2 J/2 X J,[2 J t] X[2 J t]2 J A 2 2 Jβ+α a.s., 7.12 t K where a random variable A 2 does not depend on J. 17

18 Proof: Suppose without loss of generality that K = [0, 1]. In view of Assumption 6, it is enough to show 7.12 or that 2 J/2 X J,k Xk2 J A 2 Jβ+α a.s. sup k=0,...,2 J Note by Assumption 7 and the properties of the scaling function φ that u n n n Φ J udu = i Φn J 0 = i n x n G J x2 J/2 φ2 x J x=0 = 0, for n = 1,..., β. By using Appendix A, Assumptions 5* and 6 with 7.9, we have 2 J/2 X J,k Xk2 J 2 J/2 Xt Xk2 J... X β k2 J t k2 J β Φ J t k2 J dt β! +Xk2 J G J 0 1 A2 J/2 t k2 J β+α Φ J t k2 J dt + B2 β+1j = A 2 Jβ+α = A2 J/2 u β+α Φ J u du + B2 β+1j setting u = 2 J v v β+α 2 J/2 Φ J 2 J v dv A 2 Jβ+α v β+α 1 + v β+[α]+2 dv = A 2 Jβ+α. According to Proposition 7.1, the discrete approximations 2 J/2 X J,[2 J t] converge to the process Xt. Note also that, when β 1, the convergence is faster on the dyadics than on the whole interval. An interesting question is whether the faster convergence rate β + α can be obtained on an interval for some other approximation based on X J,[2 J t]. For a function f, defined on either or Z, consider the operator p h fa = p k=0 p 1 p k fa + kh, p N, k where a, h are in either or Z, respectively. When f = f k is a function on Z, we write p f k = p 1 fk and p f = p 1 f0. In view of the condition 7.1, to obtain the faster rate β + α on a whole interval, it is natural to try an approximation which includes the terms mimicking the β derivatives in 7.1. Thus, for β 1, consider the approximations X β,j t = 2 J/2 X J,[2 J t] + 2 J/2 β p X J,[2 J t] t [2 J t]2 J p 2 Jp, 7.13 p! with the idea that 2 J/2 p X J,[2 J t] X p t2 Jp for large J. For example, when β = 1, p=1 X 1,J t = 2 J/2 X J,[2 J t] + 2 J/2 X J,[2 J t]+1 X J,[2 J t] 2 J t [2 J t]2 J. When β = 0, we get X 0,J t = 2 J/2 X J,[2 J t]. 18
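For orientation, the operator Δ^p_h above is, in the standard forward-difference reading of its definition, Δ^p_h f(a) = Σ_{k=0}^p C(p,k) (−1)^{p−k} f(a + kh). A small illustrative Python sketch of the two properties used here: it annihilates polynomials of degree below p, and h^{−p} Δ^p_h approximates the p-th derivative (exactly so for polynomials of degree p).

```python
from math import comb

# p-th order forward difference:
#   Delta_h^p f(a) = sum_{k=0}^p (-1)^(p-k) C(p,k) f(a + k*h).
def forward_diff(f, a, h, p):
    return sum((-1) ** (p - k) * comb(p, k) * f(a + k * h)
               for k in range(p + 1))

# It annihilates polynomials of degree < p ...
assert forward_diff(lambda t: 3 * t + 1, a=0.0, h=0.5, p=2) == 0.0

# ... and recovers p-th derivatives: for f(t) = t^3 the third
# difference is exactly Delta_h^3 f(a) = 6 h^3 = f'''(a) h^3.
h = 0.25
val = forward_diff(lambda t: t ** 3, a=1.0, h=h, p=3)
assert abs(val - 6 * h ** 3) < 1e-12
```

This is precisely the mechanism behind 2^{−J/2} Δ^p X_{J,[2^J t]} ≈ X^{(p)}(t) 2^{−Jp} in (7.13).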

Although intuitive, the approximation $\widetilde X_{\beta,J}$ in (7.13) may not converge to $X(t)$ at the faster rate $\beta+\alpha$ on compact intervals (see Remark 7.1 below). It turns out, though, that a modification of (7.13) does attain that rate. In order to build such an approximation, we will make use of Lemma 7.1 below, stated without proof. Observe from (7.12) that replacing $2^{J/2}\Delta^p X_{J,[2^Jt]}$ by $\Delta^p X([2^Jt]2^{-J})$ in the approximation (7.13) makes an error of the desired faster rate $\beta+\alpha$. Lemma 7.1 shows that, after a suitable correction, $\Delta^p X([2^Jt]2^{-J})/2^{-Jp}$ approximates $X^{(p)}([2^Jt]2^{-J})$ and then $X^{(p)}(t)$ at the desired rate. This correction needs to be taken into account when considering a modification of $\widetilde X_{\beta,J}$.

For any $x \in \mathbb{R}$, define the function $s_x$ on $\mathbb{Z}$ by $s_x(k) = x+k$, $k \in \mathbb{Z}$. Note that

$$\Delta^p s_0^{j}(0) = \sum_{k=0}^{p} (-1)^{p-k} \binom{p}{k}\, k^{j}, \quad j \in \mathbb{N}$$

(recall from above that $\Delta^p s_0^j(0) = \Delta_1^p s_0^j(0)$).

Lemma 7.1 Let $\beta \in \mathbb{N}$, $\alpha \in (0,1)$ and $G \subset \mathbb{R}$ be an open interval. If $f: G \to \mathbb{R}$ is a Lipschitz function of order $\beta+\alpha$ in the sense of (7.1), then, for $\mathbb{N} \ni p \le \beta$, $a \in G$, we have

$$\frac{\Delta_h^p f(a)}{h^p} - f^{(p)}(a) = \sum_{j=1}^{\beta-p} \frac{f^{(p+j)}(a)}{(p+j)!}\, h^{j}\, \Delta^p s_0^{p+j}(0) + O(h^{\beta+\alpha-p}), \tag{7.14}$$

as $h \to 0$.

Define the approximation function $\bar X_{\beta,J}(t)$ by

$$\bar X_{\beta,J}(t) = \bar X_{0,J} + \sum_{p=1}^{\beta} \frac{\bar X_{p,J}}{p!}\, \big( t - [2^Jt]2^{-J} \big)^p, \tag{7.15}$$

where

$$\bar X_{0,J} := 2^{J/2} X_{J,[2^Jt]}, \qquad \bar X_{\beta,J} := 2^{J/2}\, \frac{\Delta^{\beta} X_{J,[2^Jt]}}{2^{-J\beta}}$$

and

$$\bar X_{p,J} := 2^{J/2}\, \frac{\Delta^{p} X_{J,[2^Jt]}}{2^{-Jp}} - \sum_{j=1}^{\beta-p} \frac{\bar X_{p+j,J}}{(p+j)!}\, 2^{-Jj}\, \Delta^p s_0^{p+j}(0), \quad p = 1,2,\dots,\beta-1.$$

Proposition 7.2 Under the stronger assumptions of Proposition 7.1, we have

$$\sup_{t \in K} \big| \bar X_{\beta,J}(t) - X(t) \big| \le A\, 2^{-J(\beta+\alpha)} \quad \text{a.s.}, \tag{7.16}$$

where $K$ is a compact interval and $A$ is a random variable that does not depend on $J$.

Proof: If the relation

$$\bar X_{p,J} - X^{(p)}([2^Jt]2^{-J}) = O(2^{-J(\beta+\alpha-p)}) \tag{7.17}$$

holds for $p = 0,1,2,\dots,\beta$, then, by Assumption 6,

$$\big| \bar X_{\beta,J}(t) - X(t) \big| \le \Big| \bar X_{\beta,J}(t) - X([2^Jt]2^{-J}) - \sum_{p=1}^{\beta} \frac{X^{(p)}([2^Jt]2^{-J})}{p!}\, (t-[2^Jt]2^{-J})^p \Big|$$

$$\qquad + \Big| X(t) - X([2^Jt]2^{-J}) - \sum_{p=1}^{\beta} \frac{X^{(p)}([2^Jt]2^{-J})}{p!}\, (t-[2^Jt]2^{-J})^p \Big| = O(2^{-J(\beta+\alpha)}),$$

which proves (7.16). Relation (7.17) holds for $p = 0$ by Proposition 7.1. To show (7.17) for $\beta \ge 1$, we argue by backward induction. For $p = \beta$, by Proposition 7.1 and Lemma 7.1, we have

$$\big| \bar X_{\beta,J} - X^{(\beta)}([2^Jt]2^{-J}) \big| \le \Big| 2^{J/2}\, \frac{\Delta^{\beta} X_{J,[2^Jt]}}{2^{-J\beta}} - \frac{\Delta^{\beta} X([2^Jt]2^{-J})}{2^{-J\beta}} \Big| + \Big| \frac{\Delta^{\beta} X([2^Jt]2^{-J})}{2^{-J\beta}} - X^{(\beta)}([2^Jt]2^{-J}) \Big| = O(2^{-J(\beta+\alpha-\beta)}).$$

Assume by induction that (7.17) holds for $p+1,\dots,\beta-1,\beta$ with $p \ge 1$. Then, by Proposition 7.1 and Lemma 7.1, we obtain that

$$\big| \bar X_{p,J} - X^{(p)}([2^Jt]2^{-J}) \big| \le \Big| 2^{J/2}\, \frac{\Delta^{p} X_{J,[2^Jt]}}{2^{-Jp}} - \frac{\Delta^{p} X([2^Jt]2^{-J})}{2^{-Jp}} \Big|$$
$$\qquad + \Big| \frac{\Delta^{p} X([2^Jt]2^{-J})}{2^{-Jp}} - X^{(p)}([2^Jt]2^{-J}) - \sum_{j=1}^{\beta-p} \frac{X^{(p+j)}([2^Jt]2^{-J})}{(p+j)!}\, 2^{-Jj}\, \Delta^p s_0^{p+j}(0) \Big|$$
$$\qquad + \sum_{j=1}^{\beta-p} \frac{\big| \bar X_{p+j,J} - X^{(p+j)}([2^Jt]2^{-J}) \big|}{(p+j)!}\, 2^{-Jj}\, \big| \Delta^p s_0^{p+j}(0) \big| = O(2^{-J(\beta+\alpha-p)}).$$

Remark 7.1 When $\beta = 2$, the approximation $\bar X_{\beta,J}$ becomes

$$\bar X_{2,J}(t) = 2^{J/2} X_{J,[2^Jt]} + 2^{J/2} \Big( \frac{X_{J,[2^Jt]+1} - X_{J,[2^Jt]}}{2^{-J}} - \frac{X_{J,[2^Jt]+2} - 2X_{J,[2^Jt]+1} + X_{J,[2^Jt]}}{2 \cdot 2^{-J}} \Big)\, (t-[2^Jt]2^{-J})$$
$$\qquad + 2^{J/2}\, \frac{X_{J,[2^Jt]+2} - 2X_{J,[2^Jt]+1} + X_{J,[2^Jt]}}{2^{-2J}}\, \frac{(t-[2^Jt]2^{-J})^2}{2}. \tag{7.18}$$

Compare (7.18) with the approximation $\widetilde X_{2,J}$ given in (7.13). Observe that, if $\widetilde X_{2,J}$ also converged to $X$ at the rate $2+\alpha$, then

$$O(2^{-J(2+\alpha)}) = \widetilde X_{2,J}(t) - \bar X_{2,J}(t) = 2^{J/2}\, \frac{X_{J,[2^Jt]+2} - 2X_{J,[2^Jt]+1} + X_{J,[2^Jt]}}{2 \cdot 2^{-J}}\, (t-[2^Jt]2^{-J})$$

or, by using (7.12) in Proposition 7.1,

$$O(2^{-J(2+\alpha)}) = \frac{X(([2^Jt]+2)2^{-J}) - 2X(([2^Jt]+1)2^{-J}) + X([2^Jt]2^{-J})}{2 \cdot 2^{-J}}\, (t-[2^Jt]2^{-J})$$

or, by using Taylor expansions,

$$O(2^{-J(2+\alpha)}) = \tfrac12\, \big( 2X''(t_1) - X''(t_2) \big)\, 2^{-J}\, (t-[2^Jt]2^{-J}),$$

with $t_1 = t_1(J)$ and $t_2 = t_2(J)$ that are close to $t$. The last relation may not be satisfied under our assumptions, showing that one cannot expect $\widetilde X_{2,J}$ to converge to $X$ at the rate $2+\alpha$.
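The content of Lemma 7.1 can be verified numerically for a smooth test function: subtracting the correction terms involving $\Delta^p s^{p+j}(0)$ from the normalized difference $\Delta_h^p f(a)/h^p$ leaves a residual of higher order. A sketch with $f = \sin$, $p = 1$, $\beta = 3$ (the names and the particular test point are ours):

```python
import math
from math import comb, factorial

def delta_s(p, j):
    # Delta^p s^{p+j}(0) = sum_{k=0}^p (-1)^{p-k} C(p,k) k^{p+j}
    return sum((-1) ** (p - k) * comb(p, k) * k ** (p + j) for k in range(p + 1))

# f = sin and its derivatives f', f'', f'''
derivs = [math.sin, math.cos, lambda x: -math.sin(x), lambda x: -math.cos(x)]
a, p, beta = 0.3, 1, 3

def corrected(h):
    # Delta_h^1 f(a) / h minus the correction terms of (7.14)
    d = (derivs[0](a + h) - derivs[0](a)) / h
    for j in range(1, beta - p + 1):
        d -= derivs[p + j](a) / factorial(p + j) * h ** j * delta_s(p, j)
    return d

# After the correction, the residual decays like h^3 (f is smooth here,
# so the remainder is even one order better than the O(h^{beta+alpha-p}) of the lemma):
for h in (1e-1, 1e-2):
    assert abs(corrected(h) - derivs[p](a)) < 2 * h ** 3
```

Without the corrections, the plain difference quotient only achieves an $O(h)$ error, which is exactly the obstruction discussed in Remark 7.1.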

Although the approximations $\bar X_{\beta,J}$ converge to $X$ at the faster rate $\beta+\alpha$, these approximations do not necessarily have continuous paths. Indeed, it can easily be verified that $\bar X_{\beta,J}$ is continuous when $\beta = 1, 2$ but not so when $\beta = 3$. For a fixed $\beta \ge 2$, it may be desirable to have not only a continuous but also a $C^{\beta-1}$ approximation. Moreover, in analogy to (7.15), in order to have the faster convergence, we would expect the $p$-th derivative of the approximation at $[2^Jt]2^{-J}$ to approximate the $p$-th derivative of the process $X$ at $t$. We generally found such $C^{\beta-1}$ approximations difficult to construct. One difficulty is the following. As in (7.15), we may seek an approximation which is a polynomial of order $\beta$ on each interval $[\,[2^Jt]2^{-J}, ([2^Jt]+1)2^{-J}\,)$. Since the approximation is globally $C^{\beta-1}$, we would require its derivatives of orders $p = 0,1,\dots,\beta-1$ to be equal to prescribed values at the endpoints $[2^Jt]2^{-J}$ and $([2^Jt]+1)2^{-J}$. Requiring this yields $2\beta$ equations that the polynomial must satisfy. Since a polynomial of order $\beta$ has only $\beta+1$ coefficients, this is not possible in general.

Despite this difficulty, we have found the following general scheme to yield $C^{\beta-1}$ approximations, at least for the first several values of $\beta \ge 2$. To construct a $C^1$ approximation $\check X_{2,J}$, we could require first that its derivative satisfy

$$2^{-J/2}\, \check X_{2,J}'(t) = \frac{\Delta X_{J,[2^Jt]}}{2^{-J}} + \Big( \frac{\Delta X_{J,[2^Jt]+1} - \Delta X_{J,[2^Jt]}}{2^{-J}} \Big)\, \frac{t-[2^Jt]2^{-J}}{2^{-J}} = \frac{\Delta X_{J,[2^Jt]}}{2^{-J}} + \frac{\Delta^2 X_{J,[2^Jt]}}{2^{-2J}}\, (t-[2^Jt]2^{-J}), \tag{7.19}$$

that is, the continuous approximation $\widetilde X_{1,J}$ based on the sequence $\Delta X_{J,[2^Jt]}/2^{-J}$. Observe that, by construction using the continuous approximation $\widetilde X_{1,J}$, the derivative $\check X_{2,J}'$ is continuous. Moreover, $\check X_{2,J}'$ approximates $X'(t)$, and $\check X_{2,J}''$ on the interval $[\,[2^Jt]2^{-J}, ([2^Jt]+1)2^{-J}\,)$ approximates $X''(t)$. Integrating (7.19) and requiring the result to be continuous yields the following approximation:

$$2^{-J/2}\, \check X_{2,J}(t) = \frac{X_{J,[2^Jt]+1} + X_{J,[2^Jt]}}{2} + \frac{\Delta X_{J,[2^Jt]}}{2^{-J}}\, (t-[2^Jt]2^{-J}) + \frac{\Delta^2 X_{J,[2^Jt]}}{2 \cdot 2^{-2J}}\, (t-[2^Jt]2^{-J})^2. \tag{7.20}$$

Note that $\check X_{2,J}$ differs from $\widetilde X_{2,J}$ by the constant term.
Similarly, to construct a $C^2$ approximation $\check X_{3,J}$, we could require that $\check X_{3,J}'(t)$ be the approximation $\check X_{2,J}(t)$ based on the sequence $\Delta X_{J,[2^Jt]}/2^{-J}$. Integrating the resulting expression and requiring it to be continuous yields

$$2^{-J/2}\, \check X_{3,J}(t) = \frac16 X_{J,[2^Jt]+2} + \frac23 X_{J,[2^Jt]+1} + \frac16 X_{J,[2^Jt]} + \frac{\Delta_2 X_{J,[2^Jt]}}{2 \cdot 2^{-J}}\, (t-[2^Jt]2^{-J}) + \frac{\Delta^2 X_{J,[2^Jt]}}{2 \cdot 2^{-2J}}\, (t-[2^Jt]2^{-J})^2 + \frac{\Delta^3 X_{J,[2^Jt]}}{6 \cdot 2^{-3J}}\, (t-[2^Jt]2^{-J})^3 \tag{7.21}$$

(the subindex 2 in $\Delta_2 X_{J,[2^Jt]} = X_{J,[2^Jt]+2} - X_{J,[2^Jt]}$ is not a typo). The approximation (7.21) is $C^2$ and its derivatives of orders $p = 0,1,2,3$ approximate those of the process $X$.
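The continuity claims behind the construction (7.19)-(7.20) are easy to check numerically: the piecewise quadratic built from a sample sequence matches in value and first derivative at every interior knot. A minimal sketch on the integer grid ($J = 0$), with an arbitrary stand-in sequence:

```python
# Stand-in for the sequence X_{J,k} (J = 0, so the knots are the integers)
x = [0.0, 1.0, -0.5, 2.0, 1.5, 0.3, -1.0, 0.8]

def piece(k, s):
    # Quadratic piece of (7.20) on [k, k+1), written in s = t - k in [0, 1):
    # (x_{k+1} + x_k)/2 + (Delta x_k) s + (Delta^2 x_k) s^2 / 2
    d1 = x[k + 1] - x[k]
    d2 = x[k + 2] - 2 * x[k + 1] + x[k]
    return (x[k + 1] + x[k]) / 2 + d1 * s + d2 * s**2 / 2

def piece_deriv(k, s):
    # Its derivative, the piecewise linear interpolant (7.19) of the Delta x_k
    d1 = x[k + 1] - x[k]
    d2 = x[k + 2] - 2 * x[k + 1] + x[k]
    return d1 + d2 * s

# Values and first derivatives agree at every interior knot: the spline is C^1.
for k in range(len(x) - 3):
    assert abs(piece(k, 1.0) - piece(k + 1, 0.0)) < 1e-12
    assert abs(piece_deriv(k, 1.0) - piece_deriv(k + 1, 0.0)) < 1e-12
```

The same integrate-and-match step, applied once more to the differenced sequence, produces the $C^2$ cubic pieces of (7.21).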

We expect that the above scheme yields $C^{\beta-1}$ approximations $\check X_{\beta,J}$ for any $\beta \ge 2$. However, as explained in Remark 7.2 below, we cannot expect these approximations to converge at the faster rate $\beta+\alpha$. This is perhaps not surprising, because the discontinuous approximations in (7.15) are already nontrivial.

Remark 7.2 One cannot expect the approximation $\check X_{2,J}$ in (7.20) to converge to $X$ at the faster rate $2+\alpha$. Indeed, if this rate were achieved, we would have (see (7.18))

$$O(2^{-J(2+\alpha)}) = \check X_{2,J}(t) - \bar X_{2,J}(t) = 2^{J/2} \Big( \frac{X_{J,[2^Jt]+1} - X_{J,[2^Jt]}}{2} + \frac{X_{J,[2^Jt]+2} - 2X_{J,[2^Jt]+1} + X_{J,[2^Jt]}}{2 \cdot 2^{-J}}\, (t-[2^Jt]2^{-J}) \Big)$$

or, by using (7.11),

$$O(2^{-J(2+\alpha)}) = \frac{X(([2^Jt]+1)2^{-J}) - X([2^Jt]2^{-J})}{2} + \frac{X(([2^Jt]+2)2^{-J}) - 2X(([2^Jt]+1)2^{-J}) + X([2^Jt]2^{-J})}{2 \cdot 2^{-J}}\, (t-[2^Jt]2^{-J})$$

or, by Taylor expansions,

$$O(2^{-J(2+\alpha)}) = \tfrac12\, X'([2^Jt]2^{-J})\, 2^{-J} + \tfrac14\, X''(t_1)\, 2^{-2J} + \tfrac12\, X''(t_2)\, 2^{-J}\, (t-[2^Jt]2^{-J}),$$

with $t_1 = t_1(J)$ and $t_2 = t_2(J)$ close to $t$, or, by expanding $X'([2^Jt]2^{-J})$ further,

$$O(2^{-J(2+\alpha)}) = \tfrac12\, X'(t)\, 2^{-J} + \tfrac14\, X''(t_1)\, 2^{-2J} + \tfrac12\, X''(t_2)\, 2^{-J}\, (t-[2^Jt]2^{-J}).$$

This relation may not be satisfied under our assumptions on $X$.

8 Simulation: the case of the OU process

We will illustrate here how the results of Sections 6 and 7 can be used to simulate a stationary process $X$. We consider only the case of the OU process in Example 4.1. Recall from that example that the discrete approximations taken for the OU process are

$$\widehat g_J(x) = 2^{-J/2}\, \sigma\, \sqrt{\frac{1-e^{-2\lambda 2^{-J}}}{2\lambda}}\; \big( 1 - e^{-\lambda 2^{-J}} e^{-ix} \big)^{-1} \tag{8.1}$$

and the corresponding discrete random approximations $X_J$ are suitable AR(1) time series in (4.2). With the choice (8.1) of approximations, observe that the filters $u_j$ and $v_j$ used in the reconstruction (6.1) become

$$\widehat u_j(x) = 2^{-1/2}\, \frac{1 + e^{-\lambda 2^{-(j+1)}} e^{-ix}}{\big( 1 + e^{-2\lambda 2^{-(j+1)}} \big)^{1/2}}\; \widehat u(x), \tag{8.2}$$

$$\widehat v_j(x) = 2^{-(j+1)/2}\, \sigma\, \sqrt{\frac{1-e^{-2\lambda 2^{-(j+1)}}}{2\lambda}}\; \big( 1 - e^{-\lambda 2^{-(j+1)}} e^{-ix} \big)^{-1}\, \widehat v(x). \tag{8.3}$$

Suppose one wants to simulate the OU process on the interval $[0,1]$. The idea is to begin by generating a discrete approximation $X_0$ at scale $2^0$. This step is easy as $X_0$ is an AR(1) time

series. Then, substituting $X_0$ into (6.1), one may get the approximation $X_1$, and, continuing recursively from $X_1$ now, the approximation $X_J$ for arbitrary fixed $J \ge 1$. Note that applying (6.1) recursively each time essentially involves just simulating independent $N(0,1)$ random variables and computing the filters $u_j$ and $v_j$. Proposition 7.1 ensures that the properly normalized $X_J$ approximate the OU process uniformly over $[0,1]$ and exponentially fast in $J$. We illustrate this in Figure 1 for the OU process with $\lambda = 1$, $\sigma = 1$.

[Figure 1: left panel, "Approximations to OU process" (the approximations $X_j$ plotted against $t$); right panel, "Log sup difference between approximations" (the logarithm of $\sup |X_{j-1} - X_j|$ plotted against $j$). Caption: Approximations $X_J$ and the logarithms of their sup differences.]

The plot on the left depicts the consecutive approximations $X_j$, from $X_0$ at scale $2^0$ to $X_J$ at the finest scale $2^{-J}$ with $J = 11$. In the right plot, we present the sup-differences between consecutive approximations $X_{j-1}$ and $X_j$, $j = 2,\dots,11$, on the log scale. The decay in that plot confirms that the normalized approximations $X_J$ converge to the OU process exponentially fast in $J$.

Several comments should be made on how the approximations $X_j$ in Figure 1 are obtained. Though theoretically unjustified, we use not Meyer but the celebrated Daubechies CMFs with $N = 8$ zero moments. The advantage of these CMFs is that they have finite length equal to $2N$. In particular, the filters $u_j$ in (8.2) are then also finite, of length $2N+2$, for any $j$. The filters $v_j$, however, are not finite and are truncated in practice, disregarding those elements that are smaller than a prescribed level $\delta$. Let us also note that applications of (6.1) involve more elements of $X_j$ than those plotted in Figure 1. This is achieved by taking the initial approximation $X_0$ of suitable length. Some indication of how this is done can be seen from the analogous simulation of fractional Brownian motion in Pipiras (2005). Finally, let us indicate another interesting feature of the above simulation. Focus on the filters $v_j$ defined by (8.3).
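The initialization step of this simulation can be sketched as follows: the coarse-scale approximation is an AR(1) series whose stationary covariance matches the OU covariance exactly on the grid. This is a hedged reconstruction of the scheme just described; the variable names and the unit grid step are ours:

```python
import math

lam, sigma, step = 1.0, 1.0, 1.0           # lambda, sigma, grid step 2^0 = 1
a = math.exp(-lam * step)                   # AR(1) coefficient
s2 = sigma**2 * (1 - math.exp(-2 * lam * step)) / (2 * lam)  # innovation variance

# The stationary AR(1) covariance a^k s2 / (1 - a^2) equals the OU covariance
# r(k * step) = sigma^2 exp(-lam * k * step) / (2 * lam) at every lag k:
for k in range(6):
    ar1_cov = a**k * s2 / (1 - a**2)
    ou_cov = sigma**2 * math.exp(-lam * k * step) / (2 * lam)
    assert abs(ar1_cov - ou_cov) < 1e-12

# A sample path would then be generated as x[n] = a * x[n-1] + sqrt(s2) * z[n],
# with z[n] independent N(0,1), before refining recursively through (6.1).
```

The match of covariances at all lags is what makes this "exact discretization" a natural coarse-scale starting point for the refinement recursion.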
They have infinite length and are truncated in practice. It may seem from the definition (8.3) that the $v_j$ have to be taken of very long length as $j$ increases, because the elements of the filter

$$\big( 1 - e^{-\lambda 2^{-(j+1)}} e^{-ix} \big)^{-1} = \sum_{k=0}^{\infty} e^{-\lambda 2^{-(j+1)} k}\, e^{-ixk}$$

decay extremely slowly for larger $j$. In fact, the opposite turns out to be true. As $j$ increases, the filters $v_j$ can essentially be taken of finite length $2N-2$, and things get even better for larger $j$, in a way! To explain why this happens, recall (e.g. Mallat (1998), Theorem 7.4) that $N$ zero moments

translate into the factorization

$$\widehat v(x) = \big( 1 - e^{-ix} \big)^N\, \widehat v_{0,N}(x), \tag{8.4}$$

where, in the case of the Daubechies CMF $v$, the filter $v_{0,N}$ also has finite length. An explanation follows by observing that

$$\frac{1 - e^{-ix}}{1 - e^{-\lambda 2^{-(j+1)}} e^{-ix}} = \sum_{k=0}^{\infty} a_k^{(j)}\, e^{-ixk}, \quad \text{where } a_0^{(j)} \to 1,\ a_k^{(j)} \to 0,\ k \ge 1,\ \text{as } j \to \infty.$$

More precisely,

$$\frac{1 - e^{-ix}}{1 - e^{-\lambda 2^{-(j+1)}} e^{-ix}} = \frac{\big( 1 - e^{-\lambda 2^{-(j+1)}} e^{-ix} \big) - \big( 1 - e^{-\lambda 2^{-(j+1)}} \big) e^{-ix}}{1 - e^{-\lambda 2^{-(j+1)}} e^{-ix}} = 1 - \big( 1 - e^{-\lambda 2^{-(j+1)}} \big) \sum_{k=1}^{\infty} e^{-\lambda 2^{-(j+1)} (k-1)}\, e^{-ixk},$$

so that the elements $a_k^{(j)}$, $k \ge 1$, are bounded by $1 - e^{-\lambda 2^{-(j+1)}} \le \lambda 2^{-(j+1)} \to 0$, as $j \to \infty$.

A On the integration of stationary Gaussian processes

Let $\{X(t)\}_{t \in \mathbb{R}}$ be a Gaussian stationary process given by (1.1). We define here the integral

$$\int_{\mathbb{R}} X(t) f(t)\, dt, \tag{A.1}$$

for suitable functions $f$ and state its properties as used throughout the paper. No proofs will be given as most of them are standard. Our strategy will be to define (A.1) both pathwise and as an $L^2(\Omega)$ limit, and to show that the two definitions coincide in relevant cases. In the pathwise case, the integral (A.1) will be denoted by $I_\omega(f)$ (i.e. defined $\omega$-wise), and, in the $L^2(\Omega)$ case, it will be denoted by $I_2(f)$. For simplicity, we assume that the sample paths of $X$ are continuous. Path continuity is not a stringent assumption since, by Belayev's alternative (Belayev (1960)), either the sample paths of a Gaussian stationary process are continuous or they are very badly behaved in the sense of possessing discontinuities of the second type.

Assume first that $f(t) = \sum_{i=1}^n f_i\, 1_{[a_i,b_i)}(t)$ is a step function. For such a function, the stochastic integral (A.1) may be defined pathwise as the ordinary Riemann integral

$$I_\omega\Big( \sum_{i=1}^n f_i\, 1_{[a_i,b_i]} \Big) = \sum_{i=1}^n f_i \int_{a_i}^{b_i} X(t)\, dt. \tag{A.2}$$

Lemma A.1 The integral (A.2) has the following properties: for step functions $f$, $f_1$ and $f_2$, and with the notation $I(f) = I_\omega(f)$:

P1 $I(f)$ is a Gaussian random variable with mean zero.

P2 The following moment formulae hold:

$$E\, I(f)^2 = \frac{1}{2\pi} \int_{\mathbb{R}} |\widehat g(x)|^2\, |\widehat f(x)|^2\, dx; \tag{A.3}$$

$$E\big( I(f_1)\, I(f_2) \big) = \frac{1}{2\pi} \int_{\mathbb{R}} |\widehat g(x)|^2\, \widehat f_1(x)\, \overline{\widehat f_2(x)}\, dx; \tag{A.4}$$

$$E\big( I(f)\, X(t) \big) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{itx}\, |\widehat g(x)|^2\, \overline{\widehat f(x)}\, dx. \tag{A.5}$$

P3 For real $c$ and $d$, $I(cf_1 + df_2) = cI(f_1) + dI(f_2)$.

An extension of the integral (A.1) to more general functions $f$ can be achieved by an argument of approximation in $L^2(\Omega)$. Consider the space of deterministic functions

$$L^2_g := \Big\{ f \in L^2(\mathbb{R}): \int_{\mathbb{R}} |\widehat f(x)|^2\, |\widehat g(x)|^2\, dx < \infty \Big\} \tag{A.6}$$

with the inner product

$$\langle f_1, f_2 \rangle_{L^2_g} := \frac{1}{2\pi} \int_{\mathbb{R}} \widehat f_1(x)\, \overline{\widehat f_2(x)}\, |\widehat g(x)|^2\, dx. \tag{A.7}$$

Denote also

$$I^s_X = \big\{ I_\omega(f): f \text{ is a step function} \big\}, \tag{A.8}$$

equipped with the ordinary $L^2(\Omega)$ inner product

$$E\big( I_\omega(f_1)\, I_\omega(f_2) \big). \tag{A.9}$$

The space $I^s_X$ and the restriction of $L^2_g$ to step functions are isometric since, for elementary functions $f_1$ and $f_2$,

$$E\big( I_\omega(f_1)\, I_\omega(f_2) \big) = \frac{1}{2\pi} \int_{\mathbb{R}} \widehat f_1(x)\, \overline{\widehat f_2(x)}\, |\widehat g(x)|^2\, dx = \langle f_1, f_2 \rangle_{L^2_g}. \tag{A.10}$$

Thus, a natural way to define the integral $I_2$ for a given $f \in L^2_g$ is to take a sequence of step functions $l_n$ that approximate $f$ in the $L^2_g$ norm, and set $I_2(f)$ as the corresponding $L^2(\Omega)$ limit of $I_\omega(l_n)$. The following result can be proved as Lemma 5.1 in Pipiras and Taqqu (2000).

Lemma A.2 For every function $f \in L^2_g$, there is a sequence $\{l_n\}$ of step functions such that $\|f - l_n\|_{L^2_g} \to 0$.

Given $f \in L^2_g$, we may use Lemma A.2 to define (A.1) as

$$I_2(f) = \lim_{L^2(\Omega)} I_\omega(l_n), \tag{A.11}$$

where $\{l_n\}$ is a sequence of step functions such that $\|f - l_n\|_{L^2_g} \to 0$. This definition does not depend on the approximating sequence of $f$. The integral $I_2(f)$ has the following properties.

Theorem A.1 The map $I_2: f \mapsto I_2(f)$ defined by (A.11) is an isometry between the spaces $L^2_g$ and $I_X = \{ I_2(f): f \in L^2_g \}$. Moreover, $I_2(f) = I_\omega(f)$ a.s. for step functions $f$, and the integral $I_2(f)$ satisfies the properties P1, P2 and P3 of Lemma A.1 with $I(f) = I_2(f)$ and $f, f_1, f_2 \in L^2_g$.

It is possible to define (A.1) also pathwise for more general integrand functions. As discussed in Section 7, for a Gaussian stationary process $\{X(t)\}_{t \in \mathbb{R}}$, we have, almost surely,

$$|X(t)| \le C \log(2+|t|), \quad t \in \mathbb{R}, \tag{A.12}$$

where $C$ is a random variable. Consider the space $L := \{ f \in L^1(\mathbb{R}): \int_{\mathbb{R}} \log(2+|t|)\, |f(t)|\, dt < \infty \}$. For $f \in L$, in view of (A.12) we may define

$$I_\omega(f) = \int_{\mathbb{R}} X(t) f(t)\, dt \tag{A.13}$$

pathwise as an improper Riemann integral. One can show that $I_2(f) = I_\omega(f)$ a.s. for $f \in L \cap L^2_g$.
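The moment formula (A.3) can be cross-checked in the time domain: for the OU kernel and $f = 1_{[0,1]}$, one has $E\,I(f)^2 = \int_0^1\int_0^1 r(t-s)\,dt\,ds$ with $r(h) = \sigma^2 e^{-\lambda|h|}/(2\lambda)$, which integrates in closed form to $\sigma^2(\lambda - 1 + e^{-\lambda})/\lambda^3$. A small numerical sketch (the parameter values are ours):

```python
import math

lam, sigma = 2.0, 1.5
r = lambda h: sigma**2 * math.exp(-lam * abs(h)) / (2 * lam)  # OU autocovariance

# Midpoint rule for the double integral of r(t - s) over [0,1]^2
n = 400
h = 1.0 / n
numeric = sum(r((i - j) * h) for i in range(n) for j in range(n)) * h * h

# Closed form of E I(1_{[0,1]})^2 for the OU process
closed = sigma**2 * (lam - 1 + math.exp(-lam)) / lam**3

assert abs(numeric - closed) < 1e-4
```

Since the midpoints differ by exact multiples of $h$, the double sum only ever evaluates $r$ at the lags $(i-j)h$, which is also why the quadrature is accurate despite the kink of $r$ on the diagonal.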

References

Belayev, Y. K. (1960), Continuity and Hölder's conditions for sample functions of stationary Gaussian processes, Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol. II: Contributions to Probability Theory.

Benassi, A. & Jaffard, S. (1994), Wavelet decomposition of one- and several-dimensional Gaussian processes, in Recent Advances in Wavelet Analysis, Vol. 3 of Wavelet Anal. Appl., Academic Press, Boston, MA.

Brockwell, P. J. & Davis, R. A. (1991), Time Series: Theory and Methods, Springer Series in Statistics, second edn, Springer-Verlag, New York.

Cramér, H. & Leadbetter, M. R., Stationary and Related Stochastic Processes, Dover Publications Inc., Mineola, NY.

Daubechies, I. (1992), Ten Lectures on Wavelets, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA.

Didier, G. & Pipiras, V. (2006), Adaptive wavelet decompositions of stationary time series, Preprint.

Donoho, D. L. (1995), Nonlinear solution of linear inverse problems by wavelet-vaguelette decomposition, Applied and Computational Harmonic Analysis 2(2).

Kwapień, S. & Woyczyński, W. (1992), Random Series and Stochastic Integrals: Single and Multiple, Birkhäuser, Boston, MA.

Lifshits, M. (1995), Gaussian Random Functions, Kluwer Academic Publishers, Boston, MA.

Mallat, S. (1998), A Wavelet Tour of Signal Processing, Academic Press Inc., San Diego, CA.

Meyer, Y. (1992), Wavelets and Operators, Vol. 37 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge.

Meyer, Y., Sellan, F. & Taqqu, M. S. (1999), Wavelets, generalized white noise and fractional integration: the synthesis of fractional Brownian motion, The Journal of Fourier Analysis and Applications 5(5).

Pipiras, V. (2004), Wavelet-type expansion of the Rosenblatt process, The Journal of Fourier Analysis and Applications 10(6).

Pipiras, V.
(2005), Wavelet-based simulation of fractional Brownian motion revisited, Applied and Computational Harmonic Analysis 19(1).

Pipiras, V. & Taqqu, M. S. (2000), Integration questions related to fractional Brownian motion, Probability Theory and Related Fields 118(2).

Rozanov, Y. A. (1967), Stationary Random Processes, Holden-Day Inc., San Francisco, Calif.

Sellan, F. (1995), Synthèse de mouvements browniens fractionnaires à l'aide de la transformation par ondelettes, Comptes Rendus de l'Académie des Sciences, Série I, Mathématique 321(3).

Vidakovic, B. (1999), Statistical Modeling by Wavelets, John Wiley & Sons Inc., New York.

Walter, G. G. & Shen, X. (2001), Wavelets and Other Orthogonal Systems, Studies in Advanced Mathematics, second edn, Chapman & Hall/CRC, Boca Raton, FL.

Zhang, J. & Walter, G. (1994), A wavelet-based KL-like expansion for wide-sense stationary random processes, IEEE Transactions on Signal Processing 42(7).

Gustavo Didier and Vladas Pipiras
Dept. of Statistics and Operations Research
UNC at Chapel Hill
CB#3260, Smith Bldg.
Chapel Hill, NC 27599, USA
didier@unc.edu, pipiras@unc.edu


Topics in Fourier analysis - Lecture 2. Topics in Fourier analysis - Lecture 2. Akos Magyar 1 Infinite Fourier series. In this section we develop the basic theory of Fourier series of periodic functions of one variable, but only to the extent

More information

Spectral representations and ergodic theorems for stationary stochastic processes

Spectral representations and ergodic theorems for stationary stochastic processes AMS 263 Stochastic Processes (Fall 2005) Instructor: Athanasios Kottas Spectral representations and ergodic theorems for stationary stochastic processes Stationary stochastic processes Theory and methods

More information

INTRODUCTION TO REAL ANALYSIS II MATH 4332 BLECHER NOTES

INTRODUCTION TO REAL ANALYSIS II MATH 4332 BLECHER NOTES INTRODUCTION TO REAL ANALYSIS II MATH 433 BLECHER NOTES. As in earlier classnotes. As in earlier classnotes (Fourier series) 3. Fourier series (continued) (NOTE: UNDERGRADS IN THE CLASS ARE NOT RESPONSIBLE

More information

Convergence of greedy approximation I. General systems

Convergence of greedy approximation I. General systems STUDIA MATHEMATICA 159 (1) (2003) Convergence of greedy approximation I. General systems by S. V. Konyagin (Moscow) and V. N. Temlyakov (Columbia, SC) Abstract. We consider convergence of thresholding

More information

On a class of stochastic differential equations in a financial network model

On a class of stochastic differential equations in a financial network model 1 On a class of stochastic differential equations in a financial network model Tomoyuki Ichiba Department of Statistics & Applied Probability, Center for Financial Mathematics and Actuarial Research, University

More information

How much should we rely on Besov spaces as a framework for the mathematical study of images?

How much should we rely on Besov spaces as a framework for the mathematical study of images? How much should we rely on Besov spaces as a framework for the mathematical study of images? C. Sinan Güntürk Princeton University, Program in Applied and Computational Mathematics Abstract Relations between

More information

Analysis Qualifying Exam

Analysis Qualifying Exam Analysis Qualifying Exam Spring 2017 Problem 1: Let f be differentiable on R. Suppose that there exists M > 0 such that f(k) M for each integer k, and f (x) M for all x R. Show that f is bounded, i.e.,

More information

Adaptive Control with Multiresolution Bases

Adaptive Control with Multiresolution Bases Adaptive Control with Multiresolution Bases Christophe P. Bernard Centre de Mathématiques Appliquées Ecole Polytechnique 91128 Palaiseau cedex, FRANCE bernard@cmapx.polytechnique.fr Jean-Jacques E. Slotine

More information

2. Function spaces and approximation

2. Function spaces and approximation 2.1 2. Function spaces and approximation 2.1. The space of test functions. Notation and prerequisites are collected in Appendix A. Let Ω be an open subset of R n. The space C0 (Ω), consisting of the C

More information

Measure and Integration: Solutions of CW2

Measure and Integration: Solutions of CW2 Measure and Integration: s of CW2 Fall 206 [G. Holzegel] December 9, 206 Problem of Sheet 5 a) Left (f n ) and (g n ) be sequences of integrable functions with f n (x) f (x) and g n (x) g (x) for almost

More information

On an uniqueness theorem for characteristic functions

On an uniqueness theorem for characteristic functions ISSN 392-53 Nonlinear Analysis: Modelling and Control, 207, Vol. 22, No. 3, 42 420 https://doi.org/0.5388/na.207.3.9 On an uniqueness theorem for characteristic functions Saulius Norvidas Institute of

More information

arxiv: v2 [math.pr] 27 Oct 2015

arxiv: v2 [math.pr] 27 Oct 2015 A brief note on the Karhunen-Loève expansion Alen Alexanderian arxiv:1509.07526v2 [math.pr] 27 Oct 2015 October 28, 2015 Abstract We provide a detailed derivation of the Karhunen Loève expansion of a stochastic

More information

ELEMENTS OF PROBABILITY THEORY

ELEMENTS OF PROBABILITY THEORY ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable

More information

Affine and Quasi-Affine Frames on Positive Half Line

Affine and Quasi-Affine Frames on Positive Half Line Journal of Mathematical Extension Vol. 10, No. 3, (2016), 47-61 ISSN: 1735-8299 URL: http://www.ijmex.com Affine and Quasi-Affine Frames on Positive Half Line Abdullah Zakir Husain Delhi College-Delhi

More information

Introduction to Fourier Analysis

Introduction to Fourier Analysis Lecture Introduction to Fourier Analysis Jan 7, 2005 Lecturer: Nati Linial Notes: Atri Rudra & Ashish Sabharwal. ext he main text for the first part of this course would be. W. Körner, Fourier Analysis

More information

MATH 5640: Fourier Series

MATH 5640: Fourier Series MATH 564: Fourier Series Hung Phan, UMass Lowell September, 8 Power Series A power series in the variable x is a series of the form a + a x + a x + = where the coefficients a, a,... are real or complex

More information

LECTURE 5: THE METHOD OF STATIONARY PHASE

LECTURE 5: THE METHOD OF STATIONARY PHASE LECTURE 5: THE METHOD OF STATIONARY PHASE Some notions.. A crash course on Fourier transform For j =,, n, j = x j. D j = i j. For any multi-index α = (α,, α n ) N n. α = α + + α n. α! = α! α n!. x α =

More information

Convergence rates in weighted L 1 spaces of kernel density estimators for linear processes

Convergence rates in weighted L 1 spaces of kernel density estimators for linear processes Alea 4, 117 129 (2008) Convergence rates in weighted L 1 spaces of kernel density estimators for linear processes Anton Schick and Wolfgang Wefelmeyer Anton Schick, Department of Mathematical Sciences,

More information

1.5 Approximate Identities

1.5 Approximate Identities 38 1 The Fourier Transform on L 1 (R) which are dense subspaces of L p (R). On these domains, P : D P L p (R) and M : D M L p (R). Show, however, that P and M are unbounded even when restricted to these

More information

PACKING-DIMENSION PROFILES AND FRACTIONAL BROWNIAN MOTION

PACKING-DIMENSION PROFILES AND FRACTIONAL BROWNIAN MOTION PACKING-DIMENSION PROFILES AND FRACTIONAL BROWNIAN MOTION DAVAR KHOSHNEVISAN AND YIMIN XIAO Abstract. In order to compute the packing dimension of orthogonal projections Falconer and Howroyd 997) introduced

More information

1.4 Complex random variables and processes

1.4 Complex random variables and processes .4. COMPLEX RANDOM VARIABLES AND PROCESSES 33.4 Complex random variables and processes Analysis of a stochastic difference equation as simple as x t D ax t C bx t 2 C u t can be conveniently carried over

More information

Convergence of the long memory Markov switching model to Brownian motion

Convergence of the long memory Markov switching model to Brownian motion Convergence of the long memory Marov switching model to Brownian motion Changryong Bae Sungyunwan University Natércia Fortuna CEF.UP, Universidade do Porto Vladas Pipiras University of North Carolina February

More information

L p -boundedness of the Hilbert transform

L p -boundedness of the Hilbert transform L p -boundedness of the Hilbert transform Kunal Narayan Chaudhury Abstract The Hilbert transform is essentially the only singular operator in one dimension. This undoubtedly makes it one of the the most

More information

Lecture 12: Detailed balance and Eigenfunction methods

Lecture 12: Detailed balance and Eigenfunction methods Miranda Holmes-Cerfon Applied Stochastic Analysis, Spring 2015 Lecture 12: Detailed balance and Eigenfunction methods Readings Recommended: Pavliotis [2014] 4.5-4.7 (eigenfunction methods and reversibility),

More information

ON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM

ON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM ON WEAKLY NONLINEAR BACKWARD PARABOLIC PROBLEM OLEG ZUBELEVICH DEPARTMENT OF MATHEMATICS THE BUDGET AND TREASURY ACADEMY OF THE MINISTRY OF FINANCE OF THE RUSSIAN FEDERATION 7, ZLATOUSTINSKY MALIY PER.,

More information

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1.

STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, = (2n 1)(2n 3) 3 1. STATISTICS 385: STOCHASTIC CALCULUS HOMEWORK ASSIGNMENT 4 DUE NOVEMBER 23, 26 Problem Normal Moments (A) Use the Itô formula and Brownian scaling to check that the even moments of the normal distribution

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

Introduction to the Numerical Solution of IVP for ODE

Introduction to the Numerical Solution of IVP for ODE Introduction to the Numerical Solution of IVP for ODE 45 Introduction to the Numerical Solution of IVP for ODE Consider the IVP: DE x = f(t, x), IC x(a) = x a. For simplicity, we will assume here that

More information

Lecture 12: Detailed balance and Eigenfunction methods

Lecture 12: Detailed balance and Eigenfunction methods Lecture 12: Detailed balance and Eigenfunction methods Readings Recommended: Pavliotis [2014] 4.5-4.7 (eigenfunction methods and reversibility), 4.2-4.4 (explicit examples of eigenfunction methods) Gardiner

More information

d(x n, x) d(x n, x nk ) + d(x nk, x) where we chose any fixed k > N

d(x n, x) d(x n, x nk ) + d(x nk, x) where we chose any fixed k > N Problem 1. Let f : A R R have the property that for every x A, there exists ɛ > 0 such that f(t) > ɛ if t (x ɛ, x + ɛ) A. If the set A is compact, prove there exists c > 0 such that f(x) > c for all x

More information

Matched Filtering for Generalized Stationary Processes

Matched Filtering for Generalized Stationary Processes Matched Filtering for Generalized Stationary Processes to be submitted to IEEE Transactions on Information Theory Vadim Olshevsky and Lev Sakhnovich Department of Mathematics University of Connecticut

More information

Sparse Multidimensional Representation using Shearlets

Sparse Multidimensional Representation using Shearlets Sparse Multidimensional Representation using Shearlets Demetrio Labate a, Wang-Q Lim b, Gitta Kutyniok c and Guido Weiss b, a Department of Mathematics, North Carolina State University, Campus Box 8205,

More information

PART IV Spectral Methods

PART IV Spectral Methods PART IV Spectral Methods Additional References: R. Peyret, Spectral methods for incompressible viscous flow, Springer (2002), B. Mercier, An introduction to the numerical analysis of spectral methods,

More information

Course and Wavelets and Filter Banks

Course and Wavelets and Filter Banks Course 18.327 and 1.130 Wavelets and Filter Bans Numerical solution of PDEs: Galerin approximation; wavelet integrals (projection coefficients, moments and connection coefficients); convergence Numerical

More information

Empirical Processes: General Weak Convergence Theory

Empirical Processes: General Weak Convergence Theory Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated

More information

Gaussian Processes. 1. Basic Notions

Gaussian Processes. 1. Basic Notions Gaussian Processes 1. Basic Notions Let T be a set, and X : {X } T a stochastic process, defined on a suitable probability space (Ω P), that is indexed by T. Definition 1.1. We say that X is a Gaussian

More information

A square Riemann integrable function whose Fourier transform is not square Riemann integrable

A square Riemann integrable function whose Fourier transform is not square Riemann integrable A square iemann integrable function whose Fourier transform is not square iemann integrable Andrew D. Lewis 2009/03/18 Last updated: 2009/09/15 Abstract An explicit example of such a function is provided.

More information

Laplace s Equation. Chapter Mean Value Formulas

Laplace s Equation. Chapter Mean Value Formulas Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic

More information

An Introduction to Wavelets and some Applications

An Introduction to Wavelets and some Applications An Introduction to Wavelets and some Applications Milan, May 2003 Anestis Antoniadis Laboratoire IMAG-LMC University Joseph Fourier Grenoble, France An Introduction to Wavelets and some Applications p.1/54

More information

Honours Analysis III

Honours Analysis III Honours Analysis III Math 354 Prof. Dmitry Jacobson Notes Taken By: R. Gibson Fall 2010 1 Contents 1 Overview 3 1.1 p-adic Distance............................................ 4 2 Introduction 5 2.1 Normed

More information

18.175: Lecture 15 Characteristic functions and central limit theorem

18.175: Lecture 15 Characteristic functions and central limit theorem 18.175: Lecture 15 Characteristic functions and central limit theorem Scott Sheffield MIT Outline Characteristic functions Outline Characteristic functions Characteristic functions Let X be a random variable.

More information

Construction of Multivariate Compactly Supported Orthonormal Wavelets

Construction of Multivariate Compactly Supported Orthonormal Wavelets Construction of Multivariate Compactly Supported Orthonormal Wavelets Ming-Jun Lai Department of Mathematics The University of Georgia Athens, GA 30602 April 30, 2004 Dedicated to Professor Charles A.

More information

Decomposition of Riesz frames and wavelets into a finite union of linearly independent sets

Decomposition of Riesz frames and wavelets into a finite union of linearly independent sets Decomposition of Riesz frames and wavelets into a finite union of linearly independent sets Ole Christensen, Alexander M. Lindner Abstract We characterize Riesz frames and prove that every Riesz frame

More information

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition

Filtrations, Markov Processes and Martingales. Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition Filtrations, Markov Processes and Martingales Lectures on Lévy Processes and Stochastic Calculus, Braunschweig, Lecture 3: The Lévy-Itô Decomposition David pplebaum Probability and Statistics Department,

More information

Wavelets and modular inequalities in variable L p spaces

Wavelets and modular inequalities in variable L p spaces Wavelets and modular inequalities in variable L p spaces Mitsuo Izuki July 14, 2007 Abstract The aim of this paper is to characterize variable L p spaces L p( ) (R n ) using wavelets with proper smoothness

More information

CALCULUS JIA-MING (FRANK) LIOU

CALCULUS JIA-MING (FRANK) LIOU CALCULUS JIA-MING (FRANK) LIOU Abstract. Contents. Power Series.. Polynomials and Formal Power Series.2. Radius of Convergence 2.3. Derivative and Antiderivative of Power Series 4.4. Power Series Expansion

More information

Topics in Harmonic Analysis Lecture 1: The Fourier transform

Topics in Harmonic Analysis Lecture 1: The Fourier transform Topics in Harmonic Analysis Lecture 1: The Fourier transform Po-Lam Yung The Chinese University of Hong Kong Outline Fourier series on T: L 2 theory Convolutions The Dirichlet and Fejer kernels Pointwise

More information

Differentiating an Integral: Leibniz Rule

Differentiating an Integral: Leibniz Rule Division of the Humanities and Social Sciences Differentiating an Integral: Leibniz Rule KC Border Spring 22 Revised December 216 Both Theorems 1 and 2 below have been described to me as Leibniz Rule.

More information

Chapter One. The Calderón-Zygmund Theory I: Ellipticity

Chapter One. The Calderón-Zygmund Theory I: Ellipticity Chapter One The Calderón-Zygmund Theory I: Ellipticity Our story begins with a classical situation: convolution with homogeneous, Calderón- Zygmund ( kernels on R n. Let S n 1 R n denote the unit sphere

More information

TD 1: Hilbert Spaces and Applications

TD 1: Hilbert Spaces and Applications Université Paris-Dauphine Functional Analysis and PDEs Master MMD-MA 2017/2018 Generalities TD 1: Hilbert Spaces and Applications Exercise 1 (Generalized Parallelogram law). Let (H,, ) be a Hilbert space.

More information