Simulation of stationary fractional Ornstein-Uhlenbeck process and application to turbulence

Simulation of stationary fractional Ornstein-Uhlenbeck process and application to turbulence

Georg Schoechtel, TU Darmstadt, IRTG 1529
Japanese-German International Workshop on Mathematical Fluid Dynamics, IRTG 1529
12.06.2012

1 / 22

Contents

- Motivation to use fractional noise
- The Model
- Matching desired statistical properties
- Simulation of stationary fractional Ornstein-Uhlenbeck process

2 / 22

Motivation

Assume that the measurable random velocity field $v : \Omega \times \mathbb{R}_+ \times \mathbb{R}^2 \to \mathbb{R}^2$ on $(\Omega, \mathcal{F}, P)$ is time stationary, space homogeneous and (locally) isotropic. The phenomenological approach by Kolmogorov (1941) in 3D and by Kraichnan, Leith and Batchelor (1967-1969) in 2D yields

3 / 22

Motivation

a phenomenological correspondence between the second-order structure function
\[
E\big(|v(x + r, t) - v(x, t)|^2\big) = C |r|^{\alpha - 1} \tag{1}
\]
and the energy spectrum $E(\cdot)$ of the velocity field $v$,
\[
E(k) = C' k^{-\alpha}, \tag{2}
\]
where $C, C' > 0$ are some constants, $1 < \alpha < 3$, $r \in \mathbb{R}^2$ and $k > 0$ in the inertial subrange.

Special case $\alpha = 5/3$:
- (1): Kolmogorov's two-thirds law
- (2): Kolmogorov's five-thirds law (or Kolmogorov energy spectrum)

4 / 22

Motivation

Connection to fractional Brownian motion (fBm): Taylor's frozen turbulence hypothesis (TH) (1938).

The spatial pattern of turbulent motion is unchanged as it is advected by a constant (in space and time) mean velocity $\bar V$, $|\bar V| := \big(\sum_i \bar V_i^2\big)^{1/2}$, let us say along the $x$-axis. Mathematically, TH says that for any scalar-valued fluid mechanics variable $\xi$ (e.g. $v_i$) we have
\[
\frac{\partial \xi}{\partial t} = -|\bar V| \frac{\partial \xi}{\partial x}. \tag{3}
\]
TH enables us to express the statistical characteristics of space differences $v(x + r, t) - v(x, t)$ in terms of time differences $v(x, t) - v(x, t + s)$ at a fixed time $t$.

5 / 22

Motivation

Indeed, (3) implies $v_i(x, t + s) = v_i(x - \bar V s, t)$ and therefore by (1) we deduce
\[
E\big(|v(x, t) - v(x, t + s)|^2\big) = C |\bar V|^{\alpha - 1} |s|^{\alpha - 1} \tag{4}
\]
in the inertial subrange along the time axis. Note that, due to our derivation, the properties (2) and (4) are closely related to each other!

Now comparing (4) with the statistical property
\[
E\big(|\beta^H_{t+s} - \beta^H_t|^2\big) = |s|^{2H}
\]
of fBm $(\beta^H_t)_{t \in \mathbb{R}}$ with Hurst parameter $H \in (0, 1)$ indicates that it is reasonable to model the random velocity field $v$ with noise driven by an fBm.

6 / 22

Motivation

In view of this motivation, Shao (1993/1994), Sreenivasan et al. (1993) and Papanicolaou and Solna (2003) used finite-dimensional fBm in their models for the velocity field of turbulent flows, with special interest in the case $H = 1/3$. Besides, Flandoli (2002).

Short background on fBm $(\beta^H_t)_{t \geq 0}$:
- same definition as standard Bm, but
  \[
  E(\beta^H_t \beta^H_s) = \tfrac{1}{2}\big(|t|^{2H} + |s|^{2H} - |t - s|^{2H}\big), \quad s, t \geq 0,
  \]
  with Hurst parameter $H \in (0, 1)$,
- $H = 1/2$: standard Bm,
- stationary increments, self-similar, P-a.s. Hölder continuous with every exponent $< H$,
- $H \neq 1/2$: not a Markov process, not a semimartingale.

7 / 22
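The covariance above already determines the law of fBm on any finite grid. As a quick illustration (my own NumPy sketch, not part of the talk; all names are made up), one can sample a path directly from that covariance and check the increment variance $|s|^{2H}$:

```python
# Minimal sketch (not from the talk): sample fBm on a grid straight from the
# covariance E(beta^H_t beta^H_s) = 0.5*(|t|^{2H} + |s|^{2H} - |t-s|^{2H}).
import numpy as np

def fbm_covariance(t, H):
    """Covariance matrix of fBm at the strictly positive time points t."""
    tt, ss = np.meshgrid(t, t, indexing="ij")
    return 0.5 * (np.abs(tt)**(2*H) + np.abs(ss)**(2*H) - np.abs(tt - ss)**(2*H))

H, n, T = 1/3, 512, 1.0
t = np.linspace(0.0, T, n + 1)[1:]                   # skip t = 0 (zero variance)
C = fbm_covariance(t, H)
L = np.linalg.cholesky(C + 1e-12*np.eye(n))          # tiny jitter for numerical safety
path = L @ np.random.default_rng(0).standard_normal(n)

# sanity check: E|beta^H_{t+s} - beta^H_t|^2 = |s|^{2H} for the grid spacing s
s = t[1] - t[0]
print(np.mean(np.diff(path)**2), s**(2*H))
```

This brute-force route already hints at the simulation question addressed later in the talk.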

The Model

We are concerned with the following system of equations in non-dimensional form:
\[
v(x, t) = \nabla^\perp \psi(x, t) = \Big(\frac{\partial \psi}{\partial x_2}(x, t),\, -\frac{\partial \psi}{\partial x_1}(x, t)\Big), \quad x \in [0, 1]^2,\ t \geq 0, \tag{M1}
\]
\[
d\psi_t = -\nu A \psi_t\, dt + \nu^H Q^{1/2}\, dB^H_t, \quad \psi_0 \in V,\ t \geq 0. \tag{M2}
\]

Assumption A:
(i) $\nu > 0$,
(ii) $V := \{f \in L^2_{per}([0, 1]^2) : \int_{[0,1]^2} f(x)\,dx = 0\}$, $\langle \cdot, \cdot \rangle_V = \langle \cdot, \cdot \rangle_{L^2([0,1]^2)}$, is the separable Hilbert space with orthonormal basis (ONB) $(e_k(\cdot))_{k \in K} := (e^{i \langle k, \cdot \rangle})_{k \in K}$, where $k \in K := 2\pi \mathbb{Z}^2 \setminus \{(0, 0)\}$ and $i$ denotes the imaginary unit,
(iii) $A : D(A) \subset V \to V$ is a linear operator such that there is a strictly positive sequence $(\alpha_k)_{k \in K} \subset [c, \infty)$ with $c > 0$, $\alpha_k = \alpha_{-k}$, $A e_k = \alpha_k e_k$ and $\alpha_k \to \infty$ for $|k| \to \infty$,

8 / 22

The Model

(iv) $Q^{1/2} : V \to V$ is a bounded linear operator such that there is a positive sequence $(\lambda_k)_{k \in K} \subset [0, \infty)$ with $\lambda_k = \lambda_{-k}$ and $Q^{1/2} e_k = \lambda_k e_k$.
(v) $(B^H_t)_{t \geq 0}$ is an infinite-dimensional fractional Brownian motion in $V$ with Hurst parameter $H \in (0, 1)$, defined on a probability space $(\Omega, \mathcal{F}, P)$ by the formal series
\[
B^H(t) = \sum_{k \in K} \beta^H_k(t)\, e_k, \quad t \in \mathbb{R},
\]
where $\big((\beta^H_k(t))_{t \in \mathbb{R}},\ k \in K\big)$ is a sequence of complex-valued and normalized fractional Brownian motions, each with the same fixed Hurst parameter $H \in (0, 1)$, i.e.
\[
\beta^H_k = \tfrac{1}{\sqrt{2}} \operatorname{Re}(\beta^H_k) + i\, \tfrac{1}{\sqrt{2}} \operatorname{Im}(\beta^H_k),
\]
where $\operatorname{Re}(\beta^H_k)$ and $\operatorname{Im}(\beta^H_k)$ are independent real-valued and normalized fractional Brownian motions on $\mathbb{R}$, and different $\beta^H_k$ are independent except for $\beta^H_{-k} = \overline{\beta^H_k}$.

9 / 22

Mainly interested in the special case when $A = -\Delta$ is the Laplace operator with periodic boundary conditions. Then $\alpha_k = |k|^2$, $k \in K$, and $D(-\Delta) = W^{2,2}([0, 1]^2) \cap V$.

Theorem. Suppose Assumption A holds and assume that there is $\epsilon > 0$ such that
\[
\sum_{k \in K} \lambda_k\, \alpha_k^{2(\epsilon - H)} < \infty.
\]
Then there exists a unique ergodic mild solution $\psi$ to equation (M2), given by
\[
\psi(t) = \sum_{k \in K} \sqrt{\lambda_k}\, \nu^H \int_{-\infty}^t e^{-(t - u)\nu \alpha_k}\, d\beta^H_k(u)\, e_k, \quad t \in \mathbb{R}.
\]

10 / 22

Theorem. Suppose Assumption A holds. Further, assume that there are $m \in \mathbb{N}_0$ and $\gamma \in (0, 1)$ such that
\[
\sum_{k \in K} \lambda_k\, \alpha_k^{2\gamma - 2H} |k|^{2m} < \infty \quad \text{and} \quad \sum_{k \in K} \lambda_k\, \alpha_k^{-2H} |k|^{2m + 2\gamma} < \infty.
\]
Then there is a unique ergodic mild solution $\psi$ to equation (M2). Further, for all $\delta \in \mathbb{N}_0^2$ with $|\delta| \leq m$ there is a version of $D^\delta \psi$ (again denoted by $D^\delta \psi$) such that $D^\delta \psi \in C^\epsilon\big([0, 1]^2 \times \mathbb{R}\big)$ P-a.s. for any $\epsilon \in (0, \min\{\gamma, H\})$. In particular, $D^\delta \psi$ is real-valued.

11 / 22

Statistical properties

Set $A = -\Delta$ and thereby $\alpha_k = |k|^2$, $k \in K$. The autocovariance function $R : [0, 1]^2 \times [0, 1]^2 \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}^{2 \times 2}$ of $v$ is given by
\[
R(x, y, t, s) = \Big(E\big(v_i(x, t) v_j(y, s)\big)\Big)_{1 \leq i, j \leq 2}
= \sum_{k \in K} \begin{pmatrix} k_2^2 & -k_1 k_2 \\ -k_1 k_2 & k_1^2 \end{pmatrix} \lambda_k\, |k|^{-4H}\, \delta_k(t - s; H, \nu)\, e_k(x - y),
\]
where we set
\[
\delta_k(t - s; H, \nu) := \frac{\Gamma(2H + 1) \sin(\pi H)}{\pi} \int_0^\infty \frac{\cos\big((t - s)\nu |k|^2 z\big)\, z^{1 - 2H}}{1 + z^2}\, dz.
\]
Therefore, $v$ is a stationary, homogeneous and incompressible mean-zero Gaussian random field. To ensure isotropy, we set $\lambda_k := \zeta(|k|)$, $k \in K$, for a suitable positive function $\zeta : [0, \infty) \to [0, \infty)$.

12 / 22
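To get a feel for the temporal factor $\delta_k$, here is a small quadrature sketch (my own, not the talk's code; the parameter values are arbitrary):

```python
# Hedged sketch: evaluate delta_k(t-s; H, nu) from the slide by numerical quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def delta_k(lag, k_sq, H, nu):
    """Gamma(2H+1) sin(pi H)/pi * int_0^inf cos(lag*nu*k_sq*z) z^(1-2H)/(1+z^2) dz."""
    integrand = lambda z: np.cos(lag * nu * k_sq * z) * z**(1 - 2*H) / (1 + z**2)
    val, _ = quad(integrand, 0.0, np.inf, limit=500)
    return gamma(2*H + 1) * np.sin(np.pi * H) / np.pi * val

# example: temporal decorrelation of the mode k = 2*pi*(1, 0), i.e. |k|^2 = (2*pi)^2
H, nu, k_sq = 1/3, 0.05, (2*np.pi)**2
print([round(delta_k(lag, k_sq, H, nu), 4) for lag in (0.0, 0.5, 1.0, 2.0)])
```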

Statistical properties

Recall our main motivation to use fractional noise:
\[
E\big(|v(x, t) - v(x, s)|^2\big) \approx C |t - s|^{2H}
\]
for $t, s \geq 0$, $x \in [0, 1]^2$ and some constant $C > 0$.

Theorem. Suppose Assumption A holds and that there are $m \in \mathbb{N}$ and $\epsilon > 0$ such that $\sum_{k \in K} \lambda_k |k|^{2m + \epsilon} < \infty$. Then $v = \nabla^\perp \psi \in C\big(\mathbb{R}, C^{m-1}([0, 1]^2, \mathbb{R}^2)\big)$ P-a.s. Further, there is a constant $C(H, \nu) > 0$ such that for any $t, s \in \mathbb{R}$ and $x \in [0, 1]^2$ we have
\[
E\big(|v(x, t) - v(x, s)|^2\big) \leq C(H, \nu) |t - s|^{2H},
\]
and for a fixed $T > 0$ there is a constant $C(H, \nu, T) > 0$ such that for any $t, s \in [-T, T]$ and $x \in [0, 1]^2$ we have
\[
C(H, \nu, T) |t - s|^{2H} \leq E\big(|v(x, t) - v(x, s)|^2\big).
\]

13 / 22

Statistical properties

Example: To match the Kolmogorov spectrum, i.e. $E(|k|) \approx C |k|^{-5/3}$ and therefore, due to Taylor's frozen turbulence hypothesis,
\[
E\big(|v(x, t) - v(x, s)|^2\big) \approx C |t - s|^{2/3},
\]
we set
\[
\lambda_k \propto |k|^{-\frac{14}{3} + 4H}, \quad H = 1/3,
\]
and by the last theorem $v \in C\big(\mathbb{R}, C([0, 1]^2, \mathbb{R}^2)\big)$ P-a.s.

14 / 22
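A short back-of-the-envelope check of this exponent (my own bookkeeping, assuming the energy spectrum is obtained by summing $E|\hat v_k|^2$ over the shell $|k| \approx \kappa$, which contains on the order of $\kappa$ modes in 2D):

```latex
% From the autocovariance above, each Fourier mode of v carries
%   E|\hat v_k|^2 \;\propto\; |k|^2 \,\lambda_k\, |k|^{-4H}\,\delta_k(0;H,\nu),
% and \delta_k(0;H,\nu) does not depend on k. Summing over the shell |k|\approx\kappa:
\[
  E(\kappa) \;\propto\; \kappa \cdot \lambda(\kappa)\, \kappa^{2-4H}
  \;\overset{!}{=}\; \kappa^{-5/3}
  \quad\Longrightarrow\quad
  \lambda(\kappa) \;\propto\; \kappa^{-5/3 - 3 + 4H} \;=\; \kappa^{-\frac{14}{3} + 4H}.
\]
```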

Simulation

Suppose that Assumption A holds with $A = -\Delta$, and thereby $\alpha_k = |k|^2$, $k \in K = 2\pi \mathbb{Z}^2 \setminus \{(0, 0)\}$, and
\[
\lambda_k = \begin{cases} \xi(|k|) & \text{if } |k| \leq 2\pi R, \\ 0 & \text{else}, \end{cases} \qquad k \in K,
\]
for a fixed $R \in \mathbb{N}$ and a suitable function $\xi : [0, \infty) \to [0, \infty)$, to obtain an isotropic random velocity field $v$ with a desired energy spectrum. We have $v \in C\big(\mathbb{R}, C([0, 1]^2, \mathbb{R}^2)\big)$ P-a.s. and
\[
v(x, t) = \sum_{k \in K,\ |k| \leq 2\pi R} i\,(k_2, -k_1)\, \hat\psi_k(t)\, e^{i \langle k, x \rangle}, \quad x \in [0, 1]^2,\ t \in \mathbb{R},
\]

15 / 22

Simulation

where
\[
\hat\psi_k(t) := \sqrt{\lambda_k}\, \nu^H \int_{-\infty}^t e^{-(t - u)\nu \alpha_k}\, d\beta^H_k(u)
= \sqrt{\tfrac{\lambda_k}{2}}\, \nu^H \int_{-\infty}^t e^{-(t - u)\nu \alpha_k}\, d\operatorname{Re}(\beta^H_k)(u) + i \sqrt{\tfrac{\lambda_k}{2}}\, \nu^H \int_{-\infty}^t e^{-(t - u)\nu \alpha_k}\, d\operatorname{Im}(\beta^H_k)(u)
=: \hat\psi_{k, \mathrm{Re}}(t) + i\, \hat\psi_{k, \mathrm{Im}}(t).
\]

Question: How to simulate the real-valued stationary fractional Ornstein-Uhlenbeck (fOU) processes $\hat\psi_{k, \mathrm{Re}}$, $\hat\psi_{k, \mathrm{Im}}$?

Then (not in this talk) we can evaluate the series
\[
v_1(x, t) = \sum_{k \in K,\ |k| \leq 2\pi R} i k_2\, \hat\psi_k(t)\, e^{i \langle k, x \rangle}, \qquad
v_2(x, t) = \sum_{k \in K,\ |k| \leq 2\pi R} (-i) k_1\, \hat\psi_k(t)\, e^{i \langle k, x \rangle}
\]
efficiently using the fast Fourier transform (FFT) algorithm.

16 / 22
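Once the coefficients $\hat\psi_k(t)$ are available, the sums above can also be checked by brute force. Below is a minimal direct-summation sketch (mine, not the talk's code; the coefficients are random placeholders standing in for simulated fOU values, and the FFT route mentioned above is the efficient variant of the same sum):

```python
# Minimal sketch: evaluate v(x,t) = sum_{|k|<=2*pi*R} i*(k2,-k1) psi_hat_k exp(i<k,x>)
# by direct summation over the finitely many modes. Placeholder coefficients only.
import numpy as np

R = 4
modes = [(m1, m2) for m1 in range(-R, R + 1) for m2 in range(-R, R + 1)
         if (m1, m2) != (0, 0) and m1*m1 + m2*m2 <= R*R]   # k = 2*pi*(m1, m2), |k| <= 2*pi*R

rng = np.random.default_rng(0)
raw = {m: rng.standard_normal() + 1j*rng.standard_normal() for m in modes}
# enforce psi_hat_{-k} = conj(psi_hat_k), so that the velocity field is real-valued
psi_hat = {m: 0.5*(raw[m] + np.conj(raw[(-m[0], -m[1])])) for m in modes}

def velocity(x):
    """Direct evaluation of the Fourier series for v at a point x in [0,1]^2."""
    v = np.zeros(2, dtype=complex)
    for (m1, m2), ph in psi_hat.items():
        k = 2*np.pi*np.array([m1, m2])
        v += 1j*np.array([k[1], -k[0]]) * ph * np.exp(1j*np.dot(k, x))
    return v.real                                   # imaginary part cancels by symmetry

print(velocity(np.array([0.3, 0.7])))
```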

Simulation Let λ, α > 0. To simulate the real-valued stationary fou process X (t) = t λ e (t u)α d β H (u), t R, we introduce methods based on the covariance function of (X (t)) t R r(s) := E ( X (0)X (s) ) = λ Γ(2H + 1) sin(πh) α 2H cos(sαx) x1 2H π 0 1 + x 2 dx = λ 2α 2H cosh(αs)γ(1 + 2H) λ ( 2 s 2H 1F 2 and the (not normalized) spectral density f : D H R R 1; H + 1 2, H + 1; α2 s 2 x f (x) = λ Γ(2H + 1) sin(πh) x 1 2H α 2H 2π 1 + x 2 where Γ( ) is the Gamma function, cosh( ) the hyperbolic cosine, 1 F 2 the generalized hypergeometric function 1 F 2 and D H = R \ {0} if H > 1/2 and D H = R else. Remark: X (t) t R short rate dependent, i.e. n N r(n) <, for H 1/2 and long range dependent, i.e. n N r(n) =, for H > 1/2. 4 ), 17 / 22

Standard Cholesky method

We fix $\Delta t > 0$, $n \in \mathbb{N}$ and set $r_n := r(n \Delta t)$ and $X_n := X_{n \Delta t}$. Further, we denote by
\[
C_n := (c_{ij})_{i,j = 0, 1, \ldots, n} := \begin{pmatrix} r_0 & r_1 & \cdots & r_n \\ r_1 & r_0 & \cdots & r_{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ r_n & r_{n-1} & \cdots & r_0 \end{pmatrix}
\]
the covariance matrix of $X(n) := (X_0, X_1, \ldots, X_n)$.

$C_n$ admits a Cholesky decomposition $C_n = G_n G_n^\top$, where $G_n := (g_{ij})_{i,j = 0, 1, \ldots, n}$ is square lower triangular. Let $Z_0, Z_1, \ldots, Z_n$ be iid standard normal random variables. Define $Y(n) := (Y_0, Y_1, \ldots, Y_n)$ by
\[
Y_i := \sum_{k = 0}^i g_{ik} Z_k, \quad i = 0, 1, \ldots, n.
\]
We have $Y(n) \stackrel{d}{=} X(n)$!

18 / 22
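A minimal NumPy sketch of this method (mine; the toy covariance vector at the bottom only keeps the example runnable, in practice one would feed $r_j = r(j\Delta t)$ from the fOU covariance, e.g. the quadrature sketch above):

```python
# Standard Cholesky method: build the Toeplitz covariance C_n, factor it once, and
# multiply iid standard normals. Exact, but O(n^3) to factor and O(n^2) in memory.
import numpy as np
from scipy.linalg import toeplitz

def cholesky_sample(r, rng=None):
    """Sample (X_0, ..., X_n) ~ N(0, C_n) with C_n = toeplitz(r), r = (r_0, ..., r_n)."""
    rng = np.random.default_rng() if rng is None else rng
    C = toeplitz(np.asarray(r, dtype=float))
    G = np.linalg.cholesky(C)               # C_n = G_n G_n^T, G_n lower triangular
    Z = rng.standard_normal(len(r))         # iid standard normals Z_0, ..., Z_n
    return G @ Z                            # Y(n) has the same law as X(n)

r_toy = 0.9 ** np.arange(256)               # toy AR(1)-type covariance (placeholder)
print(cholesky_sample(r_toy, np.random.default_rng(1))[:5])
```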

Durbin-Levinson method

Idea: Generate recursively $X_{n+1}$ given $X_0, X_1, \ldots, X_n$. We have
\[
X_{n+1} \mid (X_0, X_1, \ldots, X_n) \sim \mathcal{N}(\mu_{n+1}, \sigma^2_{n+1}),
\]
where
\[
\mu_{n+1} := (C_n^{-1} J_n r_{1:n+1})^\top (X_0, X_1, \ldots, X_n), \quad \mu_0 := 0,
\]
\[
\sigma^2_{n+1} := r_0 - (J_n r_{1:n+1})^\top C_n^{-1} J_n r_{1:n+1}, \quad \sigma^2_0 := r_0,
\]
$J_n = (j_{lk})_{l,k = 0, 1, \ldots, n}$ denotes the exchange matrix with ones on the antidiagonal and $r_{1:n+1} := (r_1, r_2, \ldots, r_{n+1})$.

The Durbin-Levinson (DL) algorithm computes recursively the Cholesky decomposition $C_n^{-1} = L_n^\top D_n^{-1} L_n$, together with $\sigma^2_n$ and $C_n^{-1} J_n r_{1:n+1}$, and with that also $\mu_n$, where $L_n = (l_{ij})_{i,j = 0, 1, \ldots, n}$ is unit lower triangular and $D_n = (d_{ij})_{i,j = 0, 1, \ldots, n}$ is diagonal with $d_{nn} = \sigma^2_n$.

Advantage: The DL algorithm is an exact method with complexity $O(n^2)$.

19 / 22
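A sketch of the recursion in NumPy (mine; a minimal version that updates the prediction coefficients and innovation variance step by step, with the same toy covariance as a runnable placeholder):

```python
# Durbin-Levinson sampling: draw X_m from its conditional law given X_0,...,X_{m-1},
# updating the prediction coefficients and innovation variance recursively (O(n^2)).
import numpy as np

def durbin_levinson_sample(r, rng=None):
    """Exact sample (X_0, ..., X_n) for autocovariances r = (r_0, ..., r_n)."""
    rng = np.random.default_rng() if rng is None else rng
    r = np.asarray(r, dtype=float)
    n = len(r) - 1
    X = np.empty(n + 1)
    phi = np.zeros(n + 1)                    # phi[1:m+1] = prediction coefficients at step m
    sigma2 = r[0]                            # sigma_0^2 = r_0
    X[0] = np.sqrt(sigma2) * rng.standard_normal()
    for m in range(1, n + 1):
        kappa = (r[m] - phi[1:m] @ r[m-1:0:-1]) / sigma2     # new reflection coefficient
        phi[1:m] = phi[1:m] - kappa * phi[m-1:0:-1]
        phi[m] = kappa
        sigma2 *= (1.0 - kappa**2)                           # innovation variance sigma_m^2
        mu = phi[1:m+1] @ X[m-1::-1]                         # conditional mean mu_m
        X[m] = mu + np.sqrt(sigma2) * rng.standard_normal()
    return X

r_toy = 0.9 ** np.arange(256)
print(durbin_levinson_sample(r_toy, np.random.default_rng(1))[:5])
```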

Spectral approximate method

We have, for $p \in \{0, 1, \ldots, n\}$,
\[
X_p \stackrel{d}{=} \int_0^\pi \sqrt{\frac{f_{\Delta t}(u)}{\pi}} \cos(p u)\, dW_1(u) - \int_0^\pi \sqrt{\frac{f_{\Delta t}(u)}{\pi}} \sin(p u)\, dW_2(u) =: Y_p,
\]
where $f_{\Delta t}(x) := f(x / \Delta t) / \Delta t$ and $W_1, W_2$ are two real-valued independent standard Brownian motions. Let $l \in \mathbb{N}$, set $s_k := \pi k / l$ for $k = 0, 1, \ldots, l$, and define the simple function
\[
\xi^{(l)}_p(u) := \sqrt{\frac{f_{\Delta t}(s_1)}{\pi}} \cos(p s_1)\, \mathbf{1}_{\{0\}}(u) + \sum_{k = 0}^{l - 1} \sqrt{\frac{f_{\Delta t}(s_{k+1})}{\pi}} \cos(p s_{k+1})\, \mathbf{1}_{(s_k, s_{k+1}]}(u), \quad u \in [0, \pi].
\]
Let $\theta^{(l)}_p$ be defined as $\xi^{(l)}_p$ but with the cosine terms replaced by sine terms.

20 / 22

Spectral approximate method

Then
\[
Y^{(l)}_p := \int_0^\pi \xi^{(l)}_p(u)\, dW_1(u) - \int_0^\pi \theta^{(l)}_p(u)\, dW_2(u)
\stackrel{d}{=} \sum_{k = 0}^{l - 1} \sqrt{\frac{f_{\Delta t}(s_{k+1})}{l}} \Big\{\cos(p s_{k+1}) Z^{(0)}_k - \sin(p s_{k+1}) Z^{(1)}_k\Big\} =: X^{(l)}_p, \tag{5}
\]
where $Z^{(0)}_k, Z^{(1)}_k$, $k = 0, 1, \ldots, l - 1$, are iid standard normal random variables.

Advantage: The RHS of (5) can be evaluated efficiently using the FFT (with $n := N - 1$ and $l := N/2$, where $N = 2^m$ for some $m \in \mathbb{N}$).

Fact: $Y^{(l)}_p$ converges in $L^r$, $r \geq 1$, to $Y_p$ and therefore $X^{(l)}(n) := (X^{(l)}_0, X^{(l)}_1, \ldots, X^{(l)}_n)$ converges in distribution to $X(n)$ for $l \to \infty$. We can prove: $E\big(|Y^{(l)}_p - Y_p|^2\big)^{1/2} \leq C\, l^{H - 1}$ for some $C > 0$.

Drawback: The rate of convergence is quite slow, and for $H > 1/2$ we approximate a long-range dependent process by a short-range dependent one.

21 / 22
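A direct (non-FFT) evaluation of the sum in (5), as a hedged sketch (mine; it uses the spectral density $f$ as written on the previous slide, and arbitrary parameter values):

```python
# Spectral approximate method: evaluate X^(l)_p as in (5) by a direct sum over the
# frequencies s_k = pi*k/l; the FFT variant computes the same sum in O(N log N).
import numpy as np
from scipy.special import gamma

def f_spec(x, lam, alpha, H):
    """(Not normalized) spectral density f of the stationary fOU process, as above."""
    return (lam * gamma(2*H + 1) * np.sin(np.pi*H) * np.abs(x)**(1 - 2*H)
            / (2*np.pi * alpha**(2*H) * (1 + x**2)))

def spectral_sample(n, l, dt, lam, alpha, H, rng=None):
    """Approximate sample (X^(l)_0, ..., X^(l)_n) of the fOU process on the grid p*dt."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.pi * np.arange(1, l + 1) / l                 # s_1, ..., s_l
    f_dt = f_spec(s / dt, lam, alpha, H) / dt           # f_dt(x) = f(x/dt)/dt
    z0, z1 = rng.standard_normal(l), rng.standard_normal(l)
    p = np.arange(n + 1)[:, None]                       # time indices p = 0, ..., n
    w = np.sqrt(f_dt / l)
    return (w * (np.cos(p*s)*z0 - np.sin(p*s)*z1)).sum(axis=1)

x = spectral_sample(n=255, l=128, dt=0.05, lam=1.0, alpha=2.0, H=1/3,
                    rng=np.random.default_rng(2))
print(np.round(x[:5], 4))
```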

Thank you for your attention! 22 / 22