Stochastic methods for solving partial differential equations in high dimension


1 Stochastic methods for solving partial differential equations in high dimension
Marie Billaud-Friess, joint work with A. Macherey, A. Nouy & C. Prieur
marie.billaud-friess@ec-nantes.fr
Centrale Nantes, Laboratoire de Mathématiques Jean Leray
July 2nd, 2018, International Conference on Monte Carlo & Quasi-Monte Carlo Methods in Scientific Computing (MCQMC 2018)

3 General context
High-dimensional problem: find $u$ solution of
$L(u) = g$ in $D$, $u = f$ on $\partial D$. (1)
Here $u : D \to \mathbb{R}$ is a multivariate function of $x = (x_1, \ldots, x_d)$ on $D \subset \mathbb{R}^d$, $L$ is a linear differential operator, and $f, g$ are the boundary condition and source term respectively.
Main challenges:
1. How to approximate $u$ up to precision $\varepsilon$ with reasonable computational cost?
2. In particular in high dimension ($d \gg 1$), while overcoming the so-called curse of dimensionality.

7 Solving high-dimensional problems: deterministic way
Framework: nonlinear and sparse approximation.
1. Tensor-based methods: general tensor networks with applications in physics [Verstraete, Vidal]; low-rank approximation methods [Bachmayr, Dahmen, Grasedyck, Hackbusch, Khoromskij, Kressner, Matthies, Nouy, Oseledets, Schwab, Schneider, Uschmajew, ...].
2. Sparse (tensor) approximation: contributions for parameter-dependent PDEs [Chkifa, Cohen, DeVore, Nobile, Schwab, ...],
$u(x) \approx u_\Lambda(x) = \sum_{\nu \in \Lambda} \alpha_\nu x^\nu = \sum_{\nu \in \Lambda} \alpha_\nu \prod_{j \ge 1} x_j^{\nu_j}$.
How to compute $u_\Lambda$?
- Polynomial expansions, e.g. [Chkifa 13, Cohen 15].
- Projection-based methods: Galerkin, e.g. with multilevel FE or wavelets [Schwab 11]; least-squares or polynomial interpolation [Chkifa 14].
Sparse polynomial interpolation, $\varphi_\nu(x) = x^\nu$:
- Sample-based method requiring evaluations of $u$ at some points of $D$.
- Adaptive selection of $\Lambda$ leading to a sparse polynomial space [Chkifa 13, Chkifa 14].

9 Solving high-dimensional problems: stochastic way
Single-point estimation of $u$ at $x \in D$: combine a probabilistic representation of $u(x)$ with Monte Carlo estimation [Graham 13, Gobet 13],
$u(x) \approx u_M(x) = \frac{1}{M} \sum_{m=1}^{M} \psi(f, g, X^x(\omega_m))$,
with $X^x(\omega_m)$ a realization of the diffusion process $X^x$ starting from $x$ at $t = 0$.
Toward a global approximation of $u$ over $D$:
- Interpolation combined with a sequential variance reduction technique [Gobet 04, Gobet 09]. Limited to small dimension!
- Deep learning based on artificial neural network approximations for linear and nonlinear parabolic high-dimensional problems [Beck 17, Weinan 17, Beck 18]. Efficient method, but no rigorous analysis!
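As a concrete illustration of the single-point estimator, here is a minimal sketch for the Laplace equation $\Delta u = 0$ on the unit disk with boundary data $f$, where $u(x) = \mathbb{E}(f(X^x_\tau))$ and the Brownian paths are discretized by a plain Euler scheme. The domain, test function, and parameter values are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def mc_pointwise(x0, f, M=2000, dt=1e-3, seed=0):
    """Monte Carlo estimate of u(x0) for the Laplace equation on the unit
    disk, u(x) = E[f(X^x_tau)], with Brownian paths advanced by Euler steps
    of size dt until they first leave the domain (exit time tau)."""
    rng = np.random.default_rng(seed)
    x = np.tile(np.asarray(x0, dtype=float), (M, 1))  # M independent walkers
    alive = np.ones(M, dtype=bool)
    exits = np.empty_like(x)
    while alive.any():
        x[alive] += np.sqrt(dt) * rng.standard_normal((alive.sum(), 2))
        out = alive & (np.einsum("ij,ij->i", x, x) >= 1.0)
        exits[out] = x[out]          # record the first point outside the disk
        alive &= ~out
    return f(exits).mean()           # average of the boundary payoff

# f(x, y) = x^2 - y^2 is harmonic, so the exact solution is u = f itself.
f = lambda z: z[:, 0] ** 2 - z[:, 1] ** 2
est = mc_pointwise([0.3, 0.1], f)
print(est)  # close to the exact value 0.3^2 - 0.1^2 = 0.08
```

Note the two error sources discussed in the talk: the $O(1/\sqrt{M})$ Monte Carlo error and a discretization bias of order $O(\sqrt{\Delta t})$ from overshooting the boundary at the exit step.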

10 Outline
Goal: combine a probabilistic approach for pointwise estimation of the solution with a sparse interpolation method, in order to compute a global approximation of the solution of high-dimensional partial differential equations.
1. A sequential algorithm for variance reduction
2. A sequential algorithm in high dimension
3. A perturbed sparse adaptive algorithm

13 Pointwise estimation of u(x)
Define $L$ in (1) by
$L = \frac{1}{2} \sum_{i,j=1}^{d} (\sigma(x)\sigma(x)^T)_{ij}\, \partial^2_{x_i x_j} + \sum_{i=1}^{d} b_i(x)\, \partial_{x_i} + k(x)$,
the infinitesimal generator of the $d$-dimensional diffusion process $X^x$ given by
$dX^x_t = b(X^x_t)\,dt + \sigma(X^x_t)\,dW_t, \quad X^x_0 = x \in D$,
where $W$ is a $d$-dimensional Brownian motion.
Feynman-Kac formula. Assuming that i) $b, \sigma$ are Lipschitz, ii) $f, g$ are continuous functions, iii) there exists $u : D \to \mathbb{R}$, $C^2$ on all open subsets of $D$, solution of (1), then for $x \in D$ we have
$u(x) = \mathbb{E}(\psi(f, g, X^x)) := \mathbb{E}\Big( f(X^x_{\tau^x})\, e^{\int_0^{\tau^x} k(X^x_r)\,dr} + \int_0^{\tau^x} g(X^x_s)\, e^{\int_0^{s} k(X^x_r)\,dr}\, ds \Big)$,
with $\tau^x = \inf\{s > 0 : X^x_s \notin D\}$ the first exit time from $D$.

14 Pointwise estimation of u(x)
Equivalently, writing the Feynman-Kac representation in terms of $u$ and $Lu$,
$u(x) = \mathbb{E}(\varphi(u, X^x)) := \mathbb{E}\Big( u(X^x_{\tau^x})\, e^{\int_0^{\tau^x} k(X^x_r)\,dr} + \int_0^{\tau^x} Lu(X^x_s)\, e^{\int_0^{s} k(X^x_r)\,dr}\, ds \Big)$,
with $\tau^x = \inf\{s > 0 : X^x_s \notin D\}$ the first exit time from $D$.

18 In practice: $u(x) \approx u_{\Delta t, M}(x)$
Estimation of the expectation by Monte Carlo simulation. Writing $0 = t_0 < t_1 < \ldots$ with $t_n = n\Delta t$, let $X^{\Delta t, x} \approx X^x$, where $X^{\Delta t, x}_{t_n} = X^x_n$ is given by the Euler-Maruyama scheme
$X^x_{n+1} = X^x_n + \Delta t\, b(X^x_n) + \sigma(X^x_n)\, \Delta W_n$,
where $\Delta W_n = W_{t_{n+1}} - W_{t_n}$. Given $\{X^{\Delta t,x}(\omega_m)\}_{m=1}^{M}$, $M$ independent realizations of $X^{\Delta t,x}$,
$u_{\Delta t, M}(x) := \frac{1}{M} \sum_{m=1}^{M} \varphi(u, X^{\Delta t,x})(\omega_m)$,
which, when $k \equiv 0$, reads explicitly
$u_{\Delta t, M}(x) = \frac{1}{M} \sum_{m=1}^{M} \Big( u\big(X^{\Delta t,x}_{\tau^{\Delta t}_x}(\omega_m)\big) + \Delta t \sum_{i=0}^{N-1} L(u)\big(X^{\Delta t,x}_{t_i}(\omega_m)\big)\, \mathbb{1}_{t_i \le \tau^{\Delta t}_x} \Big)$.
Error: integration error $O(\sqrt{\Delta t})$ plus Monte Carlo error $O(1/\sqrt{M})$. Convergence w.r.t. $M$ is slow, with large error when the variance $\mathbb{V}(u_{\Delta t, M}(x))$ is large.
Improving the convergence: multilevel MC [Giles 08]; variance reduction: control variate, importance sampling, antithetic sampling [Gobet 13, Graham 13], ...
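The Euler-Maruyama scheme with a discrete exit time can be sketched as follows. The drift $b$ and diffusion $\sigma$ below (an Ornstein-Uhlenbeck-type example on the unit ball) are placeholder coefficients for illustration, not coefficients from the talk.

```python
import numpy as np

def euler_maruyama_exit(x0, b, sigma, dt, n_max, rng):
    """Simulate one path of X_{n+1} = X_n + dt*b(X_n) + sigma(X_n) @ dW_n,
    with dW_n ~ N(0, dt*I), stopped at the first index leaving the unit
    ball (the discrete exit time tau^{dt})."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_max):
        dw = np.sqrt(dt) * rng.standard_normal(x.size)
        x = x + dt * b(x) + sigma(x) @ dw
        path.append(x.copy())
        if x @ x >= 1.0:             # exited D: stop the scheme
            break
    return np.array(path)

rng = np.random.default_rng(1)
b = lambda x: -x                     # illustrative drift (pulls back to 0)
sigma = lambda x: np.eye(x.size)     # illustrative constant diffusion
path = euler_maruyama_exit([0.0, 0.0], b, sigma, dt=1e-2, n_max=10_000, rng=rng)
```

A pointwise estimator then averages $\varphi(u, X^{\Delta t, x})$ over $M$ such independent paths, as in the formula above.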

19 Sequential algorithm for variance reduction: Gobet-Maire algorithm [Gobet 04, Gobet 09]
Notations. Let $v$ be a smooth real-valued function defined on $D$, solution of a PDE of the form (1).
- Stochastic approximation of $v$ at $x \in D$: $v_{\Delta t, M}(x)$.
- Approximation of $v$ at step $k$ of the algorithm: $\tilde v^k$.
- Linear approximation (e.g. interpolant) of $v$ from pointwise evaluations at $\{z_i\}_{i=1}^{N} \subset D$, for basis functions $\{l_i\}_{i=1}^{N}$: $I(v) = \sum_{i=1}^{N} v(z_i)\, l_i(x)$.

21 Algorithm 1 (Gobet & Maire algorithm)
1. Set $\tilde u^0 = 0$.
2. For $k = 0, 1, \ldots, K-1$:
   - Compute $e^k_{\Delta t, M}(z_i) \approx e^k(z_i)$, $i = 1, \ldots, N$, where $e^k = u - \tilde u^k$ satisfies
     $L e^k(x) = g(x) - L\tilde u^k(x)$, $x \in D$; $\quad e^k(x) = f(x) - \tilde u^k(x)$, $x \in \partial D$.
   - Define $\tilde e^k = \sum_{i=1}^{N} e^k_{\Delta t, M}(z_i)\, l_i(x)$.
   - Update $\tilde u^{k+1} = \tilde u^k + \tilde e^k$.
Remarks:
- For $\Delta t$ small enough, $\max_{i=1,\ldots,N} |\mathbb{E}(\tilde u^k(z_i)) - u(z_i)|$ and $\mathbb{V}(\tilde u^k(z_i))$ converge geometrically with $k$, up to a threshold term [Gobet 09].
- The algorithm is designed for small $d$.
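The mechanism behind Algorithm 1 can be seen in the following schematic sketch, in which the Feynman-Kac/Monte Carlo estimator of $e^k(z_i)$ is replaced by a noisy oracle whose noise level is proportional to the size of the current residual (mimicking the actual estimator, whose variance shrinks as $e^k$ shrinks). The 1D problem, the points, and the noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
u = lambda x: np.sin(np.pi * x)       # hypothetical exact solution on [-1, 1]
z = np.linspace(-1.0, 1.0, 9)         # interpolation points z_i
V = np.vander(z, 9)                   # Vandermonde matrix: V @ c = polyval(c, z)

coeffs = np.zeros(9)                  # polynomial coefficients of u~_k
u_tilde = np.polyval(coeffs, z)       # values u~_k(z_i), initially 0
for k in range(20):
    e_exact = u(z) - u_tilde
    # Noisy pointwise estimates e^k_{dt,M}(z_i): the noise scales with the
    # residual, which is what makes the iteration contract geometrically.
    e_k = e_exact + 0.2 * np.abs(e_exact).max() * rng.standard_normal(z.size)
    coeffs += np.linalg.solve(V, e_k)  # add the interpolant of the correction
    u_tilde = np.polyval(coeffs, z)    # u~_{k+1}(z_i) = u~_k(z_i) + e~_k(z_i)

err = np.max(np.abs(u(z) - u_tilde))   # residual at the points after K steps
```

With noise of fixed absolute size the iteration would stagnate at the noise level; the geometric decay comes precisely from estimating the small residual rather than $u$ itself.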

26 Going to high dimension: sparse interpolation (in brief) [Chkifa 14]
Let $u : D \to \mathbb{R}$ where $D = [-1, 1]^d$.
1. Given a finite set $\Lambda \subset \mathbb{N}^d$ of multi-indices $\nu = (\nu_1, \ldots, \nu_d)$ that is downward closed,
   $\nu \in \Lambda,\ \mu \le \nu \implies \mu \in \Lambda$,
   construct by tensorization the multivariate polynomial $P_\nu$ associated to $\nu$,
   $P_\nu(x) = \prod_{i=1}^{d} p_{\nu_i}(x_i)$, where the $x_i \mapsto p_{\nu_i}(x_i)$ form a univariate polynomial basis (e.g. Legendre).
2. The interpolant $I_\Lambda(u)$ of $u$ in $\mathbb{P}_\Lambda$ is given by
   $I_\Lambda(u) = \sum_{\nu \in \Lambda} u(z_\nu)\, l_\nu$.
   It is associated with interpolation points (e.g. Leja, magic points [Maday 09]) $\{z_\nu\}_{\nu \in \Lambda} \subset [-1,1]^d$ unisolvent for $\mathbb{P}_\Lambda = \mathrm{span}\{P_\nu,\ \nu \in \Lambda\}$, and $\{l_\nu\}_{\nu \in \Lambda}$ is a basis of $\mathbb{P}_\Lambda$ such that $l_\nu(z_\mu) = \delta_{\nu\mu}$.
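A minimal sketch of the two ingredients above: the downward-closedness check and the tensorized polynomial $P_\nu$ with Legendre univariate factors. The function names are mine, not from the talk.

```python
from itertools import product

import numpy as np
from numpy.polynomial import legendre

def is_downward_closed(Lambda):
    """Check that for every nu in Lambda, all mu <= nu (componentwise)
    also belong to Lambda."""
    S = set(map(tuple, Lambda))
    return all(mu in S
               for nu in S
               for mu in product(*(range(n + 1) for n in nu)))

def P(nu, x):
    """Tensorized multivariate polynomial P_nu(x) = prod_i p_{nu_i}(x_i),
    with Legendre polynomials as the univariate basis."""
    return np.prod([legendre.Legendre.basis(n)(xi) for n, xi in zip(nu, x)])

print(is_downward_closed([(0, 0), (1, 0), (0, 1), (1, 1)]))  # True
print(is_downward_closed([(0, 0), (2, 0)]))  # False: (1, 0) is missing
print(P((1, 1), (0.5, 0.5)))                 # p_1(x) = x, so 0.5 * 0.5 = 0.25
```

The downward-closed structure is what guarantees that nested interpolation points (Leja-type) remain unisolvent as $\Lambda$ grows.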

27 Going to high dimension: adaptive selection of Λ and construction of I_Λ(u) [Chkifa 13, Chkifa 14]
Algorithm 2 (adaptive sparse interpolation algorithm)
1. Set $\Lambda_1 = \{0_d\}$ and $n = 1$.
2. While $n < N$ and $\varepsilon_{n-1} > \varepsilon$:
   - Define $M_n$. Set $\Lambda = \Lambda_{n-1} \cup M_n$ and compute $I_\Lambda(u)$.
   - Select $N_n = \{\nu \in M_n ;\ E_\nu(I_\Lambda(u)) \ge \theta E_{M_n}(I_\Lambda(u))\}$.
   - Update $\Lambda_n = \Lambda_{n-1} \cup N_n$. Compute $I_{\Lambda_n}(u)$ and $\varepsilon_n$.
   - Update $n = n + 1$.
Remarks:
1. $M_n$ is the reduced margin of $\Lambda_{n-1}$:
   $M_n = \{\nu \notin \Lambda_{n-1} ;\ \nu_j \neq 0 \implies \nu - e_j \in \Lambda_{n-1}\}$.
2. $N_n$ is selected using a bulk-chasing procedure, ensuring that $\Lambda_n$ remains downward closed, where
   $E_S(I_\Lambda(u)) = \sum_{\nu \in S} \beta_\nu^2$
   measures the norm of the interpolant coefficients $\{\beta_\nu\}$ of $I_\Lambda(u)$, ordered by decreasing values, associated to the multi-indices in $S$.
3. Here $\varepsilon_n = \big(\sum_{\nu \in M_n} \alpha_\nu^2\big) \big/ \big(\sum_{\nu \in \Lambda} \alpha_\nu^2\big)$.
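The reduced margin of Remark 1 can be computed as in this sketch (an illustrative helper, not code from the talk); bulk chasing then keeps the subset $N_n \subset M_n$ carrying a $\theta$-fraction of the coefficient energy.

```python
def reduced_margin(Lambda):
    """Reduced margin of a downward-closed set: multi-indices nu not in
    Lambda such that nu - e_j is in Lambda for every j with nu_j > 0."""
    S = set(map(tuple, Lambda))
    d = len(next(iter(S)))
    # Candidates: all forward neighbours nu + e_j of members of Lambda.
    candidates = {tuple(nu[i] + (i == j) for i in range(d))
                  for nu in S for j in range(d)}
    return sorted(nu for nu in candidates - S
                  if all(tuple(nu[i] - (i == j) for i in range(d)) in S
                         for j in range(d) if nu[j] > 0))

print(reduced_margin([(0, 0)]))          # [(0, 1), (1, 0)]
print(reduced_margin([(0, 0), (1, 0)]))  # [(0, 1), (2, 0)]
```

In the second example $(1, 1)$ is excluded because $(0, 1) \notin \Lambda$: enlarging $\Lambda$ only by reduced-margin indices is exactly what keeps it downward closed.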

31 Gobet-Maire algorithm in high dimension
Algorithm 1 (high-dimensional Gobet & Maire algorithm)
1. Set $\tilde u^0 = 0$.
2. For $k = 0, 1, \ldots, K-1$:
   - Set $e^k = u - \tilde u^k$, satisfying
     $L e^k(x) = g(x) - L\tilde u^k(x)$, $x \in D$; $\quad e^k(x) = f(x) - \tilde u^k(x)$, $x \in \partial D$.
   - Interpolate $\tilde e^k$.
   - Update $\tilde u^{k+1} = \tilde u^k + \tilde e^k$.
Remark: here $\tilde e^k = I^\varepsilon_{\Lambda_k}(e^k_{\Delta t, M})$ is computed using Algorithm 2 for a given precision $\varepsilon$, using the realizations $\{e^k_{\Delta t, M}(z_\nu)\}_\nu$.

34 Numerical results: problem setting
Laplacian in $d$ dimensions on $D = [-1, 1]^d$:
$\Delta u(x) = g(x)$, $x \in D$; $\quad u(x) = f(x)$, $x \in \partial D$. (2)
Tested methods. Let $\Lambda = \{\nu \in \mathbb{N}^d,\ |\nu| \le p\}$.
- Method 1: no adaptive selection of multi-indices, $\tilde e^k = I_\Lambda(e^k_{\Delta t, M})$.
- Method 2: adaptive selection of multi-indices with $\Lambda_k \subset \Lambda$ and $\theta = 0.5$: $\tilde e^k = I^\varepsilon_{\Lambda_k}(e^k_{\Delta t, M})$ for a prescribed precision $\varepsilon$.
Test configurations:
- Test 1: $u(x) = \|x\|_2^2$, with $d = 5$, $p = 2$.
- Test 2: $u(x) = x_1 \sin(x_2) + \exp(x_3) + \sin(x_4)(x_5 + 1)$, with $d = 5$, $p = 10$.

35 Numerical results: Method 1, test 1 ($d = 5$, $p = 2$)
Figure. Test 1: evolution of the errors $\|u - \tilde u^k\|$ w.r.t. $\Delta t$ (left) and $M$ (right).
Observations:
1. Convergence in a few iterations.
2. The error decreases with respect to $M$ and $\Delta t$, but eventually stagnates.

36 Numerical results: Method 2, test 1 ($d = 5$, $p = 2$)
Figure. Test 1: evolution of the errors $\|u - \tilde u^k\|$ w.r.t. $\Delta t$ (left) and $M$ (right).
Observations:
1. Convergence with respect to $M$ and $\Delta t$.
2. Slower convergence w.r.t. $k$ than for the non-adapted $\Lambda$.

37 Numerical results: Method 2, test 2 ($d = 5$, $p = 10$, $\Delta t = 0.001$)
Figure. Test 2: errors (left) and evolution of $\#\Lambda_k$ (right).
Observations:
1. The method converges with respect to $k$.
2. Reasonable $\#\Lambda_k \approx 200$ during the iterations.

38 Error analysis for $\Lambda_k = \Lambda$: pointwise error
Notations. For a smooth function $a : D \to \mathbb{R}$, the integration error is
$e(a, \Delta t, x) = \mathbb{E}\big(\varphi(a, X^{\Delta t, x}) - \varphi(a, X^x)\big)$.
Pointwise error. We have
$\mathbb{E}(\tilde u^{k+1}(z_\nu) - u(z_\nu)) = \sum_{\nu' \in \Lambda} \mathbb{E}(u(z_{\nu'}) - \tilde u^k(z_{\nu'}))\, e(l_{\nu'}, \Delta t, z_\nu) + e(u - I_\Lambda(u), \Delta t, z_\nu)$.
Taking absolute values and the supremum over $\nu$, with
$m_k = \sup_\nu |\mathbb{E}(\tilde u^k(z_\nu) - u(z_\nu))|$, $\quad \rho_m = \sup_\nu \sum_{\nu' \in \Lambda} |e(l_{\nu'}, \Delta t, z_\nu)|$, $\quad r(\Delta t, u - I_\Lambda(u)) = \sup_\nu |e(u - I_\Lambda(u), \Delta t, z_\nu)|$,
we obtain
$m_{k+1} \le m_k\, \rho_m + r(\Delta t, u - I_\Lambda(u))$. (3)
Convergence theorem [Gobet 09]. For $\Delta t$ small enough so that $\rho_m < 1$, $\{m_k\}_{k \ge 0}$ converges geometrically at rate $\rho_m$, up to a threshold term that vanishes as $e(u - I_\Lambda(u), \Delta t, z_\nu)$ goes to $0$:
$\limsup_k m_k \le \frac{1}{1 - \rho_m}\, r(\Delta t, u - I_\Lambda(u))$.
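The threshold bound in the theorem follows by unrolling the linear recursion (3); a short derivation:

```latex
m_{k+1} \le \rho_m\, m_k + r
\;\Longrightarrow\;
m_{k} \le \rho_m^{k}\, m_0 + r \sum_{j=0}^{k-1} \rho_m^{j}
       \le \rho_m^{k}\, m_0 + \frac{r}{1-\rho_m},
```

so that for $\rho_m < 1$ the first term vanishes geometrically and $\limsup_k m_k \le r/(1-\rho_m)$, with $r = r(\Delta t, u - I_\Lambda(u))$.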

43 Error analysis for $\Lambda_k = \Lambda$: global error
Starting from
$\mathbb{E}(\tilde u^{k+1} - I_\Lambda(u))(x) = \sum_{\nu \in \Lambda} \mathbb{E}(\tilde u^{k+1}(z_\nu) - u(z_\nu))\, l_\nu(x)$, (4)
we get, by taking the absolute value and then the supremum over $x \in D$,
$\sup_{x \in D} |\mathbb{E}(\tilde u^{k+1} - I_\Lambda(u))(x)| \le m_{k+1}\, L_\Lambda$, (5)
where $L_\Lambda = \sup_{x \in D} \sum_{\nu \in \Lambda} |l_\nu(x)|$ denotes the Lebesgue constant. Then
$\sup_{x \in D} |\mathbb{E}(\tilde u^{k+1} - u)(x)| \le m_{k+1}\, L_\Lambda + \|I_\Lambda(u) - u\|_{\infty, D}$.
Remark: when $\Delta t$ is small enough that $\rho_m < 1$, the approximation error converges geometrically with $k$ up to a threshold term, i.e.
$\limsup_k \Big( \sup_{x \in D} |\mathbb{E}(\tilde u^{k+1} - u)(x)| \Big) \le \frac{L_\Lambda}{1 - \rho_m}\, r(\Delta t, u - I_\Lambda(u)) + \|I_\Lambda(u) - u\|_{\infty, D}$.

45 Summary
Pros and cons:
- Method 1 ($\Lambda$ fixed): error analysis with convergence up to a threshold; convergence in a few iterations; only small $d$.
- Method 2 ($\Lambda_k$ adapted at each step): error analysis requires additional assumptions; slower convergence; adapted to large $d$.
Alternative strategy: perturbed sparse adaptive algorithm.
- FK-based evaluations instead of the true solution, provided via the GM algorithm.
- Possible control of the error of the approximation up to a precision $\varepsilon$ (?).

48 Proposed algorithm
Algorithm 3 (perturbed adaptive sparse interpolation algorithm)
1. Set $\Lambda_1 = \{0_d\}$ and $n = 1$.
2. While $n < N$ and $\varepsilon_{n-1} > \varepsilon$:
   - Define $M_n$. Set $\Lambda = \Lambda_{n-1} \cup M_n$ and compute $\tilde u_\Lambda$.
   - Select $N_n = \{\nu \in M_n ;\ E_\nu(\tilde u_\Lambda) \ge \theta E_{M_n}(\tilde u_\Lambda)\}$.
   - Update $\Lambda_n = \Lambda_{n-1} \cup N_n$. Compute $\tilde u_{\Lambda_n}$ and $\varepsilon_n$.
   - Set $n = n + 1$.
Remarks:
- Here $\tilde u_\Lambda$, $\tilde u_{\Lambda_n}$ can be computed with Algorithm 1, stopped either by a criterion based on a fixed number of iterations $K$, or at a given precision $\delta$.
- For the error analysis, the second choice is preferable, but we then need a practical error estimate $\varepsilon$ (e.g. based on the variance?).

49 First numerical results: exact vs. perturbed algorithm
Test 2: $\Delta t = 0.001$, $M = 2000$, $\delta = \varepsilon$.
Table 1: evolution of $\#\Lambda_n$, $\varepsilon_n$, $\tilde\varepsilon_n$ and $K$ along the iterations.
Table 2: error with respect to the exact solution ($L^2$- and $L^\infty$-norms) for $u - I_{\Lambda_{20}}(u)$ and $u - \tilde u^\delta_{\Lambda_{20}}$.
Remarks:
- Since $\delta = \varepsilon$, the approximation $\tilde u^\delta_{\Lambda_{20}}$ is as accurate as $I_{\Lambda_{20}}(u)$.
- First 15 iterates: low impact of the error due to $\tilde u^\delta_{\Lambda_n}$, since the error is governed by the interpolation error ($> \delta$).
- Last iterates: $\delta$ is reached within a few iterations using the exact-error stopping criterion.

52 Conclusion
Summary: stochastic approaches for computing a global approximation of the solution of high-dimensional partial differential equations.
Ongoing work:
- Clarify the error analysis: for fixed $\Lambda$, especially the stagnation terms (w.r.t. $\Delta t$, $M$) and the variance; for varying $\Lambda_k$ (nested sets).
- Improve Algorithm 3 with a better control of the error of the estimate provided by the approximation $\tilde u_{\Lambda_n}$.
- Study the convergence of the perturbed sparse adaptive interpolation algorithm.
Thanks for your attention!

53 References I
[Bachmayr 16] M. Bachmayr, R. Schneider & A. Uschmajew, Tensor networks and hierarchical tensors for the solution of high-dimensional partial differential equations, Found. Comput. Math.
[Beck 18] C. Beck, S. Becker, P. Grohs, N. Jaafari & A. Jentzen, Solving stochastic differential equations and Kolmogorov equations by means of deep learning, arXiv, 2018.
[Beck 17] C. Beck, W. E & A. Jentzen, Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations, arXiv, 2017.
[Chkifa 13] A. Chkifa, A. Cohen, R. DeVore & C. Schwab, Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs, ESAIM: Mathematical Modelling and Numerical Analysis.
[Chkifa 14] A. Chkifa, A. Cohen & C. Schwab, High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs, Found. Comput. Math.
[Cohen 15] A. Cohen & R. DeVore, Approximation of high-dimensional parametric PDEs, Acta Numerica.
[Nouy 17] A. Nouy, Low-rank methods for high-dimensional approximation and model order reduction, chapter in Model Reduction and Approximation.
[Giles 08] M. Giles, Multi-level Monte Carlo path simulation, Operations Research, 2008.

54 References II
[Grasedyck 13] L. Grasedyck, D. Kressner & C. Tobler, A literature survey of low-rank tensor approximation techniques, GAMM-Mitteilungen.
[Gobet 04] E. Gobet & S. Maire, A spectral Monte Carlo method for the Poisson equation, Monte Carlo Methods Appl.
[Gobet 09] E. Gobet & S. Maire, Sequential control variates for functionals of Markov processes, SIAM Journal on Numerical Analysis.
[Gobet 13] E. Gobet, Méthodes de Monte-Carlo et processus stochastiques : du linéaire au non linéaire, Éditions de l'École polytechnique, 2013.
[Graham 13] C. Graham & D. Talay, Stochastic Simulation and Monte Carlo Methods, Stochastic Modelling and Applied Probability, Springer.
[Hackbusch 14] W. Hackbusch, Numerical tensor calculus, Acta Numerica, 2014.
[Khoromskij 12] B. Khoromskij, Tensor-structured numerical methods in scientific computing: survey on recent advances, Chemometrics and Intelligent Laboratory Systems.
[Kolda 09] T. G. Kolda & B. W. Bader, Tensor decompositions and applications, SIAM Review.
[Kloeden 99] P. E. Kloeden & E. Platen, Numerical Solution of Stochastic Differential Equations, Springer-Verlag, 1999.

55 References III
[Maday 09] Y. Maday, N. C. Nguyen, A. T. Patera & G. S. H. Pau, A general multipurpose interpolation procedure: the magic points, Communications on Pure & Applied Analysis, 2009.
[Oseledets 11] I. Oseledets, Tensor-train decomposition, SIAM J. Sci. Comput.
[Schwab 11] C. Schwab & C. J. Gittelson, Sparse tensor discretizations of high-dimensional parametric and stochastic PDEs, Acta Numerica.
[Weinan 17] W. E, J. Han & A. Jentzen, Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations, arXiv.


More information

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( )

Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio ( ) Mathematical Methods for Neurosciences. ENS - Master MVA Paris 6 - Master Maths-Bio (2014-2015) Etienne Tanré - Olivier Faugeras INRIA - Team Tosca November 26th, 2014 E. Tanré (INRIA - Team Tosca) Mathematical

More information

Tensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition

Tensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition Tensor networks, TT (Matrix Product States) and Hierarchical Tucker decomposition R. Schneider (TUB Matheon) John von Neumann Lecture TU Munich, 2012 Setting - Tensors V ν := R n, H d = H := d ν=1 V ν

More information

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539

Brownian motion. Samy Tindel. Purdue University. Probability Theory 2 - MA 539 Brownian motion Samy Tindel Purdue University Probability Theory 2 - MA 539 Mostly taken from Brownian Motion and Stochastic Calculus by I. Karatzas and S. Shreve Samy T. Brownian motion Probability Theory

More information

Discretization of SDEs: Euler Methods and Beyond

Discretization of SDEs: Euler Methods and Beyond Discretization of SDEs: Euler Methods and Beyond 09-26-2006 / PRisMa 2006 Workshop Outline Introduction 1 Introduction Motivation Stochastic Differential Equations 2 The Time Discretization of SDEs Monte-Carlo

More information

MATHICSE Mathematics Institute of Computational Science and Engineering School of Basic Sciences - Section of Mathematics

MATHICSE Mathematics Institute of Computational Science and Engineering School of Basic Sciences - Section of Mathematics MATHICSE Mathematics Institute of Computational Science and Engineering School of Basic Sciences - Section of Mathematics MATHICSE Technical Report Nr. 05.2013 February 2013 (NEW 10.2013 A weighted empirical

More information

Splitting methods with boundary corrections

Splitting methods with boundary corrections Splitting methods with boundary corrections Alexander Ostermann University of Innsbruck, Austria Joint work with Lukas Einkemmer Verona, April/May 2017 Strang s paper, SIAM J. Numer. Anal., 1968 S (5)

More information

Adaptive low-rank approximation in hierarchical tensor format using least-squares method

Adaptive low-rank approximation in hierarchical tensor format using least-squares method Workshop on Challenges in HD Analysis and Computation, San Servolo 4/5/2016 Adaptive low-rank approximation in hierarchical tensor format using least-squares method Anthony Nouy Ecole Centrale Nantes,

More information

Adaptive timestepping for SDEs with non-globally Lipschitz drift

Adaptive timestepping for SDEs with non-globally Lipschitz drift Adaptive timestepping for SDEs with non-globally Lipschitz drift Mike Giles Wei Fang Mathematical Institute, University of Oxford Workshop on Numerical Schemes for SDEs and SPDEs Université Lille June

More information

Lecture 1: Introduction to low-rank tensor representation/approximation. Center for Uncertainty Quantification. Alexander Litvinenko

Lecture 1: Introduction to low-rank tensor representation/approximation. Center for Uncertainty Quantification. Alexander Litvinenko tifica Lecture 1: Introduction to low-rank tensor representation/approximation Alexander Litvinenko http://sri-uq.kaust.edu.sa/ KAUST Figure : KAUST campus, 5 years old, approx. 7000 people (include 1400

More information

Introduction to Random Diffusions

Introduction to Random Diffusions Introduction to Random Diffusions The main reason to study random diffusions is that this class of processes combines two key features of modern probability theory. On the one hand they are semi-martingales

More information

Introduction to multiscale modeling and simulation. Explicit methods for ODEs : forward Euler. y n+1 = y n + tf(y n ) dy dt = f(y), y(0) = y 0

Introduction to multiscale modeling and simulation. Explicit methods for ODEs : forward Euler. y n+1 = y n + tf(y n ) dy dt = f(y), y(0) = y 0 Introduction to multiscale modeling and simulation Lecture 5 Numerical methods for ODEs, SDEs and PDEs The need for multiscale methods Two generic frameworks for multiscale computation Explicit methods

More information

Greedy Control. Enrique Zuazua 1 2

Greedy Control. Enrique Zuazua 1 2 Greedy Control Enrique Zuazua 1 2 DeustoTech - Bilbao, Basque Country, Spain Universidad Autónoma de Madrid, Spain Visiting Fellow of LJLL-UPMC, Paris enrique.zuazua@deusto.es http://enzuazua.net X ENAMA,

More information

Polynomial chaos expansions for sensitivity analysis

Polynomial chaos expansions for sensitivity analysis c DEPARTMENT OF CIVIL, ENVIRONMENTAL AND GEOMATIC ENGINEERING CHAIR OF RISK, SAFETY & UNCERTAINTY QUANTIFICATION Polynomial chaos expansions for sensitivity analysis B. Sudret Chair of Risk, Safety & Uncertainty

More information

Foundations of the stochastic Galerkin method

Foundations of the stochastic Galerkin method Foundations of the stochastic Galerkin method Claude Jeffrey Gittelson ETH Zurich, Seminar for Applied Mathematics Pro*oc Workshop 2009 in isentis Stochastic diffusion equation R d Lipschitz, for ω Ω,

More information

Random and Deterministic perturbations of dynamical systems. Leonid Koralov

Random and Deterministic perturbations of dynamical systems. Leonid Koralov Random and Deterministic perturbations of dynamical systems Leonid Koralov - M. Freidlin, L. Koralov Metastability for Nonlinear Random Perturbations of Dynamical Systems, Stochastic Processes and Applications

More information

On the fast convergence of random perturbations of the gradient flow.

On the fast convergence of random perturbations of the gradient flow. On the fast convergence of random perturbations of the gradient flow. Wenqing Hu. 1 (Joint work with Chris Junchi Li 2.) 1. Department of Mathematics and Statistics, Missouri S&T. 2. Department of Operations

More information

Computation of operators in wavelet coordinates

Computation of operators in wavelet coordinates Computation of operators in wavelet coordinates Tsogtgerel Gantumur and Rob Stevenson Department of Mathematics Utrecht University Tsogtgerel Gantumur - Computation of operators in wavelet coordinates

More information

Multilevel Monte Carlo for Lévy Driven SDEs

Multilevel Monte Carlo for Lévy Driven SDEs Multilevel Monte Carlo for Lévy Driven SDEs Felix Heidenreich TU Kaiserslautern AG Computational Stochastics August 2011 joint work with Steffen Dereich Philipps-Universität Marburg supported within DFG-SPP

More information

Introduction to asymptotic techniques for stochastic systems with multiple time-scales

Introduction to asymptotic techniques for stochastic systems with multiple time-scales Introduction to asymptotic techniques for stochastic systems with multiple time-scales Eric Vanden-Eijnden Courant Institute Motivating examples Consider the ODE {Ẋ = Y 3 + sin(πt) + cos( 2πt) X() = x

More information

Uniformly Uniformly-ergodic Markov chains and BSDEs

Uniformly Uniformly-ergodic Markov chains and BSDEs Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,

More information

On the Stability of Polynomial Interpolation Using Hierarchical Sampling

On the Stability of Polynomial Interpolation Using Hierarchical Sampling On the Stability of Polynomial Interpolation Using Hierarchical Sampling Albert Cohen, Abdellah Chkifa To cite this version: Albert Cohen, Abdellah Chkifa. On the Stability of Polynomial Interpolation

More information

On continuous time contract theory

On continuous time contract theory Ecole Polytechnique, France Journée de rentrée du CMAP, 3 octobre, 218 Outline 1 2 Semimartingale measures on the canonical space Random horizon 2nd order backward SDEs (Static) Principal-Agent Problem

More information

A Posteriori Adaptive Low-Rank Approximation of Probabilistic Models

A Posteriori Adaptive Low-Rank Approximation of Probabilistic Models A Posteriori Adaptive Low-Rank Approximation of Probabilistic Models Rainer Niekamp and Martin Krosche. Institute for Scientific Computing TU Braunschweig ILAS: 22.08.2011 A Posteriori Adaptive Low-Rank

More information

Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches

Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches Splitting Techniques in the Face of Huge Problem Sizes: Block-Coordinate and Block-Iterative Approaches Patrick L. Combettes joint work with J.-C. Pesquet) Laboratoire Jacques-Louis Lions Faculté de Mathématiques

More information

Quasi-optimal and adaptive sparse grids with control variates for PDEs with random diffusion coefficient

Quasi-optimal and adaptive sparse grids with control variates for PDEs with random diffusion coefficient Quasi-optimal and adaptive sparse grids with control variates for PDEs with random diffusion coefficient F. Nobile, L. Tamellini, R. Tempone, F. Tesei CSQI - MATHICSE, EPFL, Switzerland Dipartimento di

More information

Some lecture notes for Math 6050E: PDEs, Fall 2016

Some lecture notes for Math 6050E: PDEs, Fall 2016 Some lecture notes for Math 65E: PDEs, Fall 216 Tianling Jin December 1, 216 1 Variational methods We discuss an example of the use of variational methods in obtaining existence of solutions. Theorem 1.1.

More information

WEAK VERSIONS OF STOCHASTIC ADAMS-BASHFORTH AND SEMI-IMPLICIT LEAPFROG SCHEMES FOR SDES. 1. Introduction

WEAK VERSIONS OF STOCHASTIC ADAMS-BASHFORTH AND SEMI-IMPLICIT LEAPFROG SCHEMES FOR SDES. 1. Introduction WEAK VERSIONS OF STOCHASTIC ADAMS-BASHFORTH AND SEMI-IMPLICIT LEAPFROG SCHEMES FOR SDES BRIAN D. EWALD 1 Abstract. We consider the weak analogues of certain strong stochastic numerical schemes considered

More information

Nonparametric Drift Estimation for Stochastic Differential Equations

Nonparametric Drift Estimation for Stochastic Differential Equations Nonparametric Drift Estimation for Stochastic Differential Equations Gareth Roberts 1 Department of Statistics University of Warwick Brazilian Bayesian meeting, March 2010 Joint work with O. Papaspiliopoulos,

More information

Imprecise Filtering for Spacecraft Navigation

Imprecise Filtering for Spacecraft Navigation Imprecise Filtering for Spacecraft Navigation Tathagata Basu Cristian Greco Thomas Krak Durham University Strathclyde University Ghent University Filtering for Spacecraft Navigation The General Problem

More information

Deep learning with differential Gaussian process flows

Deep learning with differential Gaussian process flows Deep learning with differential Gaussian process flows Pashupati Hegde Markus Heinonen Harri Lähdesmäki Samuel Kaski Helsinki Institute for Information Technology HIIT Department of Computer Science, Aalto

More information

Efficient Solvers for Stochastic Finite Element Saddle Point Problems

Efficient Solvers for Stochastic Finite Element Saddle Point Problems Efficient Solvers for Stochastic Finite Element Saddle Point Problems Catherine E. Powell c.powell@manchester.ac.uk School of Mathematics University of Manchester, UK Efficient Solvers for Stochastic Finite

More information

Chapter Two: Numerical Methods for Elliptic PDEs. 1 Finite Difference Methods for Elliptic PDEs

Chapter Two: Numerical Methods for Elliptic PDEs. 1 Finite Difference Methods for Elliptic PDEs Chapter Two: Numerical Methods for Elliptic PDEs Finite Difference Methods for Elliptic PDEs.. Finite difference scheme. We consider a simple example u := subject to Dirichlet boundary conditions ( ) u

More information

The Generalized Empirical Interpolation Method: Analysis of the convergence and application to data assimilation coupled with simulation

The Generalized Empirical Interpolation Method: Analysis of the convergence and application to data assimilation coupled with simulation The Generalized Empirical Interpolation Method: Analysis of the convergence and application to data assimilation coupled with simulation Y. Maday (LJLL, I.U.F., Brown Univ.) Olga Mula (CEA and LJLL) G.

More information

Mean Field Games on networks

Mean Field Games on networks Mean Field Games on networks Claudio Marchi Università di Padova joint works with: S. Cacace (Rome) and F. Camilli (Rome) C. Marchi (Univ. of Padova) Mean Field Games on networks Roma, June 14 th, 2017

More information

Math 5588 Final Exam Solutions

Math 5588 Final Exam Solutions Math 5588 Final Exam Solutions Prof. Jeff Calder May 9, 2017 1. Find the function u : [0, 1] R that minimizes I(u) = subject to u(0) = 0 and u(1) = 1. 1 0 e u(x) u (x) + u (x) 2 dx, Solution. Since the

More information

Gaussian processes for inference in stochastic differential equations

Gaussian processes for inference in stochastic differential equations Gaussian processes for inference in stochastic differential equations Manfred Opper, AI group, TU Berlin November 6, 2017 Manfred Opper, AI group, TU Berlin (TU Berlin) inference in SDE November 6, 2017

More information

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications

Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Stochastic Collocation Methods for Polynomial Chaos: Analysis and Applications Dongbin Xiu Department of Mathematics, Purdue University Support: AFOSR FA955-8-1-353 (Computational Math) SF CAREER DMS-64535

More information

Monte-Carlo Methods and Stochastic Processes

Monte-Carlo Methods and Stochastic Processes Monte-Carlo Methods and Stochastic Processes From Linear to Non-Linear EMMANUEL GOBET ECOLE POLYTECHNIQUE - UNIVERSITY PARIS-SACLAY CMAP, PALAISEAU CEDEX, FRANCE CRC Press Taylor & Francis Group 6000 Broken

More information

Mean-square Stability Analysis of an Extended Euler-Maruyama Method for a System of Stochastic Differential Equations

Mean-square Stability Analysis of an Extended Euler-Maruyama Method for a System of Stochastic Differential Equations Mean-square Stability Analysis of an Extended Euler-Maruyama Method for a System of Stochastic Differential Equations Ram Sharan Adhikari Assistant Professor Of Mathematics Rogers State University Mathematical

More information

Low-rank techniques applied to moment equations for the stochastic Darcy problem with lognormal permeability

Low-rank techniques applied to moment equations for the stochastic Darcy problem with lognormal permeability Low-rank techniques applied to moment equations for the stochastic arcy problem with lognormal permeability Francesca Bonizzoni 1,2 and Fabio Nobile 1 1 CSQI-MATHICSE, EPFL, Switzerland 2 MOX, Politecnico

More information

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true

for all subintervals I J. If the same is true for the dyadic subintervals I D J only, we will write ϕ BMO d (J). In fact, the following is true 3 ohn Nirenberg inequality, Part I A function ϕ L () belongs to the space BMO() if sup ϕ(s) ϕ I I I < for all subintervals I If the same is true for the dyadic subintervals I D only, we will write ϕ BMO

More information

Collocation methods for uncertainty quantification in PDE models with random data

Collocation methods for uncertainty quantification in PDE models with random data Collocation methods for uncertainty quantification in PDE models with random data Fabio Nobile CSQI-MATHICSE, EPFL, Switzerland Acknowlegments: L. Tamellini, G. Migliorati (EPFL), R. Tempone, (KAUST),

More information

Comparison of Clenshaw-Curtis and Leja quasi-optimal sparse grids for the approximation of random PDEs

Comparison of Clenshaw-Curtis and Leja quasi-optimal sparse grids for the approximation of random PDEs MATHICSE Mathematics Institute of Computational Science and Engineering School of Basic Sciences - Section of Mathematics MATHICSE Technical Report Nr. 41.2014 September 2014 Comparison of Clenshaw-Curtis

More information

Controlled Diffusions and Hamilton-Jacobi Bellman Equations

Controlled Diffusions and Hamilton-Jacobi Bellman Equations Controlled Diffusions and Hamilton-Jacobi Bellman Equations Emo Todorov Applied Mathematics and Computer Science & Engineering University of Washington Winter 2014 Emo Todorov (UW) AMATH/CSE 579, Winter

More information

Greedy control. Martin Lazar University of Dubrovnik. Opatija, th Najman Conference. Joint work with E: Zuazua, UAM, Madrid

Greedy control. Martin Lazar University of Dubrovnik. Opatija, th Najman Conference. Joint work with E: Zuazua, UAM, Madrid Greedy control Martin Lazar University of Dubrovnik Opatija, 2015 4th Najman Conference Joint work with E: Zuazua, UAM, Madrid Outline Parametric dependent systems Reduced basis methods Greedy control

More information

Numerical Solutions to Partial Differential Equations

Numerical Solutions to Partial Differential Equations Numerical Solutions to Partial Differential Equations Zhiping Li LMAM and School of Mathematical Sciences Peking University Numerical Methods for Partial Differential Equations Finite Difference Methods

More information

Applied Math Qualifying Exam 11 October Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems.

Applied Math Qualifying Exam 11 October Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems. Printed Name: Signature: Applied Math Qualifying Exam 11 October 2014 Instructions: Work 2 out of 3 problems in each of the 3 parts for a total of 6 problems. 2 Part 1 (1) Let Ω be an open subset of R

More information

TD M1 EDP 2018 no 2 Elliptic equations: regularity, maximum principle

TD M1 EDP 2018 no 2 Elliptic equations: regularity, maximum principle TD M EDP 08 no Elliptic equations: regularity, maximum principle Estimates in the sup-norm I Let be an open bounded subset of R d of class C. Let A = (a ij ) be a symmetric matrix of functions of class

More information

Approximation theory in neural networks

Approximation theory in neural networks Approximation theory in neural networks Yanhui Su yanhui su@brown.edu March 30, 2018 Outline 1 Approximation of functions by a sigmoidal function 2 Approximations of continuous functionals by a sigmoidal

More information

Greedy algorithms for high-dimensional non-symmetric problems

Greedy algorithms for high-dimensional non-symmetric problems Greedy algorithms for high-dimensional non-symmetric problems V. Ehrlacher Joint work with E. Cancès et T. Lelièvre Financial support from Michelin is acknowledged. CERMICS, Ecole des Ponts ParisTech &

More information

Exact Simulation of Diffusions and Jump Diffusions

Exact Simulation of Diffusions and Jump Diffusions Exact Simulation of Diffusions and Jump Diffusions A work by: Prof. Gareth O. Roberts Dr. Alexandros Beskos Dr. Omiros Papaspiliopoulos Dr. Bruno Casella 28 th May, 2008 Content 1 Exact Algorithm Construction

More information

Multilevel Monte Carlo for Stochastic McKean-Vlasov Equations

Multilevel Monte Carlo for Stochastic McKean-Vlasov Equations Multilevel Monte Carlo for Stochastic McKean-Vlasov Equations Lukasz Szpruch School of Mathemtics University of Edinburgh joint work with Shuren Tan and Alvin Tse (Edinburgh) Lukasz Szpruch (University

More information

recent developments of approximation theory and greedy algorithms

recent developments of approximation theory and greedy algorithms recent developments of approximation theory and greedy algorithms Peter Binev Department of Mathematics and Interdisciplinary Mathematics Institute University of South Carolina Reduced Order Modeling in

More information

On a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model

On a Data Assimilation Method coupling Kalman Filtering, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model On a Data Assimilation Method coupling, MCRE Concept and PGD Model Reduction for Real-Time Updating of Structural Mechanics Model 2016 SIAM Conference on Uncertainty Quantification Basile Marchand 1, Ludovic

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Lipschitz continuity for solutions of Hamilton-Jacobi equation with Ornstein-Uhlenbeck operator

Lipschitz continuity for solutions of Hamilton-Jacobi equation with Ornstein-Uhlenbeck operator Lipschitz continuity for solutions of Hamilton-Jacobi equation with Ornstein-Uhlenbeck operator Thi Tuyen Nguyen Ph.D student of University of Rennes 1 Joint work with: Prof. E. Chasseigne(University of

More information

Numerical Solution I

Numerical Solution I Numerical Solution I Stationary Flow R. Kornhuber (FU Berlin) Summerschool Modelling of mass and energy transport in porous media with practical applications October 8-12, 2018 Schedule Classical Solutions

More information

Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations

Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations arxiv:1706.04702v1 [math.na] 15 Jun 2017 Weinan E 1, Jiequn

More information

BIHARMONIC WAVE MAPS INTO SPHERES

BIHARMONIC WAVE MAPS INTO SPHERES BIHARMONIC WAVE MAPS INTO SPHERES SEBASTIAN HERR, TOBIAS LAMM, AND ROLAND SCHNAUBELT Abstract. A global weak solution of the biharmonic wave map equation in the energy space for spherical targets is constructed.

More information

Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients

Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients Multigrid and stochastic sparse-grids techniques for PDE control problems with random coefficients Università degli Studi del Sannio Dipartimento e Facoltà di Ingegneria, Benevento, Italia Random fields

More information

MATH 56A SPRING 2008 STOCHASTIC PROCESSES 197

MATH 56A SPRING 2008 STOCHASTIC PROCESSES 197 MATH 56A SPRING 8 STOCHASTIC PROCESSES 197 9.3. Itô s formula. First I stated the theorem. Then I did a simple example to make sure we understand what it says. Then I proved it. The key point is Lévy s

More information

Besov regularity for the solution of the Stokes system in polyhedral cones

Besov regularity for the solution of the Stokes system in polyhedral cones the Stokes system Department of Mathematics and Computer Science Philipps-University Marburg Summer School New Trends and Directions in Harmonic Analysis, Fractional Operator Theory, and Image Analysis

More information

Deep Neural Networks and Partial Differential Equations: Approximation Theory and Structural Properties. Philipp Christian Petersen

Deep Neural Networks and Partial Differential Equations: Approximation Theory and Structural Properties. Philipp Christian Petersen Deep Neural Networks and Partial Differential Equations: Approximation Theory and Structural Properties Philipp Christian Petersen Joint work Joint work with: Helmut Bölcskei (ETH Zürich) Philipp Grohs

More information

Solution of Stochastic Nonlinear PDEs Using Wiener-Hermite Expansion of High Orders

Solution of Stochastic Nonlinear PDEs Using Wiener-Hermite Expansion of High Orders Solution of Stochastic Nonlinear PDEs Using Wiener-Hermite Expansion of High Orders Dr. Mohamed El-Beltagy 1,2 Joint Wor with Late Prof. Magdy El-Tawil 2 1 Effat University, Engineering College, Electrical

More information

A regularized least-squares method for sparse low-rank approximation of multivariate functions

A regularized least-squares method for sparse low-rank approximation of multivariate functions Workshop Numerical methods for high-dimensional problems April 18, 2014 A regularized least-squares method for sparse low-rank approximation of multivariate functions Mathilde Chevreuil joint work with

More information

ξ,i = x nx i x 3 + δ ni + x n x = 0. x Dξ = x i ξ,i = x nx i x i x 3 Du = λ x λ 2 xh + x λ h Dξ,

ξ,i = x nx i x 3 + δ ni + x n x = 0. x Dξ = x i ξ,i = x nx i x i x 3 Du = λ x λ 2 xh + x λ h Dξ, 1 PDE, HW 3 solutions Problem 1. No. If a sequence of harmonic polynomials on [ 1,1] n converges uniformly to a limit f then f is harmonic. Problem 2. By definition U r U for every r >. Suppose w is a

More information

Obstacle problems for nonlocal operators

Obstacle problems for nonlocal operators Obstacle problems for nonlocal operators Camelia Pop School of Mathematics, University of Minnesota Fractional PDEs: Theory, Algorithms and Applications ICERM June 19, 2018 Outline Motivation Optimal regularity

More information

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p

p 1 ( Y p dp) 1/p ( X p dp) 1 1 p Doob s inequality Let X(t) be a right continuous submartingale with respect to F(t), t 1 P(sup s t X(s) λ) 1 λ {sup s t X(s) λ} X + (t)dp 2 For 1 < p

More information

Non-Intrusive Solution of Stochastic and Parametric Equations

Non-Intrusive Solution of Stochastic and Parametric Equations Non-Intrusive Solution of Stochastic and Parametric Equations Hermann G. Matthies a Loïc Giraldi b, Alexander Litvinenko c, Dishi Liu d, and Anthony Nouy b a,, Brunswick, Germany b École Centrale de Nantes,

More information

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME

ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME ON THE POLICY IMPROVEMENT ALGORITHM IN CONTINUOUS TIME SAUL D. JACKA AND ALEKSANDAR MIJATOVIĆ Abstract. We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems

More information

Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form

Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form Multi-dimensional Stochastic Singular Control Via Dynkin Game and Dirichlet Form Yipeng Yang * Under the supervision of Dr. Michael Taksar Department of Mathematics University of Missouri-Columbia Oct

More information

A stochastic particle system for the Burgers equation.

A stochastic particle system for the Burgers equation. A stochastic particle system for the Burgers equation. Alexei Novikov Department of Mathematics Penn State University with Gautam Iyer (Carnegie Mellon) supported by NSF Burgers equation t u t + u x u

More information

Finite element approximation of the stochastic heat equation with additive noise

Finite element approximation of the stochastic heat equation with additive noise p. 1/32 Finite element approximation of the stochastic heat equation with additive noise Stig Larsson p. 2/32 Outline Stochastic heat equation with additive noise du u dt = dw, x D, t > u =, x D, t > u()

More information

Extreme Value Analysis and Spatial Extremes

Extreme Value Analysis and Spatial Extremes Extreme Value Analysis and Department of Statistics Purdue University 11/07/2013 Outline Motivation 1 Motivation 2 Extreme Value Theorem and 3 Bayesian Hierarchical Models Copula Models Max-stable Models

More information

Fast-slow systems with chaotic noise

Fast-slow systems with chaotic noise Fast-slow systems with chaotic noise David Kelly Ian Melbourne Courant Institute New York University New York NY www.dtbkelly.com May 1, 216 Statistical properties of dynamical systems, ESI Vienna. David

More information

Spectral Representation of Random Processes

Spectral Representation of Random Processes Spectral Representation of Random Processes Example: Represent u(t,x,q) by! u K (t, x, Q) = u k (t, x) k(q) where k(q) are orthogonal polynomials. Single Random Variable:! Let k (Q) be orthogonal with

More information

Xt i Xs i N(0, σ 2 (t s)) and they are independent. This implies that the density function of X t X s is a product of normal density functions:

Xt i Xs i N(0, σ 2 (t s)) and they are independent. This implies that the density function of X t X s is a product of normal density functions: 174 BROWNIAN MOTION 8.4. Brownian motion in R d and the heat equation. The heat equation is a partial differential equation. We are going to convert it into a probabilistic equation by reversing time.

More information

On semilinear elliptic equations with measure data

On semilinear elliptic equations with measure data On semilinear elliptic equations with measure data Andrzej Rozkosz (joint work with T. Klimsiak) Nicolaus Copernicus University (Toruń, Poland) Controlled Deterministic and Stochastic Systems Iasi, July

More information

Stochastic differential equation models in biology Susanne Ditlevsen

Stochastic differential equation models in biology Susanne Ditlevsen Stochastic differential equation models in biology Susanne Ditlevsen Introduction This chapter is concerned with continuous time processes, which are often modeled as a system of ordinary differential

More information

c 2002 Society for Industrial and Applied Mathematics

c 2002 Society for Industrial and Applied Mathematics SIAM J. SCI. COMPUT. Vol. 4 No. pp. 507 5 c 00 Society for Industrial and Applied Mathematics WEAK SECOND ORDER CONDITIONS FOR STOCHASTIC RUNGE KUTTA METHODS A. TOCINO AND J. VIGO-AGUIAR Abstract. A general

More information

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011

HJB equations. Seminar in Stochastic Modelling in Economics and Finance January 10, 2011 Department of Probability and Mathematical Statistics Faculty of Mathematics and Physics, Charles University in Prague petrasek@karlin.mff.cuni.cz Seminar in Stochastic Modelling in Economics and Finance

More information

Approximating diffusions by piecewise constant parameters

Approximating diffusions by piecewise constant parameters Approximating diffusions by piecewise constant parameters Lothar Breuer Institute of Mathematics Statistics, University of Kent, Canterbury CT2 7NF, UK Abstract We approximate the resolvent of a one-dimensional

More information

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES)

LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) LECTURE # 0 BASIC NOTATIONS AND CONCEPTS IN THE THEORY OF PARTIAL DIFFERENTIAL EQUATIONS (PDES) RAYTCHO LAZAROV 1 Notations and Basic Functional Spaces Scalar function in R d, d 1 will be denoted by u,

More information