Well-posed Bayesian Inverse Problems: Beyond Gaussian Priors
1 Well-posed Bayesian Inverse Problems: Beyond Gaussian Priors. Bamdad Hosseini, Department of Mathematics, Simon Fraser University, Canada. The Institute for Computational Engineering and Sciences, University of Texas at Austin. September 28, 2016.
2 The Bayesian approach. A model for indirect measurements y ∈ Y of a parameter u ∈ X: y = G(u), where X, Y are Banach spaces and G encompasses the measurement noise. Simple example, the additive noise model y = G(u) + η, with G a deterministic forward map and η an independent random variable. Goal: find u given a realization of y.
3 Application 1: atmospheric source inversion. Advection-diffusion PDE: (∂_t − L)c = u in D × (0, T], c(x, t) = 0 on ∂D × (0, T), c(x, 0) = 0. Estimate the source u from accumulated deposition measurements. [Figure: deposition in mg over the x–y plane, in m.] 1 B. Hosseini and J. M. Stockie. Bayesian estimation of airborne fugitive emissions using a Gaussian plume model. In: Atmospheric Environment 141 (2016).
4 Application 2: high intensity focused ultrasound treatment. Acoustic waves converge to ablate diseased tissue. A phase shift due to the skull bone defocuses the beam; compensate for the phase shift to focus it. Estimate the phase shift from MR-ARFI data 2. [Figure: measured phase shift (deg) and attenuation (%).] 2 B. Hosseini et al. A Bayesian approach for energy-based estimation of acoustic aberrations in high intensity focused ultrasound treatment. arXiv preprint.
5 Running example. Example: Deconvolution. Let X = L²(T) and assume G(u) = S(ϕ ∗ u). Here ϕ ∈ C^∞(T) and S : C(T) → R^m collects point values of a function at m distinct points {t_k}, k = 1, …, m. The noise η is additive and Gaussian. We want to find u given noisy pointwise observations of the blurred image ϕ ∗ u.
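As a concrete sketch, the forward map G(u) = S(ϕ ∗ u) can be discretized as a circular convolution on a grid over T followed by subsampling at the observation points. The kernel, grid size, and observation indices below are illustrative assumptions, not the talk's actual setup.

```python
import numpy as np

def forward_map(u, phi, obs_idx):
    """Discrete sketch of G(u) = S(phi * u): circular convolution on the
    torus (via the convolution theorem) then sampling at obs_idx."""
    blurred = np.real(np.fft.ifft(np.fft.fft(phi) * np.fft.fft(u)))
    return blurred[obs_idx]

n = 256
t = np.linspace(0.0, 1.0, n, endpoint=False)
phi = np.exp(-0.5 * ((t - 0.5) / 0.05) ** 2)   # smooth periodic kernel
phi /= phi.sum()                                # normalize the blur

u = ((t > 0.3) & (t < 0.6)).astype(float)       # piecewise-constant "image"
obs_idx = np.arange(0, n, 16)                   # m = 16 observation points

rng = np.random.default_rng(0)
sigma = 0.01
y = forward_map(u, phi, obs_idx) + sigma * rng.standard_normal(obs_idx.size)
```

Since G here is a composition of a convolution and pointwise evaluation, it is bounded and linear, matching the structure the talk exploits later.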
6 The Bayesian approach. Bayes' rule 3 in the sense of the Radon–Nikodym theorem: dµ^y/dµ₀(u) = (1/Z(y)) exp(−Φ(u; y)). (1) Here µ₀ is the prior measure, Φ is the likelihood potential for the model y = G(u), Z(y) = ∫_X exp(−Φ(u; y)) dµ₀(u) is the normalizing constant, and µ^y is the posterior measure. [Diagram: prior µ₀ → likelihood → posterior µ^y.] 3 A. M. Stuart. Inverse problems: a Bayesian perspective. In: Acta Numerica 19 (2010).
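In finite dimensions, Bayes' rule in this density form is just a reweighting of the prior by exp(−Φ) followed by normalization. A one-dimensional grid sketch makes this concrete; the scalar forward map, Laplace prior, and noise level are illustrative assumptions.

```python
import numpy as np

# posterior(u) = exp(-Phi(u; y)) * prior(u) / Z(y), with
# Phi(u; y) = (y - G(u))^2 / (2 sigma^2) for a scalar forward map G.
G = lambda u: u ** 3          # toy nonlinear forward map (assumption)
sigma, y = 0.5, 2.0

u_grid = np.linspace(-3.0, 3.0, 2001)
du = u_grid[1] - u_grid[0]
prior = np.exp(-np.abs(u_grid))          # unnormalized Laplace prior density
prior /= prior.sum() * du                # normalize on the grid

Phi = (y - G(u_grid)) ** 2 / (2.0 * sigma ** 2)
Z = np.sum(np.exp(-Phi) * prior) * du    # normalizing constant Z(y)
posterior = np.exp(-Phi) * prior / Z
```

The posterior concentrates near u = y^(1/3) ≈ 1.26, slightly shrunk toward zero by the Laplace prior.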
7 Why non-Gaussian priors? dµ^y/dµ₀(u) = (1/Z(y)) exp(−Φ(u; y)). supp µ^y ⊆ supp µ₀ since µ^y ≪ µ₀. The prior has a major influence on the posterior.
8 Application 1: atmospheric source inversion. Let Ω := D × (0, T]. Measurement operators M_i : L²(Ω) → R, M_i(c) = ∫_{J_i × (0,T]} c dx dt, i = 1, …, m. Forward map G : L²(Ω) → R^m, G(u) = (M₁(c(u)), …, M_m(c(u)))^T, with c = (∂_t − L)^{−1} u. Linear in u, and ‖c‖_{L²(Ω)} ≤ C‖u‖_{L²(Ω)}, so G is bounded and linear. [Figure: deposition in mg over the x–y plane.]
9 Application 1: atmospheric source inversion. Assume y = G(u) + η where η ∼ N(0, σ²I), so that Φ(u; y) = (1/2σ²)‖G(u) − y‖₂². Positivity constraint on the source u; sources are likely to be localized. [Figure: deposition in mg over the x–y plane.]
10 Application 2: high intensity focused ultrasound treatment. Underlying aberration field u. Pointwise evaluation map at points {t₁, …, t_d} in T²: S : C(T²) → R^d, (S(u))_j = u(t_j). (Experiments) A collection of vectors {z_j}, j = 1, …, m, in R^d. Quadratic forward map G : C(T²) → R^m, (G(u))_j := |z_j^T S(u)|². Phase retrieval in essence. [Figure: phase shift (deg) and attenuation (%).]
11 Application 2: high intensity focused ultrasound treatment. Assume y = G(u) + η where η ∼ N(0, σ²I), so that Φ(u; y) = (1/2σ²)‖G(u) − y‖₂². Moreover ‖G(u)‖₂ ≤ C‖u‖²_{C(T²)}: a nonlinear forward map. Hydrophone experiments show sharp interfaces; Gaussian priors are too smooth. [Figure: phase shift (deg) and attenuation (%).]
12 We need to go beyond Gaussian priors!
13 Key questions. dµ^y/dµ₀(u) = (1/Z(y)) exp(−Φ(u; y)). Is µ^y well-defined? What happens if y is perturbed? Easier to address when X = R^n; more delicate when X is infinite dimensional.
14 Outline. (i) General theory of well-posed Bayesian inverse problems. (ii) Convex prior measures. (iii) Models for compressible parameters. (iv) Infinitely divisible prior measures.
15 Well-posedness. dµ^y/dµ₀(u) = (1/Z(y)) exp(−Φ(u; y)). Definition: Well-posed Bayesian inverse problem. Suppose X is a Banach space and d(·, ·) is a probability metric. Given a prior µ₀ and likelihood potential Φ, the problem of finding µ^y is well-posed if: (i) (Existence and uniqueness) There exists a unique posterior probability measure µ^y ≪ µ₀ given by Bayes' rule. (ii) (Stability) For every choice of ε > 0 there exists a δ > 0 so that d(µ^y, µ^{y′}) ≤ ε for all y, y′ ∈ Y with ‖y − y′‖_Y ≤ δ.
16 Metrics on probability measures. The total variation and Hellinger metrics: d_TV(µ₁, µ₂) := (1/2) ∫_X |dµ₁/dν − dµ₂/dν| dν, d_H(µ₁, µ₂) := ((1/2) ∫_X (√(dµ₁/dν) − √(dµ₂/dν))² dν)^{1/2}. Note: d_H²(µ₁, µ₂) ≤ d_TV(µ₁, µ₂) ≤ √2 d_H(µ₁, µ₂). Hellinger is more attractive in practice: for h ∈ L²(X, µ₁) ∩ L²(X, µ₂), |∫_X h(u) dµ₁(u) − ∫_X h(u) dµ₂(u)| ≤ C(h) d_H(µ₁, µ₂). Different convergence rates.
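For discrete measures with a common (counting) dominating measure, both metrics reduce to sums, and the comparison d_H² ≤ d_TV ≤ √2 d_H (which holds under the (1/2)-normalized definitions used here) can be checked numerically. The two random 50-point distributions are just an illustration.

```python
import numpy as np

def tv(p, q):
    """Total variation distance, d_TV = (1/2) sum |p - q|."""
    return 0.5 * np.abs(p - q).sum()

def hellinger(p, q):
    """Hellinger distance, d_H = sqrt((1/2) sum (sqrt p - sqrt q)^2)."""
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

# Two discrete distributions standing in for mu_1, mu_2.
rng = np.random.default_rng(1)
p = rng.random(50); p /= p.sum()
q = rng.random(50); q /= q.sum()

d_tv, d_h = tv(p, q), hellinger(p, q)
```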
17 Well-posedness: analogy. The likelihood potential Φ depends on the map G. Given Φ, what classes of priors can be used? PDE analogy: a PDE Lu = g where g ∈ H^s and L : H^p → H^s is a differential operator. Seek a solution u = L^{−1}g ∈ H^p. Well-posedness depends on the smoothing behavior of L^{−1} and the regularity of g. In the Bayesian approach we seek µ^y satisfying Pµ^y = µ₀. The mapping P^{−1} depends on Φ. Well-posedness depends on the behavior of P^{−1} and the tail behavior of µ₀. In a nutshell: if Φ grows at a certain rate, we have well-posedness provided µ₀ has sufficient tail decay.
18 Assumptions on likelihood. Minimal assumptions on Φ (BH, 2016) a b. The potential Φ : X × Y → R satisfies: (L1) (Locally bounded from below): There is a positive and non-decreasing function f₁ : R⁺ → [1, ∞) so that Φ(u; y) ≥ M − log(f₁(‖u‖_X)). (L2) (Locally bounded from above): Φ(u; y) ≤ K. (L3) (Locally Lipschitz in u): |Φ(u₁; y) − Φ(u₂; y)| ≤ L‖u₁ − u₂‖_X. (L4) (Continuity in y): There is a positive and non-decreasing function f₂ : R⁺ → R⁺ so that |Φ(u; y₁) − Φ(u; y₂)| ≤ C f₂(‖u‖_X)‖y₁ − y₂‖_Y. a Stuart, Inverse problems: a Bayesian perspective. b T. J. Sullivan. Well-posed Bayesian inverse problems and heavy-tailed stable Banach space priors. arXiv preprint.
19 Well-posedness: existence and uniqueness. (L1) (Bounded from below) Φ(u; y) ≥ M − log(f₁(‖u‖_X)). (L2) (Locally bounded from above) Φ(u; y) ≤ K. (L3) (Locally Lipschitz) |Φ(u₁; y) − Φ(u₂; y)| ≤ L‖u₁ − u₂‖_X. Existence and uniqueness (BH, 2016): Let Φ satisfy Assumptions L1–L3 with a function f₁ ≥ 1; then the posterior µ^y is well-defined if f₁(‖·‖_X) ∈ L¹(X, µ₀). Example: If y = G(u) + η with η ∼ N(0, Σ), then Φ(u; y) = (1/2)‖G(u) − y‖²_Σ ≥ 0, so M = 0 and f₁ = 1.
20 Well-posedness: stability. (L1) (Lower bound) Φ(u; y) ≥ M − log(f₁(‖u‖_X)). (L2) (Locally bounded from above) Φ(u; y) ≤ K. (L4) (Continuity in y) |Φ(u; y₁) − Φ(u; y₂)| ≤ C f₂(‖u‖_X)‖y₁ − y₂‖_Y. Total variation stability (BH, 2016): Let Φ satisfy Assumptions L1, L2 and L4 with functions f₁, f₂ and let µ^y and µ^{y′} be two posterior measures for y, y′ ∈ Y. If f₂(‖·‖_X) f₁(‖·‖_X) ∈ L¹(X, µ₀), then there is C > 0 such that d_TV(µ^y, µ^{y′}) ≤ C‖y − y′‖_Y. Hellinger stability (BH, 2016): If the stronger condition (f₂(‖·‖_X))² f₁(‖·‖_X) ∈ L¹(X, µ₀) is satisfied, then there is C > 0 so that d_H(µ^y, µ^{y′}) ≤ C‖y − y′‖_Y.
21 The case of additive noise models. Let Y = R^m, η ∼ N(0, Σ) and suppose y = G(u) + η, so that Φ(u; y) = (1/2)‖G(u) − y‖²_Σ. Then Φ(u; y) ≥ 0, thus (L1) is satisfied with f₁ = 1 and M = 0. Well-posedness with additive noise models (BH, 2016): Let the forward map G satisfy: (i) (Bounded) There is a positive and non-decreasing function f so that ‖G(u)‖_Σ ≤ C f(‖u‖_X) for all u ∈ X. (ii) (Locally Lipschitz) ‖G(u₁) − G(u₂)‖_Σ ≤ K‖u₁ − u₂‖_X. Then the problem of finding µ^y is well-posed if f(‖·‖_X) ∈ L¹(X, µ₀).
22 The case of additive noise models. Example: polynomially bounded forward map. Consider the additive noise model with Y = R^m, η ∼ N(0, I), so that Φ(u; y) = (1/2)‖G(u) − y‖₂². If G is locally Lipschitz and ‖G(u)‖₂ ≤ C max{1, ‖u‖_X^p} for some p ∈ N, then we have well-posedness if µ₀ has bounded moments of degree p. In particular, if G is bounded and linear then it suffices for µ₀ to have a bounded moment of degree one. Recall the deconvolution example! Example: Gaussian priors. In the setting of the above example, if µ₀ is a centered Gaussian then it follows from Fernique's theorem that we have well-posedness even if ‖G(u)‖₂ ≤ C exp(α‖u‖_X) for any α > 0.
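The stability estimates can be visualized in one dimension: for a bounded linear map, the Hellinger distance between posteriors for nearby data values stays proportional to |y − y′|. The linear map G(u) = 2u, Laplace prior, noise level, and grid below are illustrative assumptions.

```python
import numpy as np

# Grid-based check that d_H(mu^y, mu^{y'}) <= C |y - y'| for a scalar
# linear forward map G(u) = 2u and a Laplace prior (illustrative sketch).
u = np.linspace(-5.0, 5.0, 4001)
du = u[1] - u[0]
prior = 0.5 * np.exp(-np.abs(u))          # Laplace prior density
sigma = 0.3

def posterior(y):
    dens = np.exp(-(y - 2.0 * u) ** 2 / (2.0 * sigma ** 2)) * prior
    return dens / (dens.sum() * du)       # normalize: divide by Z(y)

def hellinger(p, q):
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * du)

y0 = 1.0
deltas = np.array([0.2, 0.1, 0.05, 0.025])
dists = np.array([hellinger(posterior(y0), posterior(y0 + d)) for d in deltas])
ratios = dists / deltas                   # should stay bounded as delta -> 0
```

The ratios d_H / |y − y′| remain essentially constant as the perturbation shrinks, as the Lipschitz stability estimate predicts.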
23 Outline. (i) General theory of well-posed Bayesian inverse problems. (ii) Convex prior measures (µ₀ has exponential tails). (iii) Models for compressible parameters. (iv) Infinitely divisible prior measures.
24 From convex regularization to convex priors. Let X = R^n and Y = R^m. Common variational formulation for inverse problems: u* = arg min_{v ∈ R^n} { (1/2)‖G(v) − y‖²_Σ + R(v) }, with R(v) = (θ/2)‖Lv‖₂² (Tikhonov) or R(v) = θ‖Lv‖₁ (sparsity). Bayesian analog: dµ^y/dΛ(v) ∝ exp(−(1/2)‖G(v) − y‖²_Σ) [likelihood] × exp(−R(v)) [prior], with Λ the Lebesgue measure. A random variable with a log-concave Lebesgue density is convex.
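The two regularizers behave very differently on the same data. A minimal sketch (problem sizes, θ, and data are illustrative assumptions) computes the Tikhonov solution in closed form and the ℓ¹ solution by proximal gradient descent (ISTA) with soft-thresholding.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 30
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 17, 33]] = [3.0, -2.0, 1.5]            # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(m)
theta = 0.05

# Tikhonov, R(v) = (theta/2)||v||_2^2: closed-form ridge solution.
x_ridge = np.linalg.solve(A.T @ A + theta * np.eye(n), A.T @ y)

# Sparsity, R(v) = theta ||v||_1: ISTA, i.e. gradient step + soft threshold.
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L for convergence
x_l1 = np.zeros(n)
for _ in range(500):
    g = x_l1 - step * A.T @ (A @ x_l1 - y)
    x_l1 = np.sign(g) * np.maximum(np.abs(g) - step * theta, 0.0)
```

The ℓ¹ minimizer has few nonzero entries while the ridge solution is dense, mirroring the Laplace-versus-Gaussian prior distinction drawn next.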
25 Convex priors. Gaussian, Laplace, logistic, etc. ℓ¹ regularization corresponds to Laplace priors: dµ^y/dΛ(v) ∝ exp(−(1/2)‖G(v) − y‖²_Σ) exp(−‖v‖₁) = exp(−(1/2)‖G(v) − y‖²_Σ) ∏_{j=1}^n exp(−|v_j|). Definition: Convex measure 4. A Radon probability measure ν on X is called convex whenever it satisfies ν(βA + (1 − β)B) ≥ ν(A)^β ν(B)^{1−β} for all β ∈ [0, 1] and Borel sets A, B ⊆ X. 4 C. Borell. Convex measures on locally convex spaces. In: Arkiv för Matematik 12.1 (1974).
26 Convex priors. Convex measures have exponential tails 5: Let ν be a convex measure on X. If ‖·‖_X < ∞ ν-a.s. then there exists a constant κ > 0 so that ∫_X exp(κ‖u‖_X) dν(u) < ∞. Well-posedness with convex priors (BH & NN, 2016): Let the prior µ₀ be a convex measure and assume Φ(u; y) = (1/2)‖G(u) − y‖²_Σ where G is locally Lipschitz and ‖G(u)‖_Σ ≤ C max{1, ‖u‖_X^p} for some p ∈ N. Then we have a well-posed Bayesian inverse problem. 5 Borell, Convex measures on locally convex spaces.
27 Constructing convex priors. Product prior (BH & NN, 2016): Suppose X has an unconditional and normalized Schauder basis {x_k}. (a) Pick a fixed sequence {γ_k} ∈ ℓ². (b) Pick a sequence of centered, real valued and convex random variables {ξ_k} with Var ξ_k < ∞ uniformly. (c) Take µ₀ to be the law of u ∼ Σ_{k=1}^∞ γ_k ξ_k x_k. Then ‖·‖_X < ∞ µ₀-a.s. and ‖·‖_X ∈ L²(X, µ₀). If the ξ_k are convex then so is µ₀. Reminiscent of the Karhunen–Loève expansion of Gaussians: u ∼ Σ_{k=1}^∞ γ_k ξ_k x_k with ξ_k ∼ N(0, 1) and {γ_k, x_k} the eigenpairs of the covariance operator.
28 Returning to deconvolution. Example: Deconvolution. Let X = L²(T) and assume Φ(u; y) = (1/2)‖G(u) − y‖₂² where G(u) = S(ϕ ∗ u). Here ϕ ∈ C^∞(T) and S : C(T) → R^m collects point values of a function at m distinct points {t_j}. We will construct a convex prior that is supported on B^s_pp(T).
29 Example: deconvolution with a Besov type prior. Let {x_k} be an r-regular wavelet basis for L²(T). For s < r and p ≥ 1 define the Besov space B^s_pp(T) := { w ∈ L²(T) : Σ_{k=1}^∞ k^{(sp + p/2 − 1)} |⟨w, x_k⟩|^p < ∞ }. The prior µ₀ is the law of u ∼ Σ_{k=1}^∞ γ_k ξ_k x_k, where the ξ_k are Laplace random variables with Lebesgue density (1/2) exp(−|t|) and γ_k = k^{−(1/(2p) + s)}. 6 M. Lassas, E. Saksman, and S. Siltanen. Discretization-invariant Bayesian inversion and Besov space priors. In: Inverse Problems and Imaging 3.1 (2009). 7 T. Bui-Thanh and O. Ghattas. A scalable algorithm for MAP estimators in Bayesian inverse problems with Besov priors. In: Inverse Problems and Imaging 9.1 (2015).
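Draws from such a series prior are easy to generate by truncation. The sketch below uses an orthonormal Fourier-type basis in place of a wavelet basis and truncates at 200 terms, both simplifying assumptions, with p = s = 1.

```python
import numpy as np

rng = np.random.default_rng(3)
n_terms, n_grid = 200, 512
t = np.linspace(0.0, 1.0, n_grid, endpoint=False)

p, s = 1.0, 1.0
ks = np.arange(1, n_terms + 1)
gamma = ks ** (-(1.0 / (2.0 * p) + s))     # gamma_k = k^{-(1/(2p)+s)}, in l^2
xi = rng.laplace(0.0, 1.0, size=n_terms)   # Laplace coefficients, density (1/2) e^{-|t|}

# Orthonormal Fourier-type basis on the torus (stand-in for a wavelet basis):
# alternating sqrt(2) cos(2 pi f t) and sqrt(2) sin(2 pi f t).
freqs = (ks + 1) // 2
basis = np.where(
    (ks % 2 == 1)[:, None],
    np.sqrt(2.0) * np.cos(2.0 * np.pi * freqs[:, None] * t[None, :]),
    np.sqrt(2.0) * np.sin(2.0 * np.pi * freqs[:, None] * t[None, :]),
)
u_draw = (gamma[:, None] * xi[:, None] * basis).sum(axis=0)   # u = sum gamma_k xi_k x_k
```

Because the γ_k decay summably and the ξ_k have exponential tails, truncated draws are almost surely bounded functions with heavier-than-Gaussian coefficient fluctuations.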
30 Example: deconvolution with a Besov type prior. [Figure: coefficients ξ_k, weights γ_k, and sample draws u(t).] ‖·‖_{B^s_pp(T)} < ∞ µ₀-a.s. and µ₀ is a convex measure. The forward map is bounded and linear. The problem is well-posed. 8 M. Dashti, S. Harris, and A. M. Stuart. Besov priors for Bayesian inverse problems. In: Inverse Problems and Imaging 6.2 (2012).
31 Outline. (i) General theory of well-posed Bayesian inverse problems. (ii) Convex prior measures. (iii) Models for compressible parameters. (iv) Infinitely divisible prior measures.
32 Models for compressibility. A common problem in compressed sensing: u* = arg min_{v ∈ R^n} (1/2)‖Av − y‖₂² + θ‖v‖_p^p. For p = 1 the problem is convex; for p < 1 it is no longer convex but is a good model for compressibility. Bayesian analog: dµ^y/dΛ(v) ∝ exp(−(1/2)‖Av − y‖₂²) ∏_{j=1}^n exp(−θ|v_j|^p).
33 Models for compressibility. [Figure: p = 1 and p = 1/2.]
34 Models for compressibility. Symmetric generalized gamma prior for 0 < p, q ≤ 1: dµ₀/dΛ(v) ∝ ∏_{j=1}^n |v_j|^{p−1} exp(−|v_j|^q). Corresponding posterior: dµ^y/dΛ(v) ∝ exp(−(1/2)‖Av − y‖₂² − ‖v‖_q^q + Σ_{j=1}^n (p − 1) ln(|v_j|)). The maximizer is no longer well-defined. Perturbed variational analog for ε > 0: u*_ε = arg min_{v ∈ R^n} (1/2)‖Av − y‖₂² + ‖v‖_q^q − Σ_{j=1}^n (p − 1) ln(ε + |v_j|).
35 Models for compressibility. [Figure: reconstructions for p = 1/2, q = 1 and p = q = 1/2.]
36 Models for compressibility. SG(p, q, α) density on the real line: (q / (2αΓ(p/q))) |t/α|^{p−1} exp(−|t/α|^q) dΛ(t). Has bounded moments of all orders. SG(p, q, α) prior: extension to infinite dimensions (BH, 2016): Suppose X has an unconditional and normalized Schauder basis {x_k}. (a) Pick a fixed sequence {γ_k} ∈ ℓ². (b) Let {ξ_k} be an i.i.d. sequence of SG(p, q, α) random variables. (c) Take µ₀ to be the law of u ∼ Σ_{k=1}^∞ γ_k ξ_k x_k.
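One standard way to sample a density of this shape is a change of variables: if X ∼ Gamma(p/q, 1), then ±αX^{1/q} with a random sign has density proportional to |t/α|^{p−1} exp(−(|t|/α)^q). This is a sketch of that construction under the density form above, not code from the talk.

```python
import numpy as np

def sample_sg(p, q, alpha, size, rng):
    """Draw from the symmetric density proportional to
    |t/alpha|^(p-1) * exp(-(|t|/alpha)^q).
    If X ~ Gamma(p/q, 1) then alpha * X^(1/q) has the right magnitude
    law (check by substituting x = (t/alpha)^q); a random sign
    symmetrizes it."""
    x = rng.gamma(shape=p / q, scale=1.0, size=size)
    signs = rng.choice([-1.0, 1.0], size=size)
    return alpha * signs * x ** (1.0 / q)

rng = np.random.default_rng(4)
samples = sample_sg(p=0.5, q=0.5, alpha=1.0, size=200_000, rng=rng)
```

For p = q = 1/2 and α = 1 the magnitude is the square of an Exp(1) variable, so E|t| = 2, which the draws reproduce; all moments are finite, matching the slide's claim.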
37 Returning to deconvolution. Example: deconvolution with a SG(p, q, α) prior. Let {x_k} be the Fourier basis in L²(T). Define the Sobolev space H¹(T) := { w ∈ L²(T) : Σ_{k=1}^∞ (1 + k²)|⟨w, x_k⟩|² < ∞ }. The prior µ₀ is the law of u ∼ Σ_{k=1}^∞ γ_k ξ_k x_k, where the ξ_k are i.i.d. SG(p, q, α) random variables and γ_k = (1 + k²)^{−3/4}.
38 Example: deconvolution with a SG(p, q, α) prior. [Figure: coefficients ξ_k, weights γ_k, and sample draws u(t).] ‖·‖_{H¹(T)} < ∞ µ₀-a.s. The forward map is bounded and linear. The problem is well-posed.
39 Outline. (i) General theory of well-posed Bayesian inverse problems. (ii) Convex prior measures. (iii) Models for compressible parameters. (iv) Infinitely divisible prior measures.
40 Infinitely divisible priors. Definition: infinitely divisible measure (ID). A Radon probability measure ν on X is infinitely divisible if for each n ∈ N there exists a Radon probability measure ν^{1/n} so that ν = (ν^{1/n})^{∗n} (the n-fold convolution of ν^{1/n} with itself). Equivalently, ξ is ID if for any n ∈ N there exist i.i.d. random variables {ξ_k^{1/n}}, k = 1, …, n, so that ξ =^d Σ_{k=1}^n ξ_k^{1/n}. SG(p, q, α) priors are ID, as are Gaussian, Laplace, compound Poisson, Cauchy, Student's t, etc. ID measures have an interesting compressible behavior 9. 9 M. Unser and P. Tafti. An introduction to sparse stochastic processes. Cambridge University Press, Cambridge, 2013.
41 Deconvolution. Example: deconvolution with a compound Poisson prior. Let {x_k} be the Fourier basis in L²(T). µ₀ is the law of u ∼ Σ_{k=1}^∞ γ_k ξ_k x_k. The ξ_k are i.i.d. compound Poisson random variables, ξ_k ∼ Σ_{j=0}^{ν_k} η_{jk}, where the ν_k are i.i.d. Poisson random variables with rate b > 0 and the η_{jk} are i.i.d. unit normals. γ_k = (1 + k²)^{−3/4}. Note ξ_k = 0 with probability e^{−b}.
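The atom at zero is what makes truncated draws sparse: each coefficient vanishes with probability e^{−b}. A direct simulation of the coefficient law (the rate b and sample size are illustrative assumptions) confirms this.

```python
import numpy as np

def compound_poisson_coeffs(n_terms, b, rng):
    """xi_k = sum of nu_k i.i.d. unit normals with nu_k ~ Poisson(b);
    xi_k = 0 exactly when nu_k = 0, which happens with prob. exp(-b)."""
    nu = rng.poisson(b, size=n_terms)
    return np.array([rng.standard_normal(n).sum() for n in nu])

rng = np.random.default_rng(5)
b = 0.3
xi = compound_poisson_coeffs(10_000, b, rng)
frac_zero = np.mean(xi == 0.0)          # should be close to exp(-b) ~ 0.741
```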
42 Deconvolution. Example: deconvolution with a compound Poisson prior. [Figure: coefficients ξ_k, weights γ_k, and sample draws u(t).] Truncations are sparse in the strict sense. ‖·‖_{H¹(T)} < ∞ a.s. We have well-posedness.
43 Lévy–Khintchine. Recall the characteristic function of a measure µ on X: µ̂(ϱ) := ∫_X exp(iϱ(u)) dµ(u) for ϱ ∈ X*. Lévy–Khintchine representation of ID measures: A Radon probability measure on X is infinitely divisible if and only if there exist an element m ∈ X, a (positive definite) covariance operator Q : X* → X and a Lévy measure λ, so that µ̂(ϱ) = exp(ψ(ϱ)) with ψ(ϱ) = iϱ(m) [point mass] − (1/2)ϱ(Q(ϱ)) [Gaussian] + ∫_X (exp(iϱ(u)) − 1 − iϱ(u)1_{B_X}(u)) dλ(u) [compound Poisson]. Write µ = ID(m, Q, λ). If λ is a symmetric probability measure on X, then ID(m, Q, λ) = δ_m ∗ N(0, Q) ∗ compound Poisson.
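For the pure compound Poisson part on R with rate b and standard normal jumps, the Lévy–Khintchine exponent reduces to ψ(θ) = b(e^{−θ²/2} − 1), which can be checked against an empirical characteristic function (a sketch; b, θ, and the sample size are arbitrary choices). Since the jumps are normal, ξ given ν jumps is N(0, ν), so we can sample ξ = √ν · Z directly.

```python
import numpy as np

# Empirical check of the Levy-Khintchine formula on R for a compound
# Poisson variable xi with rate b and standard normal jumps:
#   E[exp(i theta xi)] = exp(b (exp(-theta^2 / 2) - 1)).
rng = np.random.default_rng(6)
b, n = 1.5, 200_000
nu = rng.poisson(b, size=n)
# Conditional on nu jumps, xi ~ N(0, nu): sample without an inner loop.
xi = np.sqrt(nu) * rng.standard_normal(n)

theta = 0.7
empirical = np.mean(np.exp(1j * theta * xi))
formula = np.exp(b * (np.exp(-theta ** 2 / 2.0) - 1.0))
```

The empirical characteristic function matches the closed-form exponent to within Monte Carlo error.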
44 Tail behavior of ID measures and well-posedness. The tail behavior of an ID measure is tied to the tail behavior of the Lévy measure λ. Moments of ID measures: Suppose µ = ID(m, Q, λ). If 0 < λ(X) < ∞ and ‖·‖_X < ∞ µ-a.s., then ‖·‖_X ∈ L^p(X, µ) whenever ‖·‖_X ∈ L^p(X, λ) for p ∈ [1, ∞). Well-posedness with ID priors (BH, 2016): Suppose µ₀ = ID(m, Q, λ) with 0 < λ(X) < ∞ and take Φ(u; y) = (1/2)‖G(u) − y‖²_Σ. If max{1, ‖·‖_X^p} ∈ L¹(X, λ) for p ∈ N and G is locally Lipschitz with ‖G(u)‖_Σ ≤ C max{1, ‖u‖_X^p}, then we have a well-posed Bayesian inverse problem.
45 Deconvolution once more. Example: deconvolution with a BV prior. Consider the deconvolution problem on T. Stochastic process u(t) for t ∈ (0, 1) defined via u(0) = 0, û_t(s) = exp( t ∫_R (exp(iξs) − 1) dν(ξ) ). Here ν is a symmetric measure with ∫_{|ξ| ≥ 1} |ξ| dν(ξ) < ∞. A pure jump Lévy process, similar to the Cauchy difference prior 10. 10 M. Markkanen et al. Cauchy difference priors for edge-preserving Bayesian inversion with an application to X-ray tomography. arXiv preprint.
46 Deconvolution once more. [Figure: two sample paths u(t).] Example: deconvolution with a BV prior. u has countably many jump discontinuities and ‖u‖_{BV(T)} < ∞ a.s. 11. µ₀ is the measure induced by u(t). BV is non-separable. The forward map is bounded and linear. Well-posed problem. 11 R. Cont and P. Tankov. Financial modelling with jump processes. Chapman & Hall/CRC Financial Mathematics Series. CRC Press, New York, 2004.
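The simplest pure-jump path of bounded variation is a compound Poisson process: finitely many jumps on (0, 1), piecewise constant in between, with total variation equal to the summed jump magnitudes. The jump rate and normal jump sizes below are illustrative assumptions, not the specific Lévy measure of the slide.

```python
import numpy as np

rng = np.random.default_rng(7)
rate = 20.0                              # expected number of jumps on (0, 1)
n_jumps = rng.poisson(rate)
jump_times = np.sort(rng.uniform(0.0, 1.0, size=n_jumps))
jump_sizes = rng.standard_normal(n_jumps)

t = np.linspace(0.0, 1.0, 1000)
# u(t) = sum of jumps up to time t; u(0) = 0, piecewise constant between jumps.
u_path = np.array([jump_sizes[jump_times <= s].sum() for s in t])

total_variation = np.abs(jump_sizes).sum()   # BV norm of the pure-jump path
```

The path's variation over any grid is bounded by the summed jump magnitudes, which is why such draws land in BV(T) almost surely.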
47 Closing remarks. Well-posedness can be achieved with relaxed conditions. Gaussians have serious limitations in terms of modelling. Many different priors to choose from.
48 Closing remarks: sampling. Random walk Metropolis–Hastings for self-decomposable priors. Randomize-then-optimize 12. Fast Gibbs sampler 13. [Figures: true signal and noisy measurements; posterior mean and posterior standard deviation for parameter dimensions n = 32 to 512.] 12 Z. Wang et al. Bayesian inverse problems with ℓ¹ priors: a Randomize-then-Optimize approach. arXiv preprint. 13 F. Lucka. Fast Gibbs sampling for high-dimensional Bayesian inversion. arXiv preprint.
49 Closing remarks. Analysis of priors: What constitutes compressibility? What is the support of the prior? Hierarchical priors. Modelling constraints.
50 Thank you. B. Hosseini. Well-posed Bayesian inverse problems with infinitely-divisible and heavy-tailed prior measures. arXiv preprint. B. Hosseini and N. Nigam. Well-posed Bayesian inverse problems: priors with exponential tails. arXiv preprint.
51 References
[1] C. Borell. Convex measures on locally convex spaces. In: Arkiv för Matematik 12.1 (1974).
[2] T. Bui-Thanh and O. Ghattas. A scalable algorithm for MAP estimators in Bayesian inverse problems with Besov priors. In: Inverse Problems and Imaging 9.1 (2015).
[3] R. Cont and P. Tankov. Financial modelling with jump processes. Chapman & Hall/CRC Financial Mathematics Series. CRC Press, New York, 2004.
[4] M. Dashti, S. Harris, and A. M. Stuart. Besov priors for Bayesian inverse problems. In: Inverse Problems and Imaging 6.2 (2012).
[5] B. Hosseini. Well-posed Bayesian inverse problems with infinitely-divisible and heavy-tailed prior measures. arXiv preprint.
[6] B. Hosseini and N. Nigam. Well-posed Bayesian inverse problems: priors with exponential tails. arXiv preprint.
[7] B. Hosseini and J. M. Stockie. Bayesian estimation of airborne fugitive emissions using a Gaussian plume model. In: Atmospheric Environment 141 (2016).
[8] B. Hosseini et al. A Bayesian approach for energy-based estimation of acoustic aberrations in high intensity focused ultrasound treatment. arXiv preprint.
[9] M. Lassas, E. Saksman, and S. Siltanen. Discretization-invariant Bayesian inversion and Besov space priors. In: Inverse Problems and Imaging 3.1 (2009).
[10] F. Lucka. Fast Gibbs sampling for high-dimensional Bayesian inversion. arXiv preprint.
[11] M. Markkanen et al. Cauchy difference priors for edge-preserving Bayesian inversion with an application to X-ray tomography. arXiv preprint.
[12] A. M. Stuart. Inverse problems: a Bayesian perspective. In: Acta Numerica 19 (2010).
[13] T. J. Sullivan. Well-posed Bayesian inverse problems and heavy-tailed stable Banach space priors. arXiv preprint.
[14] M. Unser and P. Tafti. An introduction to sparse stochastic processes. Cambridge University Press, Cambridge, 2013.
[15] Z. Wang et al. Bayesian inverse problems with ℓ¹ priors: a Randomize-then-Optimize approach. arXiv preprint.
52 Well-posedness. Minimal assumptions on Φ (BH, 2016) 14 15. The potential Φ : X × Y → R satisfies:
(L1) (Lower bound in u): There is a positive and non-decreasing function f₁ : R⁺ → [1, ∞) so that for all r > 0 there is a constant M(r) ∈ R such that for all u ∈ X and y ∈ Y with ‖y‖_Y < r, Φ(u; y) ≥ M − log(f₁(‖u‖_X)).
(L2) (Boundedness above): For all r > 0 there is a constant K(r) > 0 such that for all u ∈ X and y ∈ Y with max{‖u‖_X, ‖y‖_Y} < r, Φ(u; y) ≤ K.
(L3) (Continuity in u): For all r > 0 there exists a constant L(r) > 0 such that for all u₁, u₂ ∈ X and y ∈ Y with max{‖u₁‖_X, ‖u₂‖_X, ‖y‖_Y} < r, |Φ(u₁; y) − Φ(u₂; y)| ≤ L‖u₁ − u₂‖_X.
(L4) (Continuity in y): There is a positive and non-decreasing function f₂ : R⁺ → R⁺ so that for all r > 0 there is a constant C(r) ∈ R such that for all y₁, y₂ ∈ Y with max{‖y₁‖_Y, ‖y₂‖_Y} < r and all u ∈ X, |Φ(u; y₁) − Φ(u; y₂)| ≤ C f₂(‖u‖_X)‖y₁ − y₂‖_Y.
14 Sullivan, Well-posed Bayesian inverse problems and heavy-tailed stable Banach space priors. 15 Stuart, Inverse problems: a Bayesian perspective.
53 The case of additive noise models. Well-posedness with additive noise models: Consider the above additive noise model. In addition, let the forward map G satisfy the following conditions with a positive, non-decreasing and locally bounded function f:
(i) (Bounded) There is a constant C > 0 for which ‖G(u)‖_Σ ≤ C f(‖u‖_X) for all u ∈ X.
(ii) (Locally Lipschitz) For all r > 0 there is a constant K(r) > 0 so that for all u₁, u₂ ∈ X with max{‖u₁‖_X, ‖u₂‖_X} < r, ‖G(u₁) − G(u₂)‖_Σ ≤ K‖u₁ − u₂‖_X.
Then the problem of finding µ^y is well-posed if µ₀ is a Radon probability measure on X such that f(‖·‖_X) ∈ L¹(X, µ₀).
More informationCalibrating Environmental Engineering Models and Uncertainty Analysis
Models and Cornell University Oct 14, 2008 Project Team Christine Shoemaker, co-pi, Professor of Civil and works in applied optimization, co-pi Nikolai Blizniouk, PhD student in Operations Research now
More informationHolomorphic functions which preserve holomorphic semigroups
Holomorphic functions which preserve holomorphic semigroups University of Oxford London Mathematical Society Regional Meeting Birmingham, 15 September 2016 Heat equation u t = xu (x Ω R d, t 0), u(t, x)
More informationPROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS
PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please
More informationOutline Lecture 2 2(32)
Outline Lecture (3), Lecture Linear Regression and Classification it is our firm belief that an understanding of linear models is essential for understanding nonlinear ones Thomas Schön Division of Automatic
More informationLocal vs. Nonlocal Diffusions A Tale of Two Laplacians
Local vs. Nonlocal Diffusions A Tale of Two Laplacians Jinqiao Duan Dept of Applied Mathematics Illinois Institute of Technology Chicago duan@iit.edu Outline 1 Einstein & Wiener: The Local diffusion 2
More informationA Stochastic Collocation based. for Data Assimilation
A Stochastic Collocation based Kalman Filter (SCKF) for Data Assimilation Lingzao Zeng and Dongxiao Zhang University of Southern California August 11, 2009 Los Angeles Outline Introduction SCKF Algorithm
More informationTail negative dependence and its applications for aggregate loss modeling
Tail negative dependence and its applications for aggregate loss modeling Lei Hua Division of Statistics Oct 20, 2014, ISU L. Hua (NIU) 1/35 1 Motivation 2 Tail order Elliptical copula Extreme value copula
More informationDefault Priors and Effcient Posterior Computation in Bayesian
Default Priors and Effcient Posterior Computation in Bayesian Factor Analysis January 16, 2010 Presented by Eric Wang, Duke University Background and Motivation A Brief Review of Parameter Expansion Literature
More informationHierarchical prior models for scalable Bayesian Tomography
A. Mohammad-Djafari, Hierarchical prior models for scalable Bayesian Tomography, WSTL, January 25-27, 2016, Szeged, Hungary. 1/45. Hierarchical prior models for scalable Bayesian Tomography Ali Mohammad-Djafari
More informationOn Bayesian Computation
On Bayesian Computation Michael I. Jordan with Elaine Angelino, Maxim Rabinovich, Martin Wainwright and Yun Yang Previous Work: Information Constraints on Inference Minimize the minimax risk under constraints
More informationarxiv: v3 [math.st] 16 Jul 2013
Posterior Consistency for Bayesian Inverse Problems through Stability and Regression Results arxiv:30.40v3 [math.st] 6 Jul 03 Sebastian J. Vollmer Mathematics Institute, Zeeman Building, University of
More informationComputational statistics
Computational statistics Markov Chain Monte Carlo methods Thierry Denœux March 2017 Thierry Denœux Computational statistics March 2017 1 / 71 Contents of this chapter When a target density f can be evaluated
More informationContre-examples for Bayesian MAP restoration. Mila Nikolova
Contre-examples for Bayesian MAP restoration Mila Nikolova CMLA ENS de Cachan, 61 av. du Président Wilson, 94235 Cachan cedex (nikolova@cmla.ens-cachan.fr) Obergurgl, September 26 Outline 1. MAP estimators
More informationSparse Legendre expansions via l 1 minimization
Sparse Legendre expansions via l 1 minimization Rachel Ward, Courant Institute, NYU Joint work with Holger Rauhut, Hausdorff Center for Mathematics, Bonn, Germany. June 8, 2010 Outline Sparse recovery
More informationBayesian Regression Linear and Logistic Regression
When we want more than point estimates Bayesian Regression Linear and Logistic Regression Nicole Beckage Ordinary Least Squares Regression and Lasso Regression return only point estimates But what if we
More information9 Brownian Motion: Construction
9 Brownian Motion: Construction 9.1 Definition and Heuristics The central limit theorem states that the standard Gaussian distribution arises as the weak limit of the rescaled partial sums S n / p n of
More informationAn inverse elliptic problem of medical optics with experimental data
An inverse elliptic problem of medical optics with experimental data Jianzhong Su, Michael V. Klibanov, Yueming Liu, Zhijin Lin Natee Pantong and Hanli Liu Abstract A numerical method for an inverse problem
More informationLecture 5: GPs and Streaming regression
Lecture 5: GPs and Streaming regression Gaussian Processes Information gain Confidence intervals COMP-652 and ECSE-608, Lecture 5 - September 19, 2017 1 Recall: Non-parametric regression Input space X
More informationNonparametric Drift Estimation for Stochastic Differential Equations
Nonparametric Drift Estimation for Stochastic Differential Equations Gareth Roberts 1 Department of Statistics University of Warwick Brazilian Bayesian meeting, March 2010 Joint work with O. Papaspiliopoulos,
More informationStatistical regularization theory for Inverse Problems with Poisson data
Statistical regularization theory for Inverse Problems with Poisson data Frank Werner 1,2, joint with Thorsten Hohage 1 Statistical Inverse Problems in Biophysics Group Max Planck Institute for Biophysical
More informationCharacterization of dependence of multidimensional Lévy processes using Lévy copulas
Characterization of dependence of multidimensional Lévy processes using Lévy copulas Jan Kallsen Peter Tankov Abstract This paper suggests to use Lévy copulas to characterize the dependence among components
More informationExistence and Comparisons for BSDEs in general spaces
Existence and Comparisons for BSDEs in general spaces Samuel N. Cohen and Robert J. Elliott University of Adelaide and University of Calgary BFS 2010 S.N. Cohen, R.J. Elliott (Adelaide, Calgary) BSDEs
More informationSparsity Regularization
Sparsity Regularization Bangti Jin Course Inverse Problems & Imaging 1 / 41 Outline 1 Motivation: sparsity? 2 Mathematical preliminaries 3 l 1 solvers 2 / 41 problem setup finite-dimensional formulation
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistical Sciences! rsalakhu@cs.toronto.edu! h0p://www.cs.utoronto.ca/~rsalakhu/ Lecture 7 Approximate
More informationKarhunen-Loève Expansions of Lévy Processes
Karhunen-Loève Expansions of Lévy Processes Daniel Hackmann June 2016, Barcelona supported by the Austrian Science Fund (FWF), Project F5509-N26 Daniel Hackmann (JKU Linz) KLE s of Lévy Processes 0 / 40
More informationStatistical inference on Lévy processes
Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline
More informationICES REPORT March Tan Bui-Thanh And Mark Andrew Girolami
ICES REPORT 4- March 4 Solving Large-Scale Pde-Constrained Bayesian Inverse Problems With Riemann Manifold Hamiltonian Monte Carlo by Tan Bui-Thanh And Mark Andrew Girolami The Institute for Computational
More informationStochastic Partial Differential Equations with Levy Noise
Stochastic Partial Differential Equations with Levy Noise An Evolution Equation Approach S..PESZAT and J. ZABCZYK Institute of Mathematics, Polish Academy of Sciences' CAMBRIDGE UNIVERSITY PRESS Contents
More informationErgodicity in data assimilation methods
Ergodicity in data assimilation methods David Kelly Andy Majda Xin Tong Courant Institute New York University New York NY www.dtbkelly.com April 15, 2016 ETH Zurich David Kelly (CIMS) Data assimilation
More informationPoint spread function reconstruction from the image of a sharp edge
DOE/NV/5946--49 Point spread function reconstruction from the image of a sharp edge John Bardsley, Kevin Joyce, Aaron Luttman The University of Montana National Security Technologies LLC Montana Uncertainty
More informationLikelihood-free MCMC
Bayesian inference for stable distributions with applications in finance Department of Mathematics University of Leicester September 2, 2011 MSc project final presentation Outline 1 2 3 4 Classical Monte
More informationBayes spaces: use of improper priors and distances between densities
Bayes spaces: use of improper priors and distances between densities J. J. Egozcue 1, V. Pawlowsky-Glahn 2, R. Tolosana-Delgado 1, M. I. Ortego 1 and G. van den Boogaart 3 1 Universidad Politécnica de
More informationBayesian estimation of the discrepancy with misspecified parametric models
Bayesian estimation of the discrepancy with misspecified parametric models Pierpaolo De Blasi University of Torino & Collegio Carlo Alberto Bayesian Nonparametrics workshop ICERM, 17-21 September 2012
More informationState Space Representation of Gaussian Processes
State Space Representation of Gaussian Processes Simo Särkkä Department of Biomedical Engineering and Computational Science (BECS) Aalto University, Espoo, Finland June 12th, 2013 Simo Särkkä (Aalto University)
More informationResolving the White Noise Paradox in the Regularisation of Inverse Problems
1 / 32 Resolving the White Noise Paradox in the Regularisation of Inverse Problems Hanne Kekkonen joint work with Matti Lassas and Samuli Siltanen Department of Mathematics and Statistics University of
More informationAdvances and Applications in Perfect Sampling
and Applications in Perfect Sampling Ph.D. Dissertation Defense Ulrike Schneider advisor: Jem Corcoran May 8, 2003 Department of Applied Mathematics University of Colorado Outline Introduction (1) MCMC
More informationMultilevel Sequential 2 Monte Carlo for Bayesian Inverse Problems
Jonas Latz 1 Multilevel Sequential 2 Monte Carlo for Bayesian Inverse Problems Jonas Latz Technische Universität München Fakultät für Mathematik Lehrstuhl für Numerische Mathematik jonas.latz@tum.de November
More informationInfinitely divisible distributions and the Lévy-Khintchine formula
Infinitely divisible distributions and the Cornell University May 1, 2015 Some definitions Let X be a real-valued random variable with law µ X. Recall that X is said to be infinitely divisible if for every
More informationSPDES driven by Poisson Random Measures and their numerical Approximation
SPDES driven by Poisson Random Measures and their numerical Approximation Erika Hausenblas joint work with Thomas Dunst and Andreas Prohl University of Salzburg, Austria SPDES driven by Poisson Random
More informationHow much should we rely on Besov spaces as a framework for the mathematical study of images?
How much should we rely on Besov spaces as a framework for the mathematical study of images? C. Sinan Güntürk Princeton University, Program in Applied and Computational Mathematics Abstract Relations between
More informationOn the Bennett-Hoeffding inequality
On the Bennett-Hoeffding inequality of Iosif 1,2,3 1 Department of Mathematical Sciences Michigan Technological University 2 Supported by NSF grant DMS-0805946 3 Paper available at http://arxiv.org/abs/0902.4058
More informationTD 1: Hilbert Spaces and Applications
Université Paris-Dauphine Functional Analysis and PDEs Master MMD-MA 2017/2018 Generalities TD 1: Hilbert Spaces and Applications Exercise 1 (Generalized Parallelogram law). Let (H,, ) be a Hilbert space.
More informationLarge deviations and averaging for systems of slow fast stochastic reaction diffusion equations.
Large deviations and averaging for systems of slow fast stochastic reaction diffusion equations. Wenqing Hu. 1 (Joint work with Michael Salins 2, Konstantinos Spiliopoulos 3.) 1. Department of Mathematics
More informationEnKF and filter divergence
EnKF and filter divergence David Kelly Andrew Stuart Kody Law Courant Institute New York University New York, NY dtbkelly.com December 12, 2014 Applied and computational mathematics seminar, NIST. David
More informationPROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS
PROBABILITY: LIMIT THEOREMS II, SPRING 15. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please
More informationDesign of optimal Runge-Kutta methods
Design of optimal Runge-Kutta methods David I. Ketcheson King Abdullah University of Science & Technology (KAUST) D. Ketcheson (KAUST) 1 / 36 Acknowledgments Some parts of this are joint work with: Aron
More informationNumerical Methods for PDEs
Numerical Methods for PDEs Partial Differential Equations (Lecture 1, Week 1) Markus Schmuck Department of Mathematics and Maxwell Institute for Mathematical Sciences Heriot-Watt University, Edinburgh
More informationAsymptotics for posterior hazards
Asymptotics for posterior hazards Pierpaolo De Blasi University of Turin 10th August 2007, BNR Workshop, Isaac Newton Intitute, Cambridge, UK Joint work with Giovanni Peccati (Université Paris VI) and
More informationBayesian data analysis in practice: Three simple examples
Bayesian data analysis in practice: Three simple examples Martin P. Tingley Introduction These notes cover three examples I presented at Climatea on 5 October 0. Matlab code is available by request to
More informationNumerical Approximation of Phase Field Models
Numerical Approximation of Phase Field Models Lecture 2: Allen Cahn and Cahn Hilliard Equations with Smooth Potentials Robert Nürnberg Department of Mathematics Imperial College London TUM Summer School
More informationScalable algorithms for optimal experimental design for infinite-dimensional nonlinear Bayesian inverse problems
Scalable algorithms for optimal experimental design for infinite-dimensional nonlinear Bayesian inverse problems Alen Alexanderian (Math/NC State), Omar Ghattas (ICES/UT-Austin), Noémi Petra (Applied Math/UC
More informationPartial Differential Equations
Part II Partial Differential Equations Year 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2015 Paper 4, Section II 29E Partial Differential Equations 72 (a) Show that the Cauchy problem for u(x,
More informationConvergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit
Convergence of Square Root Ensemble Kalman Filters in the Large Ensemble Limit Evan Kwiatkowski, Jan Mandel University of Colorado Denver December 11, 2014 OUTLINE 2 Data Assimilation Bayesian Estimation
More informationBayesian Grouped Horseshoe Regression with Application to Additive Models
Bayesian Grouped Horseshoe Regression with Application to Additive Models Zemei Xu 1,2, Daniel F. Schmidt 1, Enes Makalic 1, Guoqi Qian 2, John L. Hopper 1 1 Centre for Epidemiology and Biostatistics,
More informationKernel Method: Data Analysis with Positive Definite Kernels
Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University
More informationLecture No 2 Degenerate Diffusion Free boundary problems
Lecture No 2 Degenerate Diffusion Free boundary problems Columbia University IAS summer program June, 2009 Outline We will discuss non-linear parabolic equations of slow diffusion. Our model is the porous
More informationOutline lecture 2 2(30)
Outline lecture 2 2(3), Lecture 2 Linear Regression it is our firm belief that an understanding of linear models is essential for understanding nonlinear ones Thomas Schön Division of Automatic Control
More informationRecall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm
Chapter 13 Radon Measures Recall that if X is a compact metric space, C(X), the space of continuous (real-valued) functions on X, is a Banach space with the norm (13.1) f = sup x X f(x). We want to identify
More information