Inverse problems in statistics
Inverse problems in statistics
Laurent Cavalier (Université Aix-Marseille 1, France)
YES, Eurandom, 10 October 2011
Table of contents
1) Inverse problems: Introduction; Linear inverse problems with random noise; SVD and sequence space model; Examples.
2) Adaptation and oracle inequalities: Regularization methods; Classes of functions; Rates of convergence; Adaptation; Oracle inequalities; Unbiased risk estimation (URE).
3) Risk hull method: Penalized empirical risk; Problems with URE; Risk hull method; Oracle inequality; Proof; Simulations.
Introduction
There exist many fields where inverse problems appear:
Astronomy (Hubble satellite).
Econometrics (instrumental variables).
Financial mathematics (model calibration).
Medical image processing (X-rays).
These are problems where we have indirect observations of an object (a function) that we want to reconstruct.
Inverse problems
Let $H$ and $G$ be Hilbert spaces, and let $A$ be a continuous linear operator from $H$ into $G$.
Given $g \in G$, find $f \in H$ such that $Af = g$.
Solving an inverse problem amounts to inverting the operator $A$.
If $A^{-1}$ is not continuous, the problem is called ill-posed: if one observes $g_\varepsilon$, a noisy version of $g$, then $f_\varepsilon = A^{-1} g_\varepsilon$ may be far from $f$.
Hence the importance of the notion of noise or error.
Linear inverse problems
Let $H$ and $G$ be two separable Hilbert spaces, and let $A$ be a known bounded linear operator from $H$ into $G$.
Consider the model
$$Y = Af + \varepsilon \xi,$$
where $Y$ is the observation, $f \in H$ is unknown, $A$ is a continuous linear operator from $H$ into $G$, $\xi$ is a white noise, and $\varepsilon$ corresponds to the noise level.
Goal: reconstruct (estimate) $f$ from the observation $Y$.
Projecting a white noise on any orthonormal basis $\{\psi_k\}$ gives a sequence of i.i.d. standard Gaussian random variables.
Singular value decomposition
A major property of compact operators is that they have a discrete spectrum.
Suppose $A^*A$ is a compact operator with a known basis of eigenfunctions in $H$: $A^*A \varphi_k = b_k^2 \varphi_k$.
Singular value decomposition (SVD) of $A$:
$$A \varphi_k = b_k \psi_k, \qquad A^* \psi_k = b_k \varphi_k,$$
where the $b_k > 0$ are the singular values, $\{\varphi_k\}$ is an o.n.b. of $H$, and $\{\psi_k\}$ is an o.n.b. of $G$.
A bounded compact linear operator between two Hilbert spaces may really be seen as an infinite matrix.
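As a finite-dimensional illustration (not from the slides), the SVD of an ordinary matrix satisfies exactly the relations above; a minimal numpy sketch:

```python
# Finite-dimensional analogue of the operator SVD (illustrative sketch):
# np.linalg.svd gives A v_k = b_k u_k and A^T u_k = b_k v_k,
# hence A^T A v_k = b_k^2 v_k, mirroring A*A phi_k = b_k^2 phi_k.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))

U, b, Vt = np.linalg.svd(A, full_matrices=False)  # b: singular values, decreasing

v1, u1, b1 = Vt[0], U[:, 0], b[0]                 # first singular triple
assert np.allclose(A @ v1, b1 * u1)               # A phi_k = b_k psi_k
assert np.allclose(A.T @ u1, b1 * v1)             # A* psi_k = b_k phi_k
assert np.allclose(A.T @ A @ v1, b1**2 * v1)      # A*A phi_k = b_k^2 phi_k
```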
Projection on $\{\psi_k\}$
Projection of $Y$ on $\{\psi_k\}$:
$$\langle Y, \psi_k \rangle = \langle Af, \psi_k \rangle + \varepsilon \langle \xi, \psi_k \rangle = \langle f, A^* \psi_k \rangle + \varepsilon \langle \xi, \psi_k \rangle = b_k \langle f, \varphi_k \rangle + \varepsilon \xi_k,$$
where $\{\xi_k\}$ is an i.i.d. standard Gaussian sequence, obtained by projecting the white noise $\xi$ on the o.n.b. $\{\psi_k\}$.
Sequence space model
Equivalent sequence space model:
$$y_k = b_k \theta_k + \varepsilon \xi_k, \quad k = 1, 2, \ldots,$$
where the $\{\theta_k\}$ are the coefficients of $f$, the $\xi_k \sim N(0,1)$ are i.i.d., and the $b_k \to 0$ are the singular values.
Estimate $\theta = \{\theta_k\}$ from the observations $\{y_k\}$. With the $L^2$ risk, this is equivalent to estimating $f$.
Remark that $b_k \to 0$ weakens the signal $\theta_k$: the problem is ill-posed.
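A minimal simulation sketch of the sequence space model, with illustrative (assumed) choices $\theta_k = k^{-2}$ and $b_k = k^{-1}$:

```python
# Sequence space model y_k = b_k theta_k + eps xi_k (illustrative parameters).
import numpy as np

rng = np.random.default_rng(1)
K, eps = 200, 0.01
k = np.arange(1, K + 1)
theta = k**-2.0                 # coefficients of f (assumed for illustration)
b = k**-1.0                     # singular values, b_k -> 0
y = b * theta + eps * rng.standard_normal(K)
# the signal part b_k*theta_k decays like k^-3 while the noise level stays eps,
# so high-frequency coefficients are quickly dominated by noise
```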
Inversion
We have to invert, in some sense, the operator $A$. Thus we obtain the model
$$X_k = b_k^{-1} y_k = \theta_k + \varepsilon \sigma_k \xi_k, \quad k = 1, 2, \ldots,$$
where $\sigma_k = b_k^{-1}$.
When the problem is ill-posed, the variance term grows to infinity.
In this model the aim is to estimate $\{\theta_k\}$ from $\{X_k\}$. When $k$ is large, the noise in $X_k$ may be very large, making the estimation difficult
(see Donoho (1995), Mair and Ruymgaart (1996), Johnstone (1999), and C. and Tsybakov (2002)).
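The noise amplification is easy to see numerically; a sketch with illustrative parameters $\theta_k = k^{-2}$, $b_k = k^{-1}$:

```python
# Naive inversion X_k = y_k / b_k amplifies the noise by sigma_k = 1/b_k.
import numpy as np

rng = np.random.default_rng(1)
K, eps = 200, 0.01
k = np.arange(1, K + 1)
theta, b = k**-2.0, k**-1.0
y = b * theta + eps * rng.standard_normal(K)

X = y / b                       # X_k = theta_k + eps*sigma_k*xi_k
sigma = 1.0 / b
print(eps * sigma[0], eps * sigma[-1])   # noise s.d.: 0.01 at k=1, 2.0 at k=200
```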
Difficulty of inverse problems
$\sigma_k \asymp 1$: direct problem.
$\sigma_k \asymp k^\beta$, $\beta > 0$: mildly ill-posed problem.
$\sigma_k \asymp \exp(\beta k)$, $\beta > 0$: severely ill-posed problem.
The parameter $\beta$ is called the degree of ill-posedness.
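A quick numeric comparison of the three regimes (assumed $\beta = 1$, $k \le 50$) shows how different the noise factors $\sigma_k$ become:

```python
# Noise factors sigma_k in the three regimes (illustrative beta = 1).
import numpy as np

k = np.arange(1, 51)
beta = 1.0
direct = np.ones_like(k, dtype=float)    # sigma_k ~ 1
mild = k**beta                           # sigma_k ~ k^beta
severe = np.exp(beta * k)                # sigma_k ~ exp(beta*k)
# at k = 50 the severely ill-posed regime is astronomically harder:
print(direct[-1], mild[-1], severe[-1])  # 1.0, 50.0, ~5e21
```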
Examples
There exist many examples of operators for which the SVD is known:
Standard Gaussian white noise (any basis).
Convolution (Fourier): blurred images.
Tomography (difficult basis): X-rays.
Instrumental variables (Fourier): econometrics.
Direct model
A very specific inverse problem since, in this case, the operator is $A = I$. However, most of the results on inverse problems will apply in this framework.
This model is often called a direct model, since we have at our disposal direct observations and not indirect ones.
In this case, the sequence space model may be obtained by projection on any orthonormal basis $\{\psi_k\}$.
Equivalence with nonparametric regression
The direct model is an idealized version of the standard nonparametric regression
$$Y_i = f(X_i) + \xi_i, \quad i = 1, \ldots, n,$$
where $(X_1, Y_1), \ldots, (X_n, Y_n)$ are observed (we may assume $X_i \in [0,1]$), $f$ is an unknown function in $L^2(0,1)$, and the $\xi_i$ are i.i.d. zero-mean Gaussian random variables with variance $\sigma^2$.
The two models are equivalent (Brown and Low (1996), Nussbaum (1996)).
The noise level is related to the number of observations by $\varepsilon \asymp 1/\sqrt{n}$.
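A small sketch of why $\varepsilon \asymp 1/\sqrt{n}$ (with an assumed regression function and basis function): an empirical coefficient computed from $n$ regression points carries noise of order $\sigma/\sqrt{n}$:

```python
# Empirical coefficient of Y on one o.n.b. function psi: noise level ~ sigma/sqrt(n).
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 10_000, 1.0
x = (np.arange(n) + 0.5) / n                   # equispaced design on [0,1]
f = np.sin(2 * np.pi * x)                      # regression function (assumed)
Y = f + sigma * rng.standard_normal(n)

psi = np.sqrt(2) * np.cos(2 * np.pi * x)       # one orthonormal basis function
theta_hat = np.mean(Y * psi)                   # empirical coefficient
# the true coefficient is 0 here; theta_hat fluctuates at scale sigma/sqrt(n) = 0.01
```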
Circular convolution
Deconvolution is perhaps the most well-known inverse problem. It is used in many applications such as econometrics, physics, astronomy, and medical image processing. For example, it corresponds to the problem of a blurred signal that one wants to recover from indirect data.
Consider the convolution operator
$$Af(t) = r * f(t) = \int_0^1 r(t - x) f(x)\, dx, \quad t \in [0,1],$$
where $r$ is a known 1-periodic symmetric real convolution kernel in $L^2[0,1]$. In this model, $A$ is a bounded self-adjoint linear operator from $L^2[0,1]$ to $L^2[0,1]$.
Hubble satellite
One famous example of a deconvolution inverse problem is the blurred images of the Hubble space telescope.
The Hubble satellite was launched into low-earth orbit, outside the disturbing atmosphere, in order to provide images.
Unfortunately, a manufacturing error in the main mirror was detected, causing severe spherical aberrations in the images.
Astronomers therefore employed inverse problem techniques (the Richardson-Lucy algorithm) to improve the blurred images (see Adorf (1995)).
[Figure: Images of the Hubble satellite]
[Figure: Blurred cameraman, panels (a) and (b)]
[Figure: Blurred kangaroo]
Convolution model
Define the model
$$Y(t) = r * f(t) + \varepsilon \xi(t), \quad t \in [0,1],$$
where $Y$ is observed, $f$ is an unknown periodic function in $L^2[0,1]$, and $\xi$ is a white noise on $L^2[0,1]$.
The SVD basis here is clearly the Fourier basis $\{\varphi_k(t)\}$.
Projecting on $\{\varphi_k(t)\}$, i.e. working in the Fourier domain, we obtain
$$y_k = b_k \theta_k + \varepsilon \xi_k,$$
where $b_k = \int r(x) \cos(2\pi k x)\, dx$ for even $k$, the $\theta_k$ are the Fourier coefficients of $f$, and the $\xi_k$ are i.i.d. $N(0,1)$.
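A minimal circular (de)convolution sketch on a grid, with a kernel defined through assumed Fourier coefficients $\hat r_k \asymp k^{-2}$ (a mildly ill-posed case). In the noiseless setting, dividing by $\hat r_k$ inverts the convolution exactly; with noise, the same division amplifies it:

```python
# Circular convolution and naive Fourier inversion (noiseless sketch).
import numpy as np

n = 256
freq = np.fft.fftfreq(n) * n                      # integer frequencies
r_hat = 1.0 / (1.0 + freq**2)                     # assumed kernel: |r_hat_k| ~ k^-2

t = np.arange(n) / n
f = np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)

g = np.real(np.fft.ifft(r_hat * np.fft.fft(f)))   # g = r * f, via Fourier coefficients
f_rec = np.real(np.fft.ifft(np.fft.fft(g) / r_hat))  # naive inversion

assert np.allclose(f_rec, f)   # exact here; with observation noise eps*xi, the
# division would multiply the noise at frequency k by about 1 + k^2
```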
Computerized tomography
In medical X-ray tomography one tries to obtain an image of the internal structure of an object. This image is characterized by a function $f$. However, there are no direct observations of $f$.
Suppose that one observes the attenuation of the X-rays. Denote by $I_0$ and $I_1$ the initial and final intensities, let $x$ be the position on a given line $L$, and let $\Delta I(x)$ be the attenuation over a small $\Delta x$. One then has
$$\Delta I(x) = -f(x) I(x) \Delta x.$$
From a mathematical point of view this corresponds to
$$\frac{I'(x)}{I(x)} = -f(x),$$
and then, by integration,
$$\log(I_1) - \log(I_0) = \log\left(\frac{I_1}{I_0}\right) = -\int_L f(x)\, dx.$$
Thus observing $I_1/I_0$ is equivalent to observing $\exp(-\int_L f(x)\, dx)$. By measuring the attenuation of X-rays, one observes a cross section of the body.
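A numeric sanity check of this derivation (illustrative attenuation coefficient, Euler discretization): stepping $\Delta I = -f I \Delta x$ along the line reproduces $\log(I_1/I_0) = -\int_L f$:

```python
# Discrete attenuation along a line: dI = -f I dx  =>  log(I1/I0) = -integral of f.
import numpy as np

n = 100_000
dx = 1.0 / n
x = (np.arange(n) + 0.5) / n
f = 1.0 + x**2                           # assumed attenuation coefficient, integral 4/3

I0 = 1.0
I1 = I0 * np.prod(1.0 - f * dx)          # multiplicative Euler steps of dI = -f I dx
integral = np.sum(f) * dx                # midpoint rule for the line integral

assert abs(np.log(I1 / I0) + integral) < 1e-4
```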
[Figure: Tomography scan]
[Figure: Tomography reconstruction]
Radon transform
This problem corresponds to the reconstruction of an unknown function $f$ on $\mathbb{R}^d$ from observations of its Radon transform $Rf$, i.e. of its integrals over hyperplanes:
$$Rf(s, u) = \int_{v : \langle v, s \rangle = u} f(v)\, dv,$$
where $u \in [-1, +1]$ and $s \in S^{d-1} = \{v \in \mathbb{R}^d : \|v\| = 1\}$, the unit sphere in $\mathbb{R}^d$.
The Radon transform is a suitable tool for the problem of tomography, because $Rf(s, u)$ represents the integral of $f$ over the hyperplane $\{v \in \mathbb{R}^d : \langle v, s \rangle = u\}$.
Tomography model
The model is
$$Y(s, u) = Rf(s, u) + \varepsilon \xi(s, u), \quad s \in S^{d-1},\ u \in [-1, +1],$$
where $\xi$ is a white noise.
In this case, the Radon operator $R$ is linear, bounded, and compact.
The SVD basis is known for the Radon transform; however, it is very difficult to compute.
There exist many different models of tomography (X-ray tomography, positron emission tomography, attenuated tomography, tomography in quantum physics, and so on). The models are then very different, but all linked to the Radon operator.
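A crude pure-numpy sketch of noisy projection data in $d = 2$ (assumed pixel phantom; only the two axis-aligned directions, where the line integrals reduce to row and column sums):

```python
# Two projections of a pixel phantom plus white noise,
# as in Y(s, u) = Rf(s, u) + eps*xi(s, u).
import numpy as np

rng = np.random.default_rng(3)
img = np.zeros((64, 64))
img[24:40, 20:44] = 1.0                  # rectangular phantom (assumed)

proj_v = img.sum(axis=0)                 # integrals over vertical lines
proj_h = img.sum(axis=1)                 # integrals over horizontal lines
assert proj_v.sum() == proj_h.sum() == img.sum()   # each projection preserves mass

eps = 0.1
Y_v = proj_v + eps * rng.standard_normal(proj_v.shape)   # observed noisy projection
```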
Instrumental variables
An economic relationship between a response variable $Y$ and a vector $X$ of explanatory variables is represented by
$$Y_i = f(X_i) + U_i, \quad i = 1, \ldots, n,$$
where $f$ has to be estimated and the $U_i$ are the errors.
This model does not characterize the function $f$ if $U$ is not constrained. The problem is solved if $E(U \mid X) = 0$.
However, in many structural econometric models some components of $X$ are endogenous.
Suppose $Y$ denotes wages and $X$ includes, among other variables, the level of education. The error $U$ includes ability, which is not observed but influences wages. People of high ability tend to have a high level of education, so education and ability are correlated, and thus so are $X$ and $U$.
Nevertheless, suppose that we observe another set of data $W_i$, where $W$ is called an instrumental variable, for which
$$E(U \mid W) = E(Y - f(X) \mid W) = 0.$$
This equation characterizes $f$ through a Fredholm equation of the first kind: estimation of the function $f$ is in fact an ill-posed inverse problem.
This is not exactly our Gaussian white noise model, but it is closely related.
Since the 2000s, the framework of inverse problems has been the topic of many articles in the econometrics literature; see Florens (2003) and Hall and Horowitz (2005).
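A hedged linear special case ($f(x) = \beta x$, simulated data; not the slides' nonparametric setting) showing how the instrument repairs endogeneity: OLS is biased because $X$ correlates with $U$, while the IV estimator $\mathrm{Cov}(W,Y)/\mathrm{Cov}(W,X)$ recovers $\beta$:

```python
# Endogenous regressor vs. instrumental variable (linear illustrative sketch).
import numpy as np

rng = np.random.default_rng(4)
n, beta = 200_000, 2.0
W = rng.standard_normal(n)                        # instrument, independent of U
ability = rng.standard_normal(n)                  # unobserved ability
X = W + ability + 0.1 * rng.standard_normal(n)    # education: endogenous via ability
U = ability + 0.1 * rng.standard_normal(n)        # error contains ability
Y = beta * X + U                                  # wages

beta_ols = np.cov(X, Y)[0, 1] / np.cov(X, X)[0, 1]
beta_iv = np.cov(W, Y)[0, 1] / np.cov(W, X)[0, 1]
# beta_ols is biased upward (about 2.5 here), beta_iv is close to the true 2.0
```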
More informationInstrumental Variables Estimation and Other Inverse Problems in Econometrics. February Jean-Pierre Florens (TSE)
Instrumental Variables Estimation and Other Inverse Problems in Econometrics February 2011 Jean-Pierre Florens (TSE) 2 I - Introduction Econometric model: Relation between Y, Z and U Y, Z observable random
More informationEXAM MATHEMATICAL METHODS OF PHYSICS. TRACK ANALYSIS (Chapters I-V). Thursday, June 7th,
EXAM MATHEMATICAL METHODS OF PHYSICS TRACK ANALYSIS (Chapters I-V) Thursday, June 7th, 1-13 Students who are entitled to a lighter version of the exam may skip problems 1, 8-11 and 16 Consider the differential
More informationInverse Statistical Learning
Inverse Statistical Learning Minimax theory, adaptation and algorithm avec (par ordre d apparition) C. Marteau, M. Chichignoud, C. Brunet and S. Souchet Dijon, le 15 janvier 2014 Inverse Statistical Learning
More informationLPA-ICI Applications in Image Processing
LPA-ICI Applications in Image Processing Denoising Deblurring Derivative estimation Edge detection Inverse halftoning Denoising Consider z (x) =y (x)+η (x), wherey is noise-free image and η is noise. assume
More informationFeature Reconstruction in Tomography
Feature Reconstruction in Tomography Alfred K. Louis Institut für Angewandte Mathematik Universität des Saarlandes 66041 Saarbrücken http://www.num.uni-sb.de louis@num.uni-sb.de Wien, July 20, 2009 Images
More informationConvergence rates of spectral methods for statistical inverse learning problems
Convergence rates of spectral methods for statistical inverse learning problems G. Blanchard Universtität Potsdam UCL/Gatsby unit, 04/11/2015 Joint work with N. Mücke (U. Potsdam); N. Krämer (U. München)
More informationHypothesis Testing via Convex Optimization
Hypothesis Testing via Convex Optimization Arkadi Nemirovski Joint research with Alexander Goldenshluger Haifa University Anatoli Iouditski Grenoble University Information Theory, Learning and Big Data
More informationEcon 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines
Econ 2148, fall 2017 Gaussian process priors, reproducing kernel Hilbert spaces, and Splines Maximilian Kasy Department of Economics, Harvard University 1 / 37 Agenda 6 equivalent representations of the
More informationEndogeneity in non separable models. Application to treatment models where the outcomes are durations
Endogeneity in non separable models. Application to treatment models where the outcomes are durations J.P. Florens First Draft: December 2004 This version: January 2005 Preliminary and Incomplete Version
More informationRecent progress on the explicit inversion of geodesic X-ray transforms
Recent progress on the explicit inversion of geodesic X-ray transforms François Monard Department of Mathematics, University of Washington. Geometric Analysis and PDE seminar University of Cambridge, May
More informationON A CLASS OF GENERALIZED RADON TRANSFORMS AND ITS APPLICATION IN IMAGING SCIENCE
ON A CLASS OF GENERALIZED RADON TRANSFORMS AND ITS APPLICATION IN IMAGING SCIENCE T.T. TRUONG 1 AND M.K. NGUYEN 2 1 University of Cergy-Pontoise, LPTM CNRS UMR 889, F-9532, France e-mail: truong@u-cergy.fr
More informationMIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications. Class 19: Data Representation by Design
MIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications Class 19: Data Representation by Design What is data representation? Let X be a data-space X M (M) F (M) X A data representation
More informationFuncICA for time series pattern discovery
FuncICA for time series pattern discovery Nishant Mehta and Alexander Gray Georgia Institute of Technology The problem Given a set of inherently continuous time series (e.g. EEG) Find a set of patterns
More informationDiscussion of Regularization of Wavelets Approximations by A. Antoniadis and J. Fan
Discussion of Regularization of Wavelets Approximations by A. Antoniadis and J. Fan T. Tony Cai Department of Statistics The Wharton School University of Pennsylvania Professors Antoniadis and Fan are
More informationAn Introduction to Statistical Machine Learning - Theoretical Aspects -
An Introduction to Statistical Machine Learning - Theoretical Aspects - Samy Bengio bengio@idiap.ch Dalle Molle Institute for Perceptual Artificial Intelligence (IDIAP) CP 592, rue du Simplon 4 1920 Martigny,
More informationGeometric Modeling Summer Semester 2012 Linear Algebra & Function Spaces
Geometric Modeling Summer Semester 2012 Linear Algebra & Function Spaces (Recap) Announcement Room change: On Thursday, April 26th, room 024 is occupied. The lecture will be moved to room 021, E1 4 (the
More informationApproximation Theoretical Questions for SVMs
Ingo Steinwart LA-UR 07-7056 October 20, 2007 Statistical Learning Theory: an Overview Support Vector Machines Informal Description of the Learning Goal X space of input samples Y space of labels, usually
More informationInversions of ray transforms on simple surfaces
Inversions of ray transforms on simple surfaces François Monard Department of Mathematics, University of Washington. June 09, 2015 Institut Henri Poincaré - Program on Inverse problems 1 / 42 Outline 1
More informationInverse Theory. COST WaVaCS Winterschool Venice, February Stefan Buehler Luleå University of Technology Kiruna
Inverse Theory COST WaVaCS Winterschool Venice, February 2011 Stefan Buehler Luleå University of Technology Kiruna Overview Inversion 1 The Inverse Problem 2 Simple Minded Approach (Matrix Inversion) 3
More informationWhat is Image Deblurring?
What is Image Deblurring? When we use a camera, we want the recorded image to be a faithful representation of the scene that we see but every image is more or less blurry, depending on the circumstances.
More informationReproducing Kernel Hilbert Spaces
Reproducing Kernel Hilbert Spaces Lorenzo Rosasco 9.520 Class 03 February 9, 2011 About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert
More information1 Radon Transform and X-Ray CT
Radon Transform and X-Ray CT In this section we look at an important class of imaging problems, the inverse Radon transform. We have seen that X-ray CT (Computerized Tomography) involves the reconstruction
More informationRecovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm
Recovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm J. K. Pant, W.-S. Lu, and A. Antoniou University of Victoria August 25, 2011 Compressive Sensing 1 University
More informationApplied/Numerical Analysis Qualifying Exam
Applied/Numerical Analysis Qualifying Exam Cover Sheet Applied Analysis Part Policy on misprints: The qualifying exam committee tries to proofread exams as carefully as possible. Nevertheless, the exam
More informationThe Learning Problem and Regularization
9.520 Class 02 February 2011 Computational Learning Statistical Learning Theory Learning is viewed as a generalization/inference problem from usually small sets of high dimensional, noisy data. Learning
More informationNumerical Methods for geodesic X-ray transforms and applications to open theoretical questions
Numerical Methods for geodesic X-ray transforms and applications to open theoretical questions François Monard Department of Mathematics, University of Washington. Nov. 13, 2014 UW Numerical Analysis Research
More informationGaussian Processes. Le Song. Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012
Gaussian Processes Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 01 Pictorial view of embedding distribution Transform the entire distribution to expected features Feature space Feature
More informationFast learning rates for plug-in classifiers under the margin condition
Fast learning rates for plug-in classifiers under the margin condition Jean-Yves Audibert 1 Alexandre B. Tsybakov 2 1 Certis ParisTech - Ecole des Ponts, France 2 LPMA Université Pierre et Marie Curie,
More informationDirect Learning: Linear Classification. Donglin Zeng, Department of Biostatistics, University of North Carolina
Direct Learning: Linear Classification Logistic regression models for classification problem We consider two class problem: Y {0, 1}. The Bayes rule for the classification is I(P(Y = 1 X = x) > 1/2) so
More informationSupremum of simple stochastic processes
Subspace embeddings Daniel Hsu COMS 4772 1 Supremum of simple stochastic processes 2 Recap: JL lemma JL lemma. For any ε (0, 1/2), point set S R d of cardinality 16 ln n S = n, and k N such that k, there
More informationGAUSSIAN PROCESS REGRESSION
GAUSSIAN PROCESS REGRESSION CSE 515T Spring 2015 1. BACKGROUND The kernel trick again... The Kernel Trick Consider again the linear regression model: y(x) = φ(x) w + ε, with prior p(w) = N (w; 0, Σ). The
More informationECON 4160: Econometrics-Modelling and Systems Estimation Lecture 9: Multiple equation models II
ECON 4160: Econometrics-Modelling and Systems Estimation Lecture 9: Multiple equation models II Ragnar Nymoen Department of Economics University of Oslo 9 October 2018 The reference to this lecture is:
More informationOne Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017
One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 1 One Picture and a Thousand Words Using Matrix Approximations Dianne P. O Leary
More informationEE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6)
EE 367 / CS 448I Computational Imaging and Display Notes: Image Deconvolution (lecture 6) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement to the material discussed in
More informationECE531 Lecture 8: Non-Random Parameter Estimation
ECE531 Lecture 8: Non-Random Parameter Estimation D. Richard Brown III Worcester Polytechnic Institute 19-March-2009 Worcester Polytechnic Institute D. Richard Brown III 19-March-2009 1 / 25 Introduction
More informationLinear Diffusion and Image Processing. Outline
Outline Linear Diffusion and Image Processing Fourier Transform Convolution Image Restoration: Linear Filtering Diffusion Processes for Noise Filtering linear scale space theory Gauss-Laplace pyramid for
More informationA Brief Introduction to Medical Imaging. Outline
A Brief Introduction to Medical Imaging Outline General Goals Linear Imaging Systems An Example, The Pin Hole Camera Radiations and Their Interactions with Matter Coherent vs. Incoherent Imaging Length
More informationDS-GA 1002 Lecture notes 10 November 23, Linear models
DS-GA 2 Lecture notes November 23, 2 Linear functions Linear models A linear model encodes the assumption that two quantities are linearly related. Mathematically, this is characterized using linear functions.
More informationTHE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR
THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},
More informationAIR FORCE RESEARCH LABORATORY Directed Energy Directorate 3550 Aberdeen Ave SE AIR FORCE MATERIEL COMMAND KIRTLAND AIR FORCE BASE, NM
AFRL-DE-PS-JA-2007-1004 AFRL-DE-PS-JA-2007-1004 Noise Reduction in support-constrained multi-frame blind-deconvolution restorations as a function of the number of data frames and the support constraint
More informationCurve learning. p.1/35
Curve learning Gérard Biau UNIVERSITÉ MONTPELLIER II p.1/35 Summary The problem The mathematical model Functional classification 1. Fourier filtering 2. Wavelet filtering Applications p.2/35 The problem
More informationUser s Guide to Compressive Sensing
A Walter B. Richardson, Jr. University of Texas at San Antonio walter.richardson@utsa.edu Engineering November 18, 2011 Abstract During the past decade, Donoho, Candes, and others have developed a framework
More informationBlind Image Deconvolution Using The Sylvester Matrix
Blind Image Deconvolution Using The Sylvester Matrix by Nora Abdulla Alkhaldi A thesis submitted to the Department of Computer Science in conformity with the requirements for the degree of PhD Sheffield
More informationRegularization methods for large-scale, ill-posed, linear, discrete, inverse problems
Regularization methods for large-scale, ill-posed, linear, discrete, inverse problems Silvia Gazzola Dipartimento di Matematica - Università di Padova January 10, 2012 Seminario ex-studenti 2 Silvia Gazzola
More information3 Compact Operators, Generalized Inverse, Best- Approximate Solution
3 Compact Operators, Generalized Inverse, Best- Approximate Solution As we have already heard in the lecture a mathematical problem is well - posed in the sense of Hadamard if the following properties
More informationM. Holschneider 1 A. Eicker 2 R. Schachtschneider 1 T. Mayer-Guerr 2 K. Ilk 2
SPP project TREGMAT: Tailored Gravity Field Models for Mass Distributions and Mass Transport Phenomena in the Earth System by by M. 1 A. Eicker 2 R. Schachtschneider 1 T. Mayer-Guerr 2 K. Ilk 2 1 University
More informationAn Introduction to Parameter Estimation
Introduction Introduction to Econometrics An Introduction to Parameter Estimation This document combines several important econometric foundations and corresponds to other documents such as the Introduction
More informationTotal Least Squares Approach in Regression Methods
WDS'08 Proceedings of Contributed Papers, Part I, 88 93, 2008. ISBN 978-80-7378-065-4 MATFYZPRESS Total Least Squares Approach in Regression Methods M. Pešta Charles University, Faculty of Mathematics
More informationFunctional linear instrumental regression under second order stationarity.
Functional linear instrumental regression under second order stationarity. Jan Johannes May 13, 2008 Abstract We consider the problem of estimating the slope parameter in functional linear instrumental
More informationGI07/COMPM012: Mathematical Programming and Research Methods (Part 2) 2. Least Squares and Principal Components Analysis. Massimiliano Pontil
GI07/COMPM012: Mathematical Programming and Research Methods (Part 2) 2. Least Squares and Principal Components Analysis Massimiliano Pontil 1 Today s plan SVD and principal component analysis (PCA) Connection
More informationLecture 6: September 19
36-755: Advanced Statistical Theory I Fall 2016 Lecture 6: September 19 Lecturer: Alessandro Rinaldo Scribe: YJ Choe Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes have
More informationGeometric Modeling Summer Semester 2010 Mathematical Tools (1)
Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric
More informationIntroductory Econometrics
Based on the textbook by Wooldridge: : A Modern Approach Robert M. Kunst robert.kunst@univie.ac.at University of Vienna and Institute for Advanced Studies Vienna October 16, 2013 Outline Introduction Simple
More informationThe algorithm of noisy k-means
Noisy k-means The algorithm of noisy k-means Camille Brunet LAREMA Université d Angers 2 Boulevard Lavoisier, 49045 Angers Cedex, France Sébastien Loustau LAREMA Université d Angers 2 Boulevard Lavoisier,
More informationNonparametric Estimation of Distributions in a Large-p, Small-n Setting
Nonparametric Estimation of Distributions in a Large-p, Small-n Setting Jeffrey D. Hart Department of Statistics, Texas A&M University Current and Future Trends in Nonparametrics Columbia, South Carolina
More informationFoundations of Image Science
Foundations of Image Science Harrison H. Barrett Kyle J. Myers 2004 by John Wiley & Sons,, Hoboken, 0-471-15300-1 1 VECTORS AND OPERATORS 1 1.1 LINEAR VECTOR SPACES 2 1.1.1 Vector addition and scalar multiplication
More informationarxiv: v2 [math.st] 18 Oct 2018
Bayesian inverse problems with partial observations Shota Gugushvili a,, Aad W. van der Vaart a, Dong Yan a a Mathematical Institute, Faculty of Science, Leiden University, P.O. Box 9512, 2300 RA Leiden,
More informationTHIS work addresses a class of inverse problems that are
494 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 3, MARCH 2004 Stochastic Deconvolution Over Groups Birsen Yazici, Member, IEEE Abstract In this paper, we address a class of inverse problems that
More informationECO Class 6 Nonparametric Econometrics
ECO 523 - Class 6 Nonparametric Econometrics Carolina Caetano Contents 1 Nonparametric instrumental variable regression 1 2 Nonparametric Estimation of Average Treatment Effects 3 2.1 Asymptotic results................................
More informationReproducing Kernel Hilbert Spaces Class 03, 15 February 2006 Andrea Caponnetto
Reproducing Kernel Hilbert Spaces 9.520 Class 03, 15 February 2006 Andrea Caponnetto About this class Goal To introduce a particularly useful family of hypothesis spaces called Reproducing Kernel Hilbert
More informationThe Mathematics of Computerized Tomography
The Mathematics of Computerized Tomography The Mathematics of Computerized Tomography F. Natterer University of Münster Federal Republic of Germany B. G. TEUBNER Stuttgart @) JOHN WILEY & SONS Chichester.
More informationQuadrature Formula for Computed Tomography
Quadrature Formula for Computed Tomography orislav ojanov, Guergana Petrova August 13, 009 Abstract We give a bivariate analog of the Micchelli-Rivlin quadrature for computing the integral of a function
More informationStatistical inference on Lévy processes
Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline
More informationKernel-based Approximation. Methods using MATLAB. Gregory Fasshauer. Interdisciplinary Mathematical Sciences. Michael McCourt.
SINGAPORE SHANGHAI Vol TAIPEI - Interdisciplinary Mathematical Sciences 19 Kernel-based Approximation Methods using MATLAB Gregory Fasshauer Illinois Institute of Technology, USA Michael McCourt University
More informationInverse Theory Methods in Experimental Physics
Inverse Theory Methods in Experimental Physics Edward Sternin PHYS 5P10: 2018-02-26 Edward Sternin Inverse Theory Methods PHYS 5P10: 2018-02-26 1 / 29 1 Introduction Indirectly observed data A new research
More informationAbstract. 1 Introduction. Cointerpretation of Flow Rate-Pressure-Temperature Data from Permanent Downhole Gauges. Deconvolution. Breakpoint detection
Cointerpretation of Flow Rate-Pressure-Temperature Data from Permanent Downhole Gauges CS 229 Course Final Report Chuan Tian chuant@stanford.edu Yue Li yuel@stanford.edu Abstract This report documents
More informationGauge optimization and duality
1 / 54 Gauge optimization and duality Junfeng Yang Department of Mathematics Nanjing University Joint with Shiqian Ma, CUHK September, 2015 2 / 54 Outline Introduction Duality Lagrange duality Fenchel
More information