Towards Improved Sensitivity in Feature Extraction from Signals: one and two dimensional
Rosemary Renaut, Hongbin Guo, Jodi Mead and Wolfgang Stefan
Supported by the Arizona Alzheimer's Research Center and NIH
Arizona State University, August 2007
Renaut, Guo, Mead and Stefan (ASU) Signal Restoration Bulgaria 1 / 56
Outline
1. PET Images
2. Image Restoration: Optimization for Regularized Least Squares; Blind Deconvolution; Regularized TLS; A Newton Algorithm; Scaled Total Least Squares; Numerical Results
3. Examples
4. Chi-squared Method: Background; Algorithm; Single Variable Newton Method; Extension to General D (Generalized Tikhonov)
5. Conclusions and Future Work
Example of a typical PET scan
Typical PET images show:
- high noise content (non-Gaussian)
- heavy blurring
- partial volume effects
- reconstruction artifacts
Reconstruction uses filtered backprojection.
Long Term Goals/Methods of the Study
Improved sensitivity for identifying features in images:
- signal restoration in one dimension
- signal restoration in two dimensions
- deblurring
Use the information extracted from the images to assist with Alzheimer's disease diagnosis.
More About PET Images
Recall the blurring of standard PET images.
Blurring model, spatially invariant blur: g = f * h + n, where
- g is the recorded image
- h is the point spread function (PSF)
- n is the noise
- f is the desired image
Inverse Problem
Goal: given g, find f. The problem is ill-posed; we cannot simply invert to find f. Regularization is needed:
f_hat = argmin_f { ||g - f * h||^2 + lambda R(f) }
R(f) is a regularization term which picks a best image according to desired properties.
Difficulty: the restoration changes the contrast, which complicates quantification from the images.
Benefit: may show anatomical change without quantification; it is important to identify edges.
Difficulty: what is lambda?
Regularization Methods (short overview)
Common methods are:
- Tikhonov (TK): R(f) = TK(f) = int_Omega |grad f(x)|^2 dx
- Total Variation (TV): R(f) = TV(f) = int_Omega |grad f(x)| dx
- Sparse deconvolution (L1, not relevant for PET images): R(f) = ||f||_1 = int_Omega |f(x)| dx
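A minimal sketch of the three penalties in their discrete 1-D form, using forward differences for the gradient (the test signal is illustrative):

```python
import numpy as np

f = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])  # a piecewise-constant signal

def tk(f):
    # Tikhonov: sum of squared forward differences
    return np.sum(np.diff(f) ** 2)

def tv(f):
    # Total variation: sum of absolute forward differences
    return np.sum(np.abs(np.diff(f)))

def l1(f):
    # Sparsity penalty: sum of absolute signal values
    return np.sum(np.abs(f))

print(tk(f), tv(f), l1(f))  # 2.0 2.0 3.0
```

Note that TK and TV agree on this unit-height example; they differ once jumps have height other than one, which is why TV tolerates sharp edges and TK penalizes them quadratically.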
Optimization
The data-fit term of the objective function is convex.
TK gives a linear least squares (LS) problem:
f_hat = argmin_f { ||g - Hf||^2 + lambda ||f||_2^2 }
The TV objective function is non-differentiable:
J(f) = ||g - Hf||^2 + lambda TV(f)
Differentiability of TV in 1D (tensor product in 2D)
R(f) = sum_i |f_{i+1} - f_i|
For a small beta define
R_beta(f) = sum_i sqrt( (f_{i+1} - f_i)^2 + beta )
and choose beta in the range 10^-5 to 10^-9.
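The smoothed term R_beta can be sketched directly (the signal and beta value here are illustrative):

```python
import numpy as np

def tv_smooth(f, beta=1e-6):
    # Differentiable surrogate for TV: sqrt((f_{i+1}-f_i)^2 + beta), beta > 0
    d = np.diff(f)
    return np.sum(np.sqrt(d ** 2 + beta))

f = np.array([0.0, 1.0, 1.0, 0.0])
exact_tv = np.sum(np.abs(np.diff(f)))     # = 2.0
print(tv_smooth(f, 1e-9), exact_tv)       # nearly equal for small beta
```

As beta shrinks, R_beta approaches TV from above, at the cost of an increasingly ill-conditioned gradient near flat regions; hence the recommended range rather than beta = 0.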
Parameter Dependence: characteristics of the solution
f_hat = argmin_f { ||g - f * h||^2 + lambda R(f) }
lambda governs the trade-off between the fit to the data and the smoothness of the reconstruction, and can be picked by the L-curve approach (see Hansen, Inverse Problems).
- TK yields a smooth reconstruction, and depends only on lambda.
- TV yields a piecewise constant reconstruction and preserves the edges of the image, but depends on both lambda and beta.
- L1 yields spike trains, and also depends on lambda and beta.
Comparing TV and TK Regularization
Numerical optimization scheme to handle the regularization
To find the minimum we use limited memory BFGS (see Vogel, Computational Methods for Inverse Problems, and Nocedal):
- a quasi-Newton method in which the estimated Hessian is updated at each step by a rank-2 update;
- only a limited number of update vectors are kept, e.g. 10;
- evaluation of the objective function and its gradient is cheap (some FFTs and sparse matrix-vector multiplications);
- the problems are usually large and many iterations are needed.
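A hedged 1-D sketch of this setup using scipy's limited-memory BFGS (L-BFGS-B with 10 stored update vectors); the signal, PSF width, lambda, and beta are illustrative choices, not the talk's values:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 64
f_true = np.zeros(n); f_true[20:40] = 1.0            # piecewise-constant truth
x = np.arange(-7, 8)
h = np.exp(-x**2 / 8.0); h /= h.sum()                # symmetric Gaussian PSF
g = np.convolve(f_true, h, mode="same") + 0.01 * rng.standard_normal(n)

lam, beta = 1e-2, 1e-6

def objective(f):
    # ||h*f - g||^2 + lam * smoothed TV
    r = np.convolve(f, h, mode="same") - g
    d = np.diff(f)
    return r @ r + lam * np.sum(np.sqrt(d**2 + beta))

def gradient(f):
    r = np.convolve(f, h, mode="same") - g
    grad = 2.0 * np.convolve(r, h, mode="same")      # H^T r (H symmetric here)
    s = np.diff(f) / np.sqrt(np.diff(f)**2 + beta)   # gradient of R_beta
    grad[:-1] -= lam * s
    grad[1:] += lam * s
    return grad

res = minimize(objective, g, jac=gradient, method="L-BFGS-B",
               options={"maxcor": 10})               # keep 10 update vectors
print(objective(g), "->", res.fun)                   # objective decreases
```

Starting from the blurred data g itself is a common cheap initialization; the symmetric odd-length kernel makes H symmetric under 'same'-mode convolution, so the adjoint is again a convolution with h.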
Point Spread Function
- The PSF is usually unknown, or only estimated.
- Estimates exist for PET scanners from phantom scans.
- In reality the PSF is spatially variant and also depends on the scanned object.
- Hence even a provided PSF is always only an estimate.
- For the PET scans presented here, a 6 mm half-width Gaussian was assumed.
Simulated PET
On the left, simulated PET obtained by blurring a segmented MRI scan with a Gaussian PSF and adding noise. On the right, deblurred PET using TV regularization and the known PSF.
Recovering a real PET image
- Reconstruction done using filtered backprojection
- PSF estimated by a Gaussian
- TV regularization
Observations
- Image improvement is possible even with a rough estimate of the PSF (non-blind deconvolution).
- Total variation regularization (piecewise constant solution) is appropriate: intensity levels depend on the tissue type.
- Further improvement requires a better approximation of the PSF: blind deconvolution? total least squares?
- Artifacts and noise increase (more post-processing can improve this).
Blind deconvolution
h is usually unknown. Blind deconvolution solves
min_{f,h} ||f * h - g||^2 + lambda_1 ||L_1 (f - f_0)||^{p_1} + lambda_2 ||L_2 (h - h_0)||^{p_2}
- Very limited uniqueness results exist, for p_1 = p_2 = 2 and L_1 = L_2 = I, e.g. by Scherzer and Justen.
- For general L and p there is no uniqueness.
- In practical applications p = 1 is often better (Stefan).
- In practical applications we should know something about h.
Total least squares (TLS): when we know something about the PSF
Rewrite the convolution as a matrix-vector product: g = Hf + n, where H is a Toeplitz matrix built from the point spread function.
TLS assumes error in both H and g, i.e. g = (H + E)f + n.
The total least squares solution f_TLS solves
min ||[E, n]||_F^2 subject to g = (H + E)f + n
History: Total Least Squares
Statistics: errors-in-variables regression (Gleser, 1981); measurement error models (Fuller, 1987); orthogonal distance regression (Adcock, 1878); generalized least squares (Madansky, 1959, ...)
Numerical analysis: Golub & Van Loan
System identification: global linear least squares (Staar, 1981); eigenvector method (Levin, 1964); Koopmans-Levin method (Fernando & Nicholson, 1985); compensated least squares (Stoica & Soderstrom, 1982)
Signal processing: minimum norm method (Kumaresan & Tufts, 1983)
min ||(E, f)||_F subject to (A + E)x = b + f
Classical algorithm (Golub, Reinsch, Van Loan):
Solve [A_hat, b_hat] [x_TLS; -1] = 0, with A_hat = A + E, b_hat = b + f.
SVD solution: use the SVD of the augmented matrix [A, b],
x_TLS = -(1 / v_{n+1,n+1}) (v_{1,n+1}, ..., v_{n,n+1})^T
Direct method: compute the SVD and solve.
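The classical SVD solution can be sketched in a few lines (the test problem and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
# Perturb both A and b, the situation TLS is designed for
A_noisy = A + 0.01 * rng.standard_normal(A.shape)
b_noisy = b + 0.01 * rng.standard_normal(b.shape)

def tls(A, b):
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                 # right singular vector of the smallest sv
    return -v[:n] / v[n]       # x_TLS = -(1/v_{n+1,n+1}) v_{1:n,n+1}

x_tls = tls(A_noisy, b_noisy)
print(x_tls)                   # close to x_true
```

The last right singular vector of [A, b] spans the null space of the corrected system [A_hat, b_hat], and normalizing its last component to -1 recovers x_TLS.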
Rayleigh Quotient Formulation for TLS
An iterative approach:
x_TLS = argmin_x phi(x) = argmin_x ||Ax - b||^2 / (1 + ||x||^2)
phi is the Rayleigh quotient of the matrix [A, b]: TLS minimizes the sum of squared normalized residuals.
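A hedged numerical check of this characterization: the SVD solution minimizes phi, and the minimum value equals the smallest singular value squared of [A, b] (random data, illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((15, 2))
b = rng.standard_normal(15)

def phi(x):
    # Rayleigh-quotient form of the TLS objective
    r = A @ x - b
    return (r @ r) / (1.0 + x @ x)

U, s, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
x_tls = -v[:2] / v[2]
print(phi(x_tls), s[-1] ** 2)   # the two values agree
```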
Tikhonov TLS Regularization
Stabilize TLS with a realistic constraint on the solution, ||Lx|| <= delta. Here delta is prescribed from known information on the solution, and L is typically a differential operator.
Reformulation with a Lagrange multiplier: if the constraint is active, the solution x* satisfies the normal equations
(A^T A + lambda_I I + lambda_L L^T L) x* = A^T b,  (x*)^T L^T L x* - delta^2 = 0,
lambda_I = -||Ax* - b||^2 / (1 + ||x*||^2),  lambda_L = lambda (1 + ||x*||^2),
lambda = (1/delta^2) ( b^T (Ax* - b) / (1 + ||x*||^2) + ||Ax* - b||^2 / (1 + ||x*||^2)^2 )
(Golub, Hansen and O'Leary). The solution technique is parameter dependent: lambda_L, lambda_I, delta, lambda.
Rayleigh Quotient Formulation for RTLS
Recall x_TLS = argmin_x phi(x) = argmin_x ||Ax - b||^2 / (1 + ||x||^2).
Formulate the regularization for TLS in Rayleigh quotient form:
min_x phi(x) subject to ||Lx|| <= delta
Augmented Lagrangian: L(x, lambda) = phi(x) + lambda ( ||Lx||^2 - delta^2 ).
Eigenvalue Problem RTLS: Renaut & Guo (SIAM 2004)
B(x*) (x*; -1) = -lambda_I (x*; -1),
B(x*) = [ A^T A + lambda_L(x*) L^T L,  A^T b ;  b^T A,  b^T b - lambda_L(x*) gamma ],
lambda_L = lambda (1 + ||x||^2).
Seek the minimal eigenpair of B; note that B is not constant.
Specify the constraint ||Lx||^2 <= delta^2 and use gamma = delta^2 or gamma = ||Lx||^2.
This is a parameter dependent eigenvalue problem.
Development
At a solution the constraint is active: measure the normalized discrepancy
g(x) = ( ||Lx||^2 - delta^2 ) / ( 1 + ||x||^2 )
Denote B(theta) = M + theta N, with N = [ L^T L, 0 ; 0, -delta^2 ].
Denote the eigenpair for the smallest eigenvalue of B(theta) by rho_{n+1}, (x_theta^T, -1)^T.
Reformulation: for a constant delta, find a theta such that g(x_theta) = 0, via a Newton iteration.
The Update Equation for theta = lambda_L
lambda_L^{(k+1)} = lambda_L^{(k)} + iota^{(k)} (lambda_L^{(k)} / delta^2) g(x_{lambda_L}^{(k)}),
with 0 < iota^{(k)} <= 1 chosen such that g(x_{lambda_L}^{(k+1)}) g(x_{lambda_L}^{(0)}) > 0.
Theorem: given lambda_L^{(0)} > 0, the iteration converges to the unique solution lambda_L*.
Observe: B(lambda_L^{(k)}) is always positive definite. For 0 < lambda_L^{(0)} < theta_c the iteration is monotonically increasing or decreasing, depending on whether lambda_L^{(0)} is greater or less than lambda_L*.
EXACT RTLS: alternating iteration on lambda_L and x
Algorithm: given delta and lambda_L^{(0)} > 0, calculate the eigenpair (rho_{n+1}^{(0)}, x^{(0)}). Set k = 0. Update lambda_L^{(k)} and x^{(k)} until convergence:
1. While not converged, set iota^{(k)} = 1.
   Inner iteration, until g(x^{(k+1)}) g(x^{(0)}) > 0:
   a. Update lambda_L^{(k+1)}.
   b. Calculate the smallest eigenvalue rho_{n+1}^{(k)} of the matrix B(lambda_L^{(k)}) and the corresponding eigenvector [(x^{(k+1)})^T, -1]^T.
   c. If g(x^{(k+1)}) g(x^{(0)}) < 0, set iota^{(k)} = iota^{(k)} / 2; else break.
2. Test for convergence. If converged, break; else set k = k + 1. End do.
x_RTLS = x^{(k)}.
Computational Considerations: Generalized SVD
Choice of gamma: the theory is developed for gamma = delta^2, but the algorithm can use gamma = ||L x^{(k)}||^2; if blow-up does not occur, the algorithm then converges more quickly.
All approaches need to solve the normal equations J w = f:
(A^T A + lambda_L L^T L) w = f
GSVD: for the matrix pair (A, L), about 2 m^2 n + 15 n^3 flops:
A = U [ Sigma, 0 ; 0, I_{n-p} ] X^{-1},  L = V [ M, 0 ] X^{-1}
U (m x n) and V (p x p) have orthonormal columns; X (n x n) is nonsingular; Sigma and M are diagonal p x p with entries sigma_i and mu_i respectively. Then
[ Sigma^2 + lambda_L M^2, 0 ; 0, I_{n-p} ] X^{-1} w = X^T f
Efficient: after the one-time GSVD, a direct solution can be found in about 8 n^2 flops, solving triangular systems for each lambda_L.
Generalization: Scaled TLS
Theory (Paige and Strakos, Numerische Mathematik):
min ||[E, n]||_F^2 subject to g = (H + E)f + n / gamma
The minimum is obtained from the minimum singular value of [H, gamma g]. For flexibility, use the Rayleigh quotient formulation
min_f ||Hf - g||^2 / (1 + gamma^2 ||f||^2)
- gamma -> 0 gives the LS problem min_f ||Hf - g||_2^2
- gamma = 1 is the standard TLS problem
- gamma accounts for different noise levels in H and g
Scaled Total Least Squares with Regularization
Regularize the scaled Rayleigh quotient:
min_f ||Hf - g||_2^2 / (1 + gamma^2 ||f||^2) + lambda R(f)
- Permits careful investigation of the effect of the noise levels in H and g: which is greater, the error in the PSF or the error in the measured data?
- Can be solved using linear algebra methods, but only with TK regularization; otherwise use BFGS methods.
Test Problem: Noisy Shepp-Logan Phantom
Test Problem: Noisy Shepp-Logan Phantom
- Blur with the Gaussian h(x) = (1 / (sigma sqrt(2 pi))) e^{-x^2 / (2 sigma^2)}, with sigma = 1.5 (6 mm half-width).
- Take the forward Radon transform with 45 angles.
- Add Poisson noise to the sinogram.
- Transform back with filtered backprojection.
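A hedged sketch of the blur-plus-Poisson part of this pipeline; the Radon/FBP steps are omitted (they would need, e.g., scikit-image's radon/iradon), and a simple disc stands in for the actual Shepp-Logan phantom. The count scale is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

n = 64
yy, xx = np.mgrid[:n, :n]
phantom = ((xx - n/2)**2 + (yy - n/2)**2 < (n/4)**2).astype(float)  # toy phantom

blurred = gaussian_filter(phantom, sigma=1.5)      # Gaussian PSF, sigma = 1.5
counts = 200.0                                     # assumed mean photon count
rng = np.random.default_rng(3)
noisy = rng.poisson(blurred * counts) / counts     # Poisson (photon) noise

print(noisy.shape, noisy.max())
```

Scaling by a count level before drawing Poisson samples models signal-dependent photon noise, which is why the PET noise is described as non-Gaussian.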
Deconvolving the Shepp-Logan Phantom
Gaussian PSF with sigma = 2 and TV regularization, shown for gamma^2 = 0, gamma^2 = 7.3e-9, and gamma^2 = 3.7e-7.
Scaling shows how to reduce the impact of a badly chosen PSF: after scaling so that ||Hf|| = ||g|| = 1 (notice, for the given image, ||g||), the corresponding gamma^2 values for the scaled problem are 1 and 50, respectively.
Real PET data
Numerical Results: Scaled RTVTLS
Use a 6 mm half-width Gaussian PSF and then restore, for:
- gamma = 0 (least squares)
- gamma = 3.7e-7 (total least squares)
- gamma = 1.5e-5
- gamma = 1e-4
Observations
- RTLS with TV handles inexact PSFs better than simple RLS.
- Scaled RTVTLS with the parameter gamma in
  min_f ||Hf - g||_2^2 / (1 + gamma^2 ||f||^2) + lambda R(f)
  allows further tuning in the case of an unknown PSF.
- The iterations are expensive.
- All methods are very parameter dependent.
Example: regularization parameter by the L-curve
Notice that the optimal lambda given by the L-curve is similar for LS and scaled TLS, but the corner of the L-curve is not well defined.
The L-curve
Calculating the L-curve is expensive. Other methods exist for finding the regularization parameter: GCV, UPRE. A new method: use statistical information.
Regularized Least Squares: using statistical information
Rewrite regularized weighted least squares (Tikhonov):
x_hat = argmin J(x) = argmin { ||Ax - b||^2_{W_b} + ||x - x_0||^2_{W_x} }   (1)
Typically W_b is the inverse covariance matrix for the data b.
Generalized Tikhonov:
x_hat = argmin J(x) = argmin { ||Ax - b||^2_{W_b} + ||D(x - x_0)||^2_{W_x} }   (2)
Assume N(A) and N(D) intersect only trivially. Typically W_x = lambda^2 I, with lambda the unknown penalty parameter.
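For fixed weights, the generalized-Tikhonov minimizer follows from the normal equations (A^T W_b A + D^T W_x D) x = A^T W_b b + D^T W_x D x_0. A minimal sketch with illustrative data and weights:

```python
import numpy as np

def gen_tikhonov(A, b, D, W_b, W_x, x0):
    # Solve the weighted normal equations for the generalized Tikhonov problem
    lhs = A.T @ W_b @ A + D.T @ W_x @ D
    rhs = A.T @ W_b @ b + D.T @ W_x @ D @ x0
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 10))
x_true = np.ones(10)
b = A @ x_true + 0.01 * rng.standard_normal(30)
D = np.eye(10)                                  # standard-form penalty
x_hat = gen_tikhonov(A, b, D, np.eye(30), 1e-4 * np.eye(10), np.zeros(10))
print(np.round(x_hat, 2))                       # close to the all-ones x_true
```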
Standard Methods: Generalized Cross-Validation (GCV)
1. Estimates the predictive risk.
2. Uses no statistical information.
3. Minimize the GCV function
   ||b - A x(lambda)||_2^2 / [trace(I_m - A(W_x))]^2,  W_x = lambda^2 I_n
4. Expensive: a range of lambda values must be searched.
5. The GSVD makes the calculations more efficient.
6. A minimum is needed, but the function may have multiple minima or be nearly flat.
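A hedged sketch of evaluating the GCV function on a grid of lambda values via the SVD of A (standard-form Tikhonov, W_x = lambda^2 I); the ill-conditioned test matrix and grid are illustrative:

```python
import numpy as np

def gcv_curve(A, b, lambdas):
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    r0sq = b @ b - beta @ beta            # residual outside the range of A
    vals = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)        # Tikhonov filter factors
        num = np.sum(((1 - f) * beta) ** 2) + r0sq
        den = (A.shape[0] - np.sum(f)) ** 2
        vals.append(num / den)
    return np.array(vals)

rng = np.random.default_rng(7)
A = rng.standard_normal((40, 10)) @ np.diag(2.0 ** -np.arange(10))
b = A @ np.ones(10) + 0.01 * rng.standard_normal(40)
lams = np.logspace(-6, 1, 50)
g = gcv_curve(A, b, lams)
lam_gcv = lams[np.argmin(g)]
print(lam_gcv)
```

The grid search over lambda is exactly the expense noted above; the SVD is computed once and each grid point is then cheap.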
Standard Methods: Unbiased Predictive Risk Estimation (UPRE)
1. Minimize the expected value of the predictive risk.
2. Uses statistical information.
3. Minimize the UPRE function
   ||b - A x(lambda)||^2_{W_b} + 2 trace(A(W_x)) - m
4. Expensive: a range of lambda values must be searched.
5. The GSVD makes the calculations more efficient.
6. A minimum is needed.
phillips Fredholm integral equation (Hansen), 10% error
1. Add noise to b.
2. Standard deviation sigma_{b_i} = .01 b_i + .1 b_max.
3. Covariance matrix C_b = sigma_b^2 I_m.
4. sigma_b^2 is the average of the sigma_{b_i}^2.
Solutions, with uncertainty estimates shown, for the L-curve, GCV, UPRE and chi^2-curve methods.
phillips Fredholm integral equation (Hansen): comparison
blur: atmospheric test problem, Gaussian PSF (Hansen), again with noise
Solution on the left and degraded image on the right.
Solutions using x_0 = 0 and generalized Tikhonov with a second derivative operator, at 5% and 10% noise.
General Result: the Tikhonov cost functional is a chi-squared random variable
Theorem (Rao 1973; Tarantola; Mead 2007). Let
J(x) = (b - Ax)^T C_b^{-1} (b - Ax) + (x - x_0)^T C_x^{-1} (x - x_0),
where x and b are stochastic (they need not be normal), the elements of r = b - A x_0 are iid, and the matrices C_b and C_x are SPD (regarded as covariance matrices, but they need not be). Then, for large m, the minimum value of J is a random variable which follows a chi-squared distribution with m degrees of freedom.
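A hedged Monte Carlo check of this result in the Gaussian case: with x ~ N(x_0, C_x) and noise ~ N(0, C_b), the minimum of J equals r^T (A C_x A^T + C_b)^{-1} r with r = b - A x_0, which should have mean m and variance 2m. The dimensions, covariances, and sample count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 8, 4
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
# C_x = I, C_b = 0.5 I in this sketch
cov_inv = np.linalg.inv(A @ A.T + 0.5 * np.eye(m))

samples = []
for _ in range(20000):
    x = x0 + rng.standard_normal(n)                     # x ~ N(x0, C_x)
    b = A @ x + np.sqrt(0.5) * rng.standard_normal(m)   # noise ~ N(0, C_b)
    r = b - A @ x0
    samples.append(r @ cov_inv @ r)                     # = min J for this draw

print(np.mean(samples), np.var(samples))  # approximately m and 2m
```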
Implication: find W_x such that J is a chi-squared random variable
The theorem implies
m - sqrt(2m) z_{alpha/2} < J(x_hat) < m + sqrt(2m) z_{alpha/2}
for the confidence interval (1 - alpha). Equivalently,
m - sqrt(2m) z_{alpha/2} < r^T (A C_x A^T + C_b)^{-1} r < m + sqrt(2m) z_{alpha/2}.
Having found W_x, the posterior inverse covariance matrix is
W_x-tilde = A^T W_b A + W_x
Note that W_x is completely general.
A New Algorithm for Estimating the Model Covariance
Algorithm (Mead 07): given the confidence interval parameter alpha, the initial residual r = b - A x_0, and an estimate of the data covariance C_b, find L_x which solves the nonlinear optimization
minimize ||L_x L_x^T||_F^2
subject to m - sqrt(2m) z_{alpha/2} < r^T (A L_x L_x^T A^T + C_b)^{-1} r < m + sqrt(2m) z_{alpha/2},
with A L_x L_x^T A^T + C_b well-conditioned.
Implemented using Matlab's optimization routines: expensive.
Single Variable Approach
1. Seek an efficient and practical algorithm.
2. First consider the single variable case W_x = sigma_x^{-2} I; note the regularization parameter is lambda = 1/sigma_x.
3. Use the SVD U_b Sigma_b V_b^T = W_b^{1/2} A, with singular values sigma_1 >= sigma_2 >= ... >= sigma_p, and define s = U_b^T W_b^{1/2} r.
4. Find sigma_x such that
   m - sqrt(2m) z_{alpha/2} < s^T diag( 1 / (1 + sigma_x^2 sigma_i^2) ) s < m + sqrt(2m) z_{alpha/2}.
5. Equivalently, find sigma_x such that
   F(sigma_x) = s^T diag( 1 / (1 + sigma_x^2 sigma_i^2) ) s - m = 0.
Scalar root finding: Newton's method.
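A hedged sketch of this scalar root finding in the standard-form case, assuming x_0 = 0 and C_b = sigma_b^2 I (the safeguarded-Newton details and test problem are illustrative, not the talk's implementation):

```python
import numpy as np

def find_sigma_x(A, b, sigma_b):
    """Find sigma_x with F(sigma_x) = 0, assuming F(0) > 0 > F(inf)."""
    m = A.shape[0]
    U, sv, _ = np.linalg.svd(A / sigma_b, full_matrices=False)
    bt = b / sigma_b
    s = U.T @ bt
    rho0 = bt @ bt - s @ s            # residual energy outside range(A)

    def F(sig):                       # s^T diag(1/(1+sig^2 sv^2)) s + rho0 - m
        return np.sum(s**2 / (1.0 + sig**2 * sv**2)) + rho0 - m

    def dF(sig):                      # F'(sig) = -2 sig ||t||^2, monotone part
        w = 1.0 + sig**2 * sv**2
        return -2.0 * sig * np.sum(sv**2 * s**2 / w**2)

    lo, hi = 0.0, 1.0
    while F(hi) > 0:                  # expand bracket until F changes sign
        hi *= 2.0
    sig = 0.5 * (lo + hi)
    for _ in range(100):              # Newton with bisection safeguard
        new = sig - F(sig) / dF(sig)
        if not (lo < new < hi):
            new = 0.5 * (lo + hi)
        if F(new) > 0:
            lo = new
        else:
            hi = new
        if abs(F(new)) < 1e-12:
            return new
        sig = new
    return sig

rng = np.random.default_rng(6)
A = rng.standard_normal((50, 10))
b = A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(50)
sigma_x = find_sigma_x(A, b, 0.1)
print(sigma_x, 1.0 / sigma_x)         # sigma_x and lambda = 1/sigma_x
```

At the root, the minimum value of the weighted Tikhonov functional equals m, which is the chi-squared condition being enforced.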
First: Extension to Generalized Tikhonov
Define
x_hat_GTik = argmin J_D(x) = argmin { ||Ax - b||^2_{W_b} + ||D(x - x_0)||^2_{W_x} }   (3)
Theorem. For large m, the minimum value of J_D is a random variable which follows a chi-squared distribution with m - n + p degrees of freedom.
Proof: use the generalized singular value decomposition of the pair (W_b^{1/2} A, W_x^{1/2} D).
Find W_x so that J_D is chi-squared with m - n + p degrees of freedom.
Newton Root Finding
W_x = sigma_x^{-2} I_p. GSVD of the pair (W_b^{1/2} A, D):
W_b^{1/2} A = U [ Upsilon ; 0_{(m-n) x n} ] X^T,  D = V [ M, 0_{p x (n-p)} ] X^T,
with gamma_i the generalized singular values. Define
m-tilde = m - n + p - sum_{i=1}^{p} s_i^2 delta_{gamma_i,0} - sum_{i=n+1}^{m} s_i^2,
s-tilde_i = s_i / (gamma_i^2 sigma_x^2 + 1), i = 1, ..., p,  t_i = s-tilde_i gamma_i.
Solve F = 0, where F(sigma_x) = s-tilde^T s-tilde - m-tilde and F'(sigma_x) = -2 sigma_x ||t||_2^2.
Observations
- Initialization cost: GCV, UPRE, the L-curve and the chi^2 method all use the GSVD (or SVD).
- The algorithm is cheap compared to GCV, UPRE and the L-curve.
- F is monotonically decreasing and even.
- Either a solution exists and is unique for positive sigma, or no solution exists: F(0) < 0, insufficient degrees of freedom.
Some Generalized Tikhonov Solutions: first order derivative
Shown for: no statistical information; C_b a constant multiple of I; C_b = diag(sigma_{b_i}^2).
Some Generalized Tikhonov Solutions, x_0 = 0: exponential noise
Shown for: no statistical information; C_b a constant multiple of I; C_b = diag(sigma_{b_i}^2).
Conclusions
- The chi^2 Newton algorithm is cost effective: it converges in 5-10 iterations.
- It performs as well as (or better than) GCV and UPRE when statistical information is available.
- It should be the method of choice when statistical information is provided.
- The method can be adapted to find W_b if W_x is provided.
Future Work
- Further investigation of RTVTLS and its relation to RTVLS (also with scaling).
- Improve the efficiency of the algorithms (methods of Guo and Renaut).
- Extend the chi^2 method to RTLS schemes, to make the methods more efficient.
- Extend to use with TV regularization? The theory does not apply directly; what can be done?
Collaborators
- Hongbin Guo: total least squares
- Kewei Chen: for the data and discussions on PET imaging
- Wolfgang Stefan: total variation minimization
- Jodi Mead: parameter estimation
Supported by: Arizona Alzheimer's Research Center, NIH NIBIB and NSF
THANK YOU!
More informationRegularization parameter estimation for underdetermined problems by the χ 2 principle with application to 2D focusing gravity inversion
Regularization parameter estimation for underdetermined problems by the χ principle with application to D focusing gravity inversion Saeed Vatankhah, Rosemary A Renaut and Vahid E Ardestani, Institute
More informationc 1999 Society for Industrial and Applied Mathematics
SIAM J. MATRIX ANAL. APPL. Vol. 21, No. 1, pp. 185 194 c 1999 Society for Industrial and Applied Mathematics TIKHONOV REGULARIZATION AND TOTAL LEAST SQUARES GENE H. GOLUB, PER CHRISTIAN HANSEN, AND DIANNE
More informationThe Global Krylov subspace methods and Tikhonov regularization for image restoration
The Global Krylov subspace methods and Tikhonov regularization for image restoration Abderrahman BOUHAMIDI (joint work with Khalide Jbilou) Université du Littoral Côte d Opale LMPA, CALAIS-FRANCE bouhamidi@lmpa.univ-littoral.fr
More informationStatistical Geometry Processing Winter Semester 2011/2012
Statistical Geometry Processing Winter Semester 2011/2012 Linear Algebra, Function Spaces & Inverse Problems Vector and Function Spaces 3 Vectors vectors are arrows in space classically: 2 or 3 dim. Euclidian
More informationNumerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank
Numerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank Taewon Cho Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial
More informationInverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 2007 Technische Universiteit Eindh ove n University of Technology
Inverse problems Total Variation Regularization Mark van Kraaij Casa seminar 23 May 27 Introduction Fredholm first kind integral equation of convolution type in one space dimension: g(x) = 1 k(x x )f(x
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline
More informationLow Rank Approximation Lecture 7. Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL
Low Rank Approximation Lecture 7 Daniel Kressner Chair for Numerical Algorithms and HPC Institute of Mathematics, EPFL daniel.kressner@epfl.ch 1 Alternating least-squares / linear scheme General setting:
More informationDeconvolution. Parameter Estimation in Linear Inverse Problems
Image Parameter Estimation in Linear Inverse Problems Chair for Computer Aided Medical Procedures & Augmented Reality Department of Computer Science, TUM November 10, 2006 Contents A naive approach......with
More informationA MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS
A MODIFIED TSVD METHOD FOR DISCRETE ILL-POSED PROBLEMS SILVIA NOSCHESE AND LOTHAR REICHEL Abstract. Truncated singular value decomposition (TSVD) is a popular method for solving linear discrete ill-posed
More informationRegularization Parameter Estimation for Underdetermined problems by the χ 2 principle with application to 2D focusing gravity inversion
Regularization Parameter Estimation for Underdetermined problems by the χ principle with application to D focusing gravity inversion Saeed Vatankhah, Rosemary A Renaut and Vahid E Ardestani, Institute
More informationPARALLEL VARIABLE DISTRIBUTION FOR TOTAL LEAST SQUARES
PARALLEL VARIABLE DISTRIBUTION FOR TOTAL LEAST SQUARES HONGBIN GUO AND ROSEMARY A. RENAUT Abstract. A novel parallel method for determining an approximate total least squares (TLS) solution is introduced.
More informationKrylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration
Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration D. Calvetti a, B. Lewis b and L. Reichel c a Department of Mathematics, Case Western Reserve University,
More informationGolub-Kahan iterative bidiagonalization and determining the noise level in the data
Golub-Kahan iterative bidiagonalization and determining the noise level in the data Iveta Hnětynková,, Martin Plešinger,, Zdeněk Strakoš, * Charles University, Prague ** Academy of Sciences of the Czech
More informationTotal least squares. Gérard MEURANT. October, 2008
Total least squares Gérard MEURANT October, 2008 1 Introduction to total least squares 2 Approximation of the TLS secular equation 3 Numerical experiments Introduction to total least squares In least squares
More informationParameter estimation: A new approach to weighting a priori information
c de Gruyter 27 J. Inv. Ill-Posed Problems 5 (27), 2 DOI.55 / JIP.27.xxx Parameter estimation: A new approach to weighting a priori information J.L. Mead Communicated by Abstract. We propose a new approach
More informationA fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring
A fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring Marco Donatelli Dept. of Science and High Tecnology U. Insubria (Italy) Joint work with M. Hanke
More informationInverse Ill Posed Problems in Image Processing
Inverse Ill Posed Problems in Image Processing Image Deblurring I. Hnětynková 1,M.Plešinger 2,Z.Strakoš 3 hnetynko@karlin.mff.cuni.cz, martin.plesinger@tul.cz, strakos@cs.cas.cz 1,3 Faculty of Mathematics
More informationComputational Methods. Eigenvalues and Singular Values
Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations
More informationIPAM Summer School Optimization methods for machine learning. Jorge Nocedal
IPAM Summer School 2012 Tutorial on Optimization methods for machine learning Jorge Nocedal Northwestern University Overview 1. We discuss some characteristics of optimization problems arising in deep
More informationLinear Inverse Problems
Linear Inverse Problems Ajinkya Kadu Utrecht University, The Netherlands February 26, 2018 Outline Introduction Least-squares Reconstruction Methods Examples Summary Introduction 2 What are inverse problems?
More informationUncertainty Quantification for Inverse Problems. November 7, 2011
Uncertainty Quantification for Inverse Problems November 7, 2011 Outline UQ and inverse problems Review: least-squares Review: Gaussian Bayesian linear model Parametric reductions for IP Bias, variance
More informationAlgorithms for Constrained Optimization
1 / 42 Algorithms for Constrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University April 19, 2015 2 / 42 Outline 1. Convergence 2. Sequential quadratic
More informationRecovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm
Recovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm J. K. Pant, W.-S. Lu, and A. Antoniou University of Victoria August 25, 2011 Compressive Sensing 1 University
More informationECE295, Data Assimila0on and Inverse Problems, Spring 2015
ECE295, Data Assimila0on and Inverse Problems, Spring 2015 1 April, Intro; Linear discrete Inverse problems (Aster Ch 1 and 2) Slides 8 April, SVD (Aster ch 2 and 3) Slides 15 April, RegularizaFon (ch
More informationSolving linear equations with Gaussian Elimination (I)
Term Projects Solving linear equations with Gaussian Elimination The QR Algorithm for Symmetric Eigenvalue Problem The QR Algorithm for The SVD Quasi-Newton Methods Solving linear equations with Gaussian
More informationNumerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization
Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725 Consider Last time: proximal Newton method min x g(x) + h(x) where g, h convex, g twice differentiable, and h simple. Proximal
More informationGeometric Modeling Summer Semester 2010 Mathematical Tools (1)
Geometric Modeling Summer Semester 2010 Mathematical Tools (1) Recap: Linear Algebra Today... Topics: Mathematical Background Linear algebra Analysis & differential geometry Numerical techniques Geometric
More informationLecture 9: Numerical Linear Algebra Primer (February 11st)
10-725/36-725: Convex Optimization Spring 2015 Lecture 9: Numerical Linear Algebra Primer (February 11st) Lecturer: Ryan Tibshirani Scribes: Avinash Siravuru, Guofan Wu, Maosheng Liu Note: LaTeX template
More informationModifying A Linear Support Vector Machine for Microarray Data Classification
Modifying A Linear Support Vector Machine for Microarray Data Classification Prof. Rosemary A. Renaut Dr. Hongbin Guo & Wang Juh Chen Department of Mathematics and Statistics, Arizona State University
More informationThe Inversion Problem: solving parameters inversion and assimilation problems
The Inversion Problem: solving parameters inversion and assimilation problems UE Numerical Methods Workshop Romain Brossier romain.brossier@univ-grenoble-alpes.fr ISTerre, Univ. Grenoble Alpes Master 08/09/2016
More informationOne Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017
One Picture and a Thousand Words Using Matrix Approximtions October 2017 Oak Ridge National Lab Dianne P. O Leary c 2017 1 One Picture and a Thousand Words Using Matrix Approximations Dianne P. O Leary
More informationBackward Error Estimation
Backward Error Estimation S. Chandrasekaran E. Gomez Y. Karant K. E. Schubert Abstract Estimation of unknowns in the presence of noise and uncertainty is an active area of study, because no method handles
More informationZero-Order Methods for the Optimization of Noisy Functions. Jorge Nocedal
Zero-Order Methods for the Optimization of Noisy Functions Jorge Nocedal Northwestern University Simons Institute, October 2017 1 Collaborators Albert Berahas Northwestern University Richard Byrd University
More informationChapter 2. Optimization. Gradients, convexity, and ALS
Chapter 2 Optimization Gradients, convexity, and ALS Contents Background Gradient descent Stochastic gradient descent Newton s method Alternating least squares KKT conditions 2 Motivation We can solve
More informationITERATIVE REGULARIZATION WITH MINIMUM-RESIDUAL METHODS
BIT Numerical Mathematics 6-3835/3/431-1 $16. 23, Vol. 43, No. 1, pp. 1 18 c Kluwer Academic Publishers ITERATIVE REGULARIZATION WITH MINIMUM-RESIDUAL METHODS T. K. JENSEN and P. C. HANSEN Informatics
More informationMCMC Sampling for Bayesian Inference using L1-type Priors
MÜNSTER MCMC Sampling for Bayesian Inference using L1-type Priors (what I do whenever the ill-posedness of EEG/MEG is just not frustrating enough!) AG Imaging Seminar Felix Lucka 26.06.2012 , MÜNSTER Sampling
More informationIterative Methods for Ill-Posed Problems
Iterative Methods for Ill-Posed Problems Based on joint work with: Serena Morigi Fiorella Sgallari Andriy Shyshkov Salt Lake City, May, 2007 Outline: Inverse and ill-posed problems Tikhonov regularization
More informationDiscrete ill posed problems
Discrete ill posed problems Gérard MEURANT October, 2008 1 Introduction to ill posed problems 2 Tikhonov regularization 3 The L curve criterion 4 Generalized cross validation 5 Comparisons of methods Introduction
More informationKarhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques
Institut für Numerische Mathematik und Optimierung Karhunen-Loève Approximation of Random Fields Using Hierarchical Matrix Techniques Oliver Ernst Computational Methods with Applications Harrachov, CR,
More informationMulti-Linear Mappings, SVD, HOSVD, and the Numerical Solution of Ill-Conditioned Tensor Least Squares Problems
Multi-Linear Mappings, SVD, HOSVD, and the Numerical Solution of Ill-Conditioned Tensor Least Squares Problems Lars Eldén Department of Mathematics, Linköping University 1 April 2005 ERCIM April 2005 Multi-Linear
More informationGeometric Modeling Summer Semester 2012 Linear Algebra & Function Spaces
Geometric Modeling Summer Semester 2012 Linear Algebra & Function Spaces (Recap) Announcement Room change: On Thursday, April 26th, room 024 is occupied. The lecture will be moved to room 021, E1 4 (the
More informationIntroduction. Chapter One
Chapter One Introduction The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and
More informationImage restoration: numerical optimisation
Image restoration: numerical optimisation Short and partial presentation Jean-François Giovannelli Groupe Signal Image Laboratoire de l Intégration du Matériau au Système Univ. Bordeaux CNRS BINP / 6 Context
More informationStochastic Optimization Algorithms Beyond SG
Stochastic Optimization Algorithms Beyond SG Frank E. Curtis 1, Lehigh University involving joint work with Léon Bottou, Facebook AI Research Jorge Nocedal, Northwestern University Optimization Methods
More informationECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis
ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 7: Matrix completion Yuejie Chi The Ohio State University Page 1 Reference Guaranteed Minimum-Rank Solutions of Linear
More informationInverse Theory. COST WaVaCS Winterschool Venice, February Stefan Buehler Luleå University of Technology Kiruna
Inverse Theory COST WaVaCS Winterschool Venice, February 2011 Stefan Buehler Luleå University of Technology Kiruna Overview Inversion 1 The Inverse Problem 2 Simple Minded Approach (Matrix Inversion) 3
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationLINEARIZED BREGMAN ITERATIONS FOR FRAME-BASED IMAGE DEBLURRING
LINEARIZED BREGMAN ITERATIONS FOR FRAME-BASED IMAGE DEBLURRING JIAN-FENG CAI, STANLEY OSHER, AND ZUOWEI SHEN Abstract. Real images usually have sparse approximations under some tight frame systems derived
More informationLinear Regression and Its Applications
Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start
More informationA Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator
A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator Ismael Rodrigo Bleyer Prof. Dr. Ronny Ramlau Johannes Kepler Universität - Linz Cambridge - July 28, 211. Doctoral
More informationShiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 9. Alternating Direction Method of Multipliers
Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 9 Alternating Direction Method of Multipliers Shiqian Ma, MAT-258A: Numerical Optimization 2 Separable convex optimization a special case is min f(x)
More informationSolving discrete ill posed problems with Tikhonov regularization and generalized cross validation
Solving discrete ill posed problems with Tikhonov regularization and generalized cross validation Gérard MEURANT November 2010 1 Introduction to ill posed problems 2 Examples of ill-posed problems 3 Tikhonov
More information2 Regularized Image Reconstruction for Compressive Imaging and Beyond
EE 367 / CS 448I Computational Imaging and Display Notes: Compressive Imaging and Regularized Image Reconstruction (lecture ) Gordon Wetzstein gordon.wetzstein@stanford.edu This document serves as a supplement
More informationCS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares
CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search
More informationLanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact
Lanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact Zdeněk Strakoš Academy of Sciences and Charles University, Prague http://www.cs.cas.cz/ strakos Hong Kong, February
More informationChapter 3 Numerical Methods
Chapter 3 Numerical Methods Part 2 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization 1 Outline 3.2 Systems of Equations 3.3 Nonlinear and Constrained Optimization Summary 2 Outline 3.2
More informationMethods for sparse analysis of high-dimensional data, II
Methods for sparse analysis of high-dimensional data, II Rachel Ward May 26, 2011 High dimensional data with low-dimensional structure 300 by 300 pixel images = 90, 000 dimensions 2 / 55 High dimensional
More informationOn nonstationary preconditioned iterative regularization methods for image deblurring
On nonstationary preconditioned iterative regularization methods for image deblurring Alessandro Buccini joint work with Prof. Marco Donatelli University of Insubria Department of Science and High Technology
More informationMIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications. Class 19: Data Representation by Design
MIT 9.520/6.860, Fall 2017 Statistical Learning Theory and Applications Class 19: Data Representation by Design What is data representation? Let X be a data-space X M (M) F (M) X A data representation
More informationSlide05 Haykin Chapter 5: Radial-Basis Function Networks
Slide5 Haykin Chapter 5: Radial-Basis Function Networks CPSC 636-6 Instructor: Yoonsuck Choe Spring Learning in MLP Supervised learning in multilayer perceptrons: Recursive technique of stochastic approximation,
More informationTHE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR
THE SINGULAR VALUE DECOMPOSITION MARKUS GRASMAIR 1. Definition Existence Theorem 1. Assume that A R m n. Then there exist orthogonal matrices U R m m V R n n, values σ 1 σ 2... σ p 0 with p = min{m, n},
More informationEE 381V: Large Scale Optimization Fall Lecture 24 April 11
EE 381V: Large Scale Optimization Fall 2012 Lecture 24 April 11 Lecturer: Caramanis & Sanghavi Scribe: Tao Huang 24.1 Review In past classes, we studied the problem of sparsity. Sparsity problem is that
More informationNewton s Method. Ryan Tibshirani Convex Optimization /36-725
Newton s Method Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, Properties and examples: f (y) = max x
More informationBlind image restoration as a convex optimization problem
Int. J. Simul. Multidisci.Des. Optim. 4, 33-38 (2010) c ASMDO 2010 DOI: 10.1051/ijsmdo/ 2010005 Available online at: http://www.ijsmdo.org Blind image restoration as a convex optimization problem A. Bouhamidi
More informationBasic Math for
Basic Math for 16-720 August 23, 2002 1 Linear Algebra 1.1 Vectors and Matrices First, a reminder of a few basic notations, definitions, and terminology: Unless indicated otherwise, vectors are always
More informationAugmented Tikhonov Regularization
Augmented Tikhonov Regularization Bangti JIN Universität Bremen, Zentrum für Technomathematik Seminar, November 14, 2008 Outline 1 Background 2 Bayesian inference 3 Augmented Tikhonov regularization 4
More informationLecture 2: Linear Algebra Review
EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1
More informationSPARSE SIGNAL RESTORATION. 1. Introduction
SPARSE SIGNAL RESTORATION IVAN W. SELESNICK 1. Introduction These notes describe an approach for the restoration of degraded signals using sparsity. This approach, which has become quite popular, is useful
More informationFinal Examination. CS 205A: Mathematical Methods for Robotics, Vision, and Graphics (Fall 2013), Stanford University
Final Examination CS 205A: Mathematical Methods for Robotics, Vision, and Graphics (Fall 2013), Stanford University The exam runs for 3 hours. The exam contains eight problems. You must complete the first
More informationIntroduction to Compressed Sensing
Introduction to Compressed Sensing Alejandro Parada, Gonzalo Arce University of Delaware August 25, 2016 Motivation: Classical Sampling 1 Motivation: Classical Sampling Issues Some applications Radar Spectral
More informationA Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator
A Double Regularization Approach for Inverse Problems with Noisy Data and Inexact Operator Ismael Rodrigo Bleyer Prof. Dr. Ronny Ramlau Johannes Kepler Universität - Linz Florianópolis - September, 2011.
More informationestimator for determining the regularization parameter in
submitted to Geophys. J. Int. Application of the χ 2 principle and unbiased predictive risk estimator for determining the regularization parameter in 3D focusing gravity inversion Saeed Vatankhah, Vahid
More information