The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation


1 The Chi-squared Distribution of the Regularized Least Squares Functional for Regularization Parameter Estimation
Rosemary Renaut, Department of Mathematics and Statistics. Prague, 2008.

2 Outline
- Introduction and Motivation
- Some Standard (or Not) Methods for Regularization Parameter Estimation
- Statistical Results for Least Squares
- Implications of Statistical Results for Regularized Least Squares
- Newton Algorithm
- Algorithm with LSQR
- Results
- Conclusions and Future Work
- Other Detailed Results
- Results for Validating the LSQR Implementation with the GSVD

3 Least Squares for Ax = b
Consider discrete systems: $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $x \in \mathbb{R}^n$, with $Ax = b + e$.
Classical approach, linear least squares: $x_{LS} = \arg\min_x \|Ax - b\|_2^2$.
Difficulty: $x_{LS}$ is sensitive to changes in the right-hand side $b$ when $A$ is ill-conditioned. The system is numerically ill-posed.

4 Handling the ill-posedness
- Given multiple measurements of the data, include additional information about the solution and/or the data.
- Usually there is error in $b$: $e$ is an $m$-vector of random measurement errors with mean $0$ and positive definite covariance matrix $C_b = E(ee^T)$. Suppose $C_b$ is known (it can be calculated if multiple copies of $b$ are given).
- For uncorrelated measurements $C_b$ is the diagonal matrix of the variances of the errors (colored noise).
- The fit to the data can then be calculated in a weighted norm. Let $W_b = C_b^{-1}$ with Cholesky factorization $W_b = L_b L_b^T$, and weight the equation: $L_b^T A x = L_b^T b + \tilde e$, i.e. $x_{WLS} = \arg\min_x \|b - Ax\|_{W_b}^2$.
- Then, theoretically, the errors $\tilde e$ are uncorrelated (white noise). But the conditioning of the system usually deteriorates when the noise is far from white (see Hansen).
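A minimal sketch of this whitening step in Python, assuming dense matrices and a known, well-conditioned $C_b$ (the function name and usage lines are illustrative, not from the talk):

```python
import numpy as np

def whiten(A, b, C_b):
    # W_b = C_b^{-1} = L L^T (Cholesky, L lower triangular), so that
    # ||L^T (A x - b)||_2^2 = (A x - b)^T W_b (A x - b) = ||A x - b||_{W_b}^2.
    L = np.linalg.cholesky(np.linalg.inv(C_b))
    return L.T @ A, L.T @ b

# Ordinary least squares on the whitened system gives x_WLS:
# At, bt = whiten(A, b, C_b)
# x_wls, *_ = np.linalg.lstsq(At, bt, rcond=None)
```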

5 Alternative: Introduce a Mechanism for Regularization
Weighted fidelity with regularization:
  $x(\lambda) = \arg\min_x \{\|b - Ax\|_{W_b}^2 + \lambda^2 R(x)\}$,
where $R(x)$ is a regularization term and $\lambda$ is an unknown regularization parameter.
Notice that the solution $x(\lambda)$ depends on $\lambda$; it also depends on the choice of $R$.
Requirements: which $R$ to choose? Which $\lambda$ to choose?

6 A Specific Choice, $R(x) = \|D(x - x_0)\|^2$: Tikhonov
Generalized Tikhonov regularization, given a suitable matrix $D$:
  $\hat x(\lambda) = \arg\min J(x) = \arg\min \{\|Ax - b\|_{W_b}^2 + \lambda^2 \|D(x - x_0)\|^2\}$.  (1)
- Assume $\mathcal{N}(A) \cap \mathcal{N}(D) = \{0\}$.
- The weighting matrix $W_b$ is the inverse covariance matrix for the data $b$.
- $x_0$ is a reference solution, often $x_0 = 0$.
Question: given $D$, how do we find $\lambda$?
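For concreteness, a sketch of solving (1) for a fixed $\lambda$ via the standard augmented least squares reformulation (names and defaults are illustrative assumptions):

```python
import numpy as np

def tikhonov_solve(A, b, D, lam, Wb_half=None, x0=None):
    # Solve min ||W_b^{1/2}(A x - b)||_2^2 + lam^2 ||D (x - x0)||_2^2
    # by stacking the fidelity and penalty blocks into one least squares problem.
    m, n = A.shape
    x0 = np.zeros(n) if x0 is None else x0
    WA = A if Wb_half is None else Wb_half @ A
    wr = (b - A @ x0) if Wb_half is None else Wb_half @ (b - A @ x0)
    K = np.vstack([WA, lam * D])
    rhs = np.concatenate([wr, np.zeros(D.shape[0])])
    y, *_ = np.linalg.lstsq(K, rhs, rcond=None)   # y = x - x0
    return x0 + y
```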

7 Some standard approaches I: L-curve, find the corner
Let $r(\lambda) = (I - A(\lambda))b$ be the residual, with influence matrix $A(\lambda) = A(A^T W_b A + \lambda^2 D^T D)^{-1} A^T W_b$.
- Plot $\log(\|Dx(\lambda)\|)$ against $\log(\|r(\lambda)\|)$ and trade off the two contributions.
- Expensive: requires a range of $\lambda$. The GSVD makes the calculations efficient.
- Not statistically based.
- One must find a corner, and there may be no corner.

8 Some standard approaches II: Generalized Cross-Validation (GCV)
Minimize the GCV function
  $\frac{\|b - Ax(\lambda)\|_{W_b}^2}{[\mathrm{trace}(I_m - A(\lambda))]^2}$,
which estimates the predictive risk.
- Expensive: requires a range of $\lambda$. The GSVD makes the calculations efficient.
- Statistically based.
- Requires a minimum; there may be multiple minima, and the function is sometimes flat.

9 Some standard approaches III: Unbiased Predictive Risk Estimation (UPRE)
Minimize the expected value of the predictive risk, i.e. minimize the UPRE function
  $\|b - Ax(\lambda)\|_{W_b}^2 + 2\,\mathrm{trace}(A(\lambda)) - m$.
- Expensive: requires a range of $\lambda$. The GSVD makes the calculations efficient.
- Statistically based.
- A minimum is needed.
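The two criteria differ only in how they combine the residual with $\mathrm{trace}(A(\lambda))$. A small sketch evaluating both over a grid of $\lambda$, assuming dense matrices with $W_b = I$ and unit noise variance for brevity:

```python
import numpy as np

def gcv_upre_curves(A, b, D, lams):
    # For each lambda, form the influence matrix
    # A(lam) = A (A^T A + lam^2 D^T D)^{-1} A^T, then evaluate
    # GCV(lam)  = ||b - A x(lam)||^2 / [trace(I - A(lam))]^2 and
    # UPRE(lam) = ||b - A x(lam)||^2 + 2 trace(A(lam)) - m.
    m = A.shape[0]
    gcv, upre = [], []
    for lam in lams:
        H = A @ np.linalg.solve(A.T @ A + lam**2 * (D.T @ D), A.T)
        r = b - H @ b                    # residual b - A x(lam)
        tr = np.trace(H)
        gcv.append((r @ r) / (m - tr) ** 2)
        upre.append(r @ r + 2.0 * tr - m)
    return np.array(gcv), np.array(upre)

# lam_gcv = lams[np.argmin(gcv)]        # each rule picks its curve's minimizer
```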

10 Hybrid LSQR Methods to Project out the Noise
- Iterate LSQR to find the solution and stop when the noise starts to dominate (Hnetynkova and others).
- Solve the reduced system.
- Hybrid method: solve the reduced system with additional regularization.
- The cost of regularizing the reduced system is minimal.
- Any regularization method may be used (Nagy).
Talk to the local experts!

11 A More General Formulation: Maximum A Posteriori Method
Formulation:
  $\hat x = \arg\min J(x) = \arg\min \{\|Ax - b\|_{W_b}^2 + \|x - x_0\|_{W_D}^2\}$.  (3)
- Notice the regularization term includes a new weighting matrix but no mapping $D$ (it is hidden in $W_D$).
- Standard: $W_D = \lambda^2 D^T D$, with $\lambda$ an unknown penalty parameter.
- Ideally, statistically, $W_D$ is the inverse covariance matrix for the mapped model $Dx$, i.e. $\lambda = 1/\sigma_x$ with $\sigma_x^2$ the common variance in $Dx$. This assumes the resulting estimates for $Dx$ are uncorrelated.
- $\hat x$ is the standard maximum a posteriori (MAP) estimate of the solution when all a priori information is provided.
Can this provide additional information?

12 Background: Statistics of the Least Squares Problem
Theorem (Rao 73: First Fundamental Theorem). Let $r$ be the rank of $A$ and let $b \sim N(Ax, \sigma_b^2 I)$ (errors in the measurements are normally distributed with mean $0$ and covariance $\sigma_b^2 I$). Then
  $J = \min_x \|Ax - b\|^2 \sim \sigma_b^2\, \chi^2(m - r)$,
i.e. $J$ follows a $\chi^2$ distribution with $m - r$ degrees of freedom. This is basically the discrepancy principle.
Corollary (Weighted Least Squares). For $b \sim N(Ax, C_b)$ and $W_b = C_b^{-1}$,
  $J = \min_x \|Ax - b\|_{W_b}^2 \sim \chi^2(m - r)$.
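The theorem is easy to check by simulation; a small sketch with a synthetic full-column-rank $A$, so that $r = n$ (dimensions and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, sigma_b = 100, 10, 0.1
A = rng.standard_normal((m, n))          # full column rank, so r = n
x_true = rng.standard_normal(n)

J = []
for _ in range(2000):
    b = A @ x_true + sigma_b * rng.standard_normal(m)
    _, res, *_ = np.linalg.lstsq(A, b, rcond=None)
    J.append(res[0] / sigma_b**2)        # J / sigma_b^2 ~ chi^2(m - n)

print(np.mean(J), m - n)                 # sample mean should be close to m - n
```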

13 Extension: Statistics of the Regularized Least Squares Problem
Two new results to help find the regularization parameter.
Theorem ($\chi^2$ distribution of the regularized functional). Let
  $\hat x = \arg\min J_D(x) = \arg\min \{\|Ax - b\|_{W_b}^2 + \|x - x_0\|_{W_D}^2\}$, $W_D = D^T W_x D$, $D \in \mathbb{R}^{p \times n}$.  (4)
Assume:
- $W_b$ and $W_x$ are symmetric positive definite.
- The problem is uniquely solvable: $\mathcal{N}(A) \cap \mathcal{N}(D) = \{0\}$.
- The Moore-Penrose generalized inverse of $W_D$ is $C_D$.
- Statistics: $(b - Ax) = e \sim N(0, C_b)$ and $(x - x_0) = f \sim N(0, C_D)$, i.e. $x_0$ is the mean vector of the model parameters.
Then $J_D \sim \chi^2(m + p - n)$.
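A simulation check of the theorem in the simplest setting $D = I$ (so $p = n$ and $J_D \sim \chi^2(m)$); the particular covariances below are illustrative assumptions, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 10
A = rng.standard_normal((m, n))
Cb, Cx = 0.1**2 * np.eye(m), 0.5**2 * np.eye(n)
Wb, Wx = np.linalg.inv(Cb), np.linalg.inv(Cx)
x0 = np.ones(n)                          # x0 is the mean of the model parameters

JD = []
for _ in range(2000):
    x = x0 + rng.multivariate_normal(np.zeros(n), Cx)      # f ~ N(0, C_D)
    b = A @ x + rng.multivariate_normal(np.zeros(m), Cb)   # e ~ N(0, C_b)
    xhat = np.linalg.solve(A.T @ Wb @ A + Wx, A.T @ Wb @ b + Wx @ x0)
    r, f = A @ xhat - b, xhat - x0
    JD.append(r @ Wb @ r + f @ Wx @ f)   # the functional at the minimizer

print(np.mean(JD), m)                    # theorem: E(J_D) = m + p - n = m for D = I
```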

14 Corollary: a priori information is not the mean value, e.g. $x_0 = 0$
Corollary (non-central $\chi^2$ distribution of the regularized functional). Let
  $\hat x = \arg\min J_D(x) = \arg\min \{\|Ax - b\|_{W_b}^2 + \|x - x_0\|_{W_D}^2\}$, $W_D = D^T W_x D$.  (5)
Assume all assumptions as before, but now $x_1 \ne x_0$ is the mean vector of the model parameters. Let the non-centrality parameter be
  $c = \|\tilde c\|_2^2 = \|Q U^T W_b^{1/2} A (x_1 - x_0)\|_2^2$.
Then $J_D \sim \chi^2(m + p - n, c)$.

15 Implications of the Result
Statistical distribution of the functional: the mean and variance are prescribed,
  $E(J_D) = m + p - n + c$,
  $\mathrm{Var}(J_D) = 2(m + p - n) + 4c$.
Can we use this? Yes:
- Try to find $W_D$ so that $E(J_D) = m + p - n + c$ holds (Mead 2007).
- Mead's algorithm finds the full $W_D$ but is expensive.
- Our proposal: find $\lambda$ only.

16 What Do We Need to Apply the Theory?
Requirements:
- Covariance information $C_b$ on the data parameters $b$ (or on the model parameters $x$!).
- A priori information: either $x_0$ is the mean, or the mean value $x_1$. But $x_1$ and $x_0$ are not known.
For repeated data measurements $C_b$ can be calculated, and $b_1$, the mean of $b$, can be found. But $E(b) = AE(x)$ implies $b_1 = Ax_1$. Hence
  $c = \|\tilde c\|_2^2 = \|Q U^T W_b^{1/2} (b_1 - Ax_0)\|_2^2$,
  $E(J_D) = E(\|Q U^T W_b^{1/2} (b - Ax_0)\|_2^2) = m + p - n + \|Q U^T W_b^{1/2} (b_1 - Ax_0)\|_2^2$.
Then we can use $E(J_D)$ to find $\lambda$.

17 Designing the Algorithm: I
Assume $x_0$ is the mean (experimentalists do know something about the model parameters).
Recall: if $C_b$ and $C_x$ are good estimates of the covariances, then $J_D(\hat x) - (m + p - n)$ should be small. Thus, let $\tilde m = m + p - n$; then we want
  $\tilde m - \sqrt{2\tilde m}\, z_{\alpha/2} < J_D(\hat x(W_D)) < \tilde m + \sqrt{2\tilde m}\, z_{\alpha/2}$,  (6)
where $z_{\alpha/2}$ is the relevant $z$-value for a $\chi^2$ distribution with $\tilde m$ degrees of freedom.
Goal: find $W_D$ to make (6) tight. Single-variable case: find $\lambda$ such that $J_D(\hat x(\lambda)) \approx \tilde m$.
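The interval bounds in (6) are cheap to compute; a sketch using scipy's normal quantile (function name and default $\alpha$ are illustrative):

```python
import numpy as np
from scipy.stats import norm

def chi2_interval(m, n, p, alpha=0.05):
    # Normal approximation to chi^2(m_tilde): mean m_tilde, variance 2 m_tilde,
    # giving the two-sided bounds m_tilde +/- sqrt(2 m_tilde) z_{alpha/2}.
    m_tilde = m + p - n
    half = np.sqrt(2.0 * m_tilde) * norm.ppf(1.0 - alpha / 2.0)
    return m_tilde - half, m_tilde + half
```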

18 A Newton Line-Search Algorithm to Find $\lambda$ (basic algebra)
Newton's method to solve $F(\sigma) = J_D(\sigma) - \tilde m = 0$.
We use $\sigma = 1/\lambda$, and $y(\sigma^{(k)})$ is the current solution, for which $x(\sigma^{(k)}) = y(\sigma^{(k)}) + x_0$. Then
  $\frac{\partial}{\partial \sigma} J_D(\sigma) = -\frac{2}{\sigma^3} \|Dy(\sigma)\|^2 < 0$.
Hence we have a basic Newton iteration
  $\sigma^{(k+1)} = \sigma^{(k)} \left(1 + \frac{1}{2} \left(\frac{\sigma^{(k)}}{\|Dy\|}\right)^2 (J_D(\sigma^{(k)}) - \tilde m)\right)$,
and, adding a line search,
  $\sigma^{(k+1)} = \sigma^{(k)} \left(1 + \frac{\alpha^{(k)}}{2} \left(\frac{\sigma^{(k)}}{\|Dy\|}\right)^2 (J_D(\sigma^{(k)}) - \tilde m)\right)$.
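A sketch of this iteration, written against a user-supplied solver: the callable returns $J_D(\sigma)$ and $\|Dy(\sigma)\|$ for a given $\sigma$. The positivity safeguard is my added assumption; the talk instead keeps iterates within a bracket (next slides):

```python
import numpy as np

def newton_sigma(JD_and_Dy, m_tilde, sigma0=1.0, tol=1e-3, maxit=50):
    # Solve F(sigma) = J_D(sigma) - m_tilde = 0 with the multiplicative
    # Newton update sigma <- sigma * (1 + (alpha/2) (sigma/||Dy||)^2 F).
    sigma = sigma0
    for _ in range(maxit):
        JD, nDy = JD_and_Dy(sigma)
        F = JD - m_tilde
        if abs(F) < tol:
            break
        step = 0.5 * (sigma / nDy) ** 2 * F
        alpha = 1.0
        while sigma * (1.0 + alpha * step) <= 0.0:   # keep sigma positive
            alpha *= 0.5
        sigma *= 1.0 + alpha * step
    return sigma
```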

19 Algorithm Using the GSVD
Use the GSVD of $[W_b^{1/2} A;\ D]$. For $\gamma_i$ the generalized singular values, $s = U^T W_b^{1/2} r$ with $U$ from the GSVD, and $\tilde m = m - n + p$, find the root of
  $F(\sigma_x) = \sum_{i=1}^{p} \frac{s_i^2}{\gamma_i^2 \sigma_x^2 + 1} + \sum_{i=n+1}^{m} s_i^2 - \tilde m = 0$.
With $\tilde s_i = s_i/(\gamma_i^2 \sigma_x^2 + 1)$ and $t_i = \gamma_i \tilde s_i$, $i = 1, \dots, p$, this is equivalent to solving $F = 0$ with
  $F(\sigma_x) = s^T \tilde s + \sum_{i=n+1}^{m} s_i^2 - \tilde m$ and $F'(\sigma_x) = -2\sigma_x \|t\|_2^2$.
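In the special case $D = I$, $W_b = I$, $x_0 = 0$, the GSVD reduces to the ordinary SVD of $A$, and $F$, $F'$ can be written directly from the singular values. A sketch of that simplification (mine, not the talk's general GSVD implementation):

```python
import numpy as np

def make_F(A, b, m_tilde):
    # With A = U diag(gamma) V^T and s = U^T b,
    # F(sigma)  = sum_{i<=n} s_i^2/(gamma_i^2 sigma^2 + 1) + sum_{i>n} s_i^2 - m_tilde,
    # F'(sigma) = -2 sigma ||t||^2,  t_i = gamma_i s_i / (gamma_i^2 sigma^2 + 1).
    m, n = A.shape
    U, gamma, _ = np.linalg.svd(A)       # full U (m x m), gamma of length n
    s = U.T @ b
    tail = np.sum(s[n:] ** 2)            # components outside the range of A

    def F(sigma):
        return np.sum(s[:n] ** 2 / (gamma**2 * sigma**2 + 1.0)) + tail - m_tilde

    def Fprime(sigma):
        t = gamma * s[:n] / (gamma**2 * sigma**2 + 1.0)
        return -2.0 * sigma * np.sum(t**2)

    return F, Fprime
```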

20 Discussion of Convergence
- $F$ is monotonically decreasing ($F'(\sigma_x) = -2\sigma_x \|t\|_2^2$).
- Hence a solution either exists and is unique for positive $\sigma$, or no solution exists.
- $F(0) < 0$ implies incorrect statistics of the model.
- Theoretically, $\lim_{\sigma \to \infty} F > 0$ is possible; this is equivalent to $\lambda = 0$, i.e. no regularization is needed.

21 Practical Details of the Algorithm
Find the parameter:
- Step 1: bracket the root by a logarithmic search on $\sigma$ to handle the asymptotes; this yields sigmamax and sigmamin.
- Step 2: calculate the step, with steepness controlled by a tolerance told. Let $t = \|Dy\|/\sigma^{(k)}$, where $y$ is the current update, given from the GSVD; then
  step $= \frac{1}{2} \left(\frac{1}{\max\{t, \text{told}\}}\right)^2 (J_D(\sigma^{(k)}) - \tilde m)$.
- Step 3: introduce the line-search parameter $\alpha^{(k)}$ in Newton's update, sigmanew $= \sigma^{(k)}(1 + \alpha^{(k)}\,\text{step})$, with $\alpha^{(k)}$ chosen such that sigmanew stays within the bracket.
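A sketch of the Step 1 bracketing; the grid bounds and size are illustrative choices:

```python
import numpy as np

def bracket_sigma(F, lo=1e-10, hi=1e10, npts=81):
    # Logarithmic search for [sigmamin, sigmamax] containing a sign change of
    # the monotone decreasing F, i.e. F(sigmamin) > 0 >= F(sigmamax).
    grid = np.logspace(np.log10(lo), np.log10(hi), npts)
    vals = np.array([F(s) for s in grid])
    idx = np.flatnonzero((vals[:-1] > 0) & (vals[1:] <= 0))
    if idx.size == 0:
        raise ValueError("no sign change: check the noise statistics")
    return grid[idx[0]], grid[idx[0] + 1]
```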

22 Practical Details of the Algorithm: Large-Scale Problems
Algorithm initialization:
- Convert the generalized Tikhonov problem to standard form.
- Use the LSQR algorithm to find the bidiagonal matrix spanning the appropriate solution subspace.
- Obtain a solution of the bidiagonal problem for the given $\sigma$.
- Reuse the bidiagonalization in the update of $\sigma$ for Newton: each $\sigma$ calculation reuses saved information from the Lanczos bidiagonalization. The system is augmented if needed.
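For orientation, one inner solve of the large-scale variant can be imitated with scipy's damped LSQR (standard form, $x_0 = 0$). Note the difference from the talk's algorithm: scipy rebuilds the Golub-Kahan bidiagonalization for every damp value, whereas the algorithm above saves and reuses it across $\sigma$ updates:

```python
from scipy.sparse.linalg import lsqr

def solve_for_sigma(A, b, sigma):
    # Damped LSQR computes argmin ||A x - b||_2^2 + damp^2 ||x||_2^2,
    # so damp = lambda = 1/sigma recovers the standard-form Tikhonov solve.
    result = lsqr(A, b, damp=1.0 / sigma)
    return result[0]                     # the solution vector x
```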

23 Comparison with the Standard LSQR Hybrid Algorithm
- The algorithm concurrently regularizes and solves the system; in contrast, standard hybrid LSQR solves the projected system with regularization.
- It needs only the cost of the standard LSQR algorithm, plus some inexpensive solution updates for the iterated $\sigma$.
- The regularization introduced by the LSQR projection may help prevent problems with the GSVD expansion.
- This makes the algorithm viable for large-scale problems.

24 Recall: Implementation Assumptions
Covariance of the error, i.e. statistics of the measurement errors: information on the covariance structure of the errors in $b$ is needed.
- Use $C_b = \sigma_b^2 I$ for common covariance (white noise).
- Use $C_b = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \dots, \sigma_m^2)$ for colored uncorrelated noise.
- With no noise information, use $C_b = I$.
- Use $b_1$, the mean of the measured $b$, when implementing with the centrality parameter and $x_0 = 0$.
Tolerance on convergence: the convergence tolerance depends on the noise structure. Use $\text{TOL} = \sqrt{2\tilde m}\, z_{\alpha/2}$.
- No noise structure: use $\alpha = .001$, which generates a large TOL.
- Good noise information: use $\alpha = .95$, which generates a small TOL.


26 An Example of the Method: Seismic Signal Restoration
The data set and goal:
- Real data set of 48 signals. The point spread function is derived from the signals.
- Calculate the signal variance pointwise over all 48 signals.
- Goal: restore the signal $x$ from $Ax = b$, where $A$ is the PSF matrix and $b$ is a given blurred signal.
Method of comparison (no exact solution known):
- Downsample the signal and restore at different resolutions: 2:1, 5:1, 10:1, 20:1, and 100:1.
- Do the results converge? Compare with UPRE and the L-curve.


28 Comparison: High Resolution, White Noise (left) and Colored Noise (right)
- Greater contrast with the $\chi^2$ method.
- UPRE is insufficiently regularized.
- The L-curve severely undersmooths (not shown).
- Parameters are not consistent across resolutions.

29 The UPRE Solution: White Noise and Colored Noise, $x_0 = 0$
The regularization parameters are consistent: the same $\sigma$ is obtained at all resolutions.

30 The GSVD Solution: White Noise (left) and Colored Noise (right), $x_0 = 0$
The regularization parameters are consistent: one value of $\sigma$ for white noise (left) and one for colored noise (right), the same at all resolutions.

31 The Noncentral GSVD Solution: White Noise (left) and Colored Noise (right), $x_0 = 0$
- The regularization is quite consistent from resolution 2:1 to 100:1; for colored noise, $\sigma \approx .00007$ at each resolution.
- Notice that colored noise eliminates the second arrival of the signal but gives excellent contrast for identifying the primary arrival.

32 More Signals!
UPRE and the L-curve exhibit under-regularization.

33 Conclusions for the Case when $x_0$ is Known
Observations:
- A new statistical method for estimating the regularization parameter.
- It compares favorably with UPRE with respect to performance, and also with the L-curve (GCV is not competitive).
- The method can be used for large-scale problems.
- The method is very efficient; the Newton iteration is robust and fast.
- But $x_0$, the mean of $x$, is needed.

34 Difficulties when the Centrality Parameter is Required
What are the issues?
- The function need not be monotonic.
- This is more problematic for the noncentral version with $x_0$ not the mean (i.e. $x_0 = 0$).
- $\sigma$ can be bounded by the result of the central case; the range of $\sigma$ is given by the range of the $\gamma_i$.
- The method may oversmooth the solution if a good range of $\sigma$ is not found.

35 Future Work
Other results and future work:
- Degrees of freedom are reduced when using the GSVD.
- How to apply the Picard condition for the GSVD to handle robustness problems due to the conditioning of $C_b$.
- Image deblurring (implementation to use minimal storage).
- Diagonal weighting schemes.
- Edge-preserving regularization.
- Constraint implementation (with Mead, submitted).

36 Some Small-Scale Experiments: Verify Robustness of LSQR/GSVD
Details:
- Take examples from Hansen's toolbox, e.g. shaw, phillips, heat, ilaplace.
- Generate 500 copies for each noise level, here .005, .01, .05, .1.
- Solve the 500 cases using GSVD and LSQR Newton.
- Pairwise t-test on the obtained $\sigma$ to verify equivalence of GSVD and LSQR (see the sketch below).
- Compare results with a statistical technique: UPRE.
- Errors: relative least squares and max error, calculated over all errors less than .5.
- Regularization parameter (not given).
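The pairwise t-test step is one scipy call; a sketch, with hypothetical arrays holding the 500 $\sigma$ estimates per method (the file names are placeholders for assumed experiment output):

```python
import numpy as np
from scipy.stats import ttest_rel

# sigma_gsvd, sigma_lsqr: hypothetical length-500 arrays of estimates,
# one entry per noisy copy, from the GSVD and LSQR Newton solvers.
sigma_gsvd = np.load("sigma_gsvd.npy")
sigma_lsqr = np.load("sigma_lsqr.npy")

t_stat, p_value = ttest_rel(sigma_gsvd, sigma_lsqr)  # paired t-test on sigma
print(f"p = {p_value:.3f}")              # p at or near 1: the methods agree
```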

37 Example of the Colored Noise Distribution
Figure: the pointwise variance for each noise level.

38 Noise and Exact Right-Hand-Side Vector
Figure: actual noise levels.


42 Problem Size 64, Regularization by First-Order Derivative, Colored Noise
Table (numeric entries garbled in extraction): convergence characteristics and P-values comparing the GSVD and LSQR iteration steps and $\sigma$ values for shaw, ilaplace, heat, and phillips. Columns: noise level $\epsilon$, average bidiagonal size, Newton steps (GSVD and LSQR iterations), and P-value for $\sigma$.

43 Problem Size 64, Regularization by First-Order Derivative, Colored Noise

                Least Squares Error                Max Error
 ε              GSVD       LSQR       UPRE         GSVD       LSQR       UPRE
shaw
 .05            .34(207)   .34(207)   .34(146)     .31(111)   .31(111)   .31(84)
 .1             .39(105)   .39(105)   .35(29)      .35(43)    .35(43)    .35(28)
ilaplace
 .05            .23(361)   .23(361)   .16(398)     .12(495)   .12(495)   .07(425)
 .1             .25(317)   .25(317)   .21(370)     .15(490)   .15(490)   .10(428)
heat
 .05            .27(500)   .27(500)   .28(479)     .19(500)   .19(500)   .20(489)
 .1             .35(497)   .35(497)   .36(461)     .27(500)   .27(500)   .27(489)
phillips
 .05            .10(500)   .10(500)   .10(455)     .10(500)   .10(500)   .09(455)
 .1             .12(500)   .12(500)   .11(431)     .12(500)   .12(500)   .11(431)

Table: comparison with UPRE, relative least squares and max error; in parentheses the number of accepted values (large values, > .5, are excluded from the average).

44 Summary of Results
Major observations, GSVD-LSQR vs UPRE:
- High correlation between the $\sigma$ obtained by both $\chi^2$ implementations (GSVD and LSQR); p-values at or near 1.
- The algorithms converge with very few regularization calculations, on average 7.
- The bidiagonalized system is on average much smaller than the full system size.
- The error distributions are equivalent.
- The total cost of LSQR is the cost of the bidiagonalization plus the use of the bidiagonalization to obtain a solution (bidiagonal size plus Newton-step solves in the convergence table).
- The errors of UPRE are similar to the $\chi^2$ approaches, but UPRE fails more often (more discarded solutions with high noise).

45 THANK YOU!
