NON-NEGATIVELY CONSTRAINED LEAST SQUARES AND PARAMETER CHOICE BY THE RESIDUAL PERIODOGRAM FOR THE INVERSION OF ELECTROCHEMICAL IMPEDANCE SPECTROSCOPY DATA


1 NON-NEGATIVELY CONSTRAINED LEAST SQUARES AND PARAMETER CHOICE BY THE RESIDUAL PERIODOGRAM FOR THE INVERSION OF ELECTROCHEMICAL IMPEDANCE SPECTROSCOPY DATA
Rosemary Renaut, Mathematics, ASU
Jakob Hansen, Jarom Hogue and Grant Sander, Math Undergraduates, ASU
Sudeep Popat, Biodesign Institute, ASU
OCTOBER 4, 2013

2 Outline
Application: Electrochemical Spectroscopy for Biofuels
Mathematical Model
Nonlinear Fitting
Linear Fitting
Numerical Quadrature: Integration by t, Integration by s
Right Preconditioner: Stability
Example Models for Simulation
Regularization Parameter Choice
Nonnegative Least Squares
Parameter Choice: NNLS, Stability
Augmented Residual
Conclusions

3 Physical Experiment: Microbial Electrolysis Cell for anaerobic bacteria (a) Fuel cell (b) Geobacter (c) Microbial electrolysis cell Figure: Geobacter, fuel cells, and waste water: Single-chamber microbial electrolysis cell consisting of a carbon fiber anode with a stainless steel current collector and a stainless steel cathode is used to convert food wastes into hydrogen in a single step.

4 Motivation: ARB in fuel cell should be electrically self-sufficient
(a) Cyclic Voltammograms (CVs) (b) Potential loss inside ARB cell
Figure: (a) CVs of an ARB biofilm during its growth phase; the Nernst-Monod relationship is compared against the CVs (dotted lines). (b) CV explaining the potential profile for an ARB biofilm. The Nernst-Monod relationship (red dotted line) shows that most potential losses occur inside the ARB's cell.

5 Data Collection: Electrochemical Impedance Spectroscopy
Figure: Schematic of the microbial electrochemical cells used for Electrochemical Impedance Spectroscopy (EIS)

6 Nyquist Plot of EIS impedance measurements
Figure: Nyquist plot and fitting by equivalent circuit model (ECM) for an ARB biofilm tested at -0.35 V vs Ag/AgCl.

7 Goals and challenges
Given EIS measurements, can we find:
1. The underlying total resistance R_pol
2. The equivalent circuit model, so as to understand the inefficiencies
3. The number of processes and the sizes of their resistances R_k such that $\sum_k R_k = R_{pol}$
4. The chemical environment that minimizes ARB potential loss
The aim is a self-sustaining electrochemical environment.
Challenges:
Limited sampling, perhaps 70 measurements
No information on the noise in the data
Measurements over a broad frequency range

8 Mathematical Model: Fredholm Integral Equation
A small input current and the measured output voltage lead to impedance measurements Z(ω) for angular frequency ω:
$$Z(\omega) = R + R_{pol}\int_0^{\infty}\frac{g(t)}{1 + i\omega t}\,dt. \qquad (1)$$
The kernel $h(\omega, t) = \frac{1}{1 + i\omega t}$ is not square integrable. R and R_pol are unknown parameters. The distribution function of relaxation times (DRT) g(t), with $\int_0^{\infty} g(t)\,dt = 1$ and $g(t) > 0$, is unknown.
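
As a concrete illustration of the forward map in (1), the following Python sketch (not the authors' code; the function name, defaults and grid limits are my own assumptions) generates synthetic impedance data for a given DRT g(t) by direct numerical integration over a wide, finite t grid:

```python
import numpy as np

def impedance(omega, g, R=0.0, R_pol=1.0, t_grid=None):
    """Forward model (1): Z(w) = R + R_pol * integral_0^inf g(t) / (1 + i*w*t) dt."""
    if t_grid is None:
        t_grid = np.logspace(-8, 6, 4001)   # wide grid standing in for (0, infinity)
    gt = g(t_grid)
    return np.array([R + R_pol * np.trapz(gt / (1.0 + 1j * w * t_grid), t_grid)
                     for w in omega])
```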

9 The DRT g(t): Simple Resistance-Capacitance Circuit
Cole-Cole or ZARC single element (the name is literature dependent):
$$g_{RQ}(t) = \frac{1}{2\pi t}\,\frac{\sin(\beta\pi)}{\cosh\left(\beta\ln(t/t_0)\right) + \cos(\beta\pi)}, \qquad (2)$$
with $0 < \beta < 1$ and distribution of time constants about $t_0$. The impedance is given analytically by
$$Z_{RQ}(\omega) = \frac{1}{1 + (i\omega t_0)^{\beta}}, \qquad (3)$$
corresponding to a constant phase element Q and resistance R.

10 The DRT g(t): Simple network of resistance-capacitance circuits
The DRT may also be realized by a lognormal distribution:
$$g_{LN}(t) = \frac{1}{t\sigma\sqrt{2\pi}}\exp\left(-\frac{(\ln(t) - \mu)^2}{2\sigma^2}\right). \qquad (4)$$
No analytic form for the impedance is available.
Figure: Three lognormals and a weighted linear combination
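
The DRTs (2) and (4) and the analytic RQ impedance (3) are straightforward to evaluate; below is a hedged Python sketch (parameter names t0, beta, mu, sigma mirror the slides; everything else is illustrative, not the talk's MATLAB code):

```python
import numpy as np

def g_rq(t, t0, beta):
    """ZARC / Cole-Cole DRT, equation (2)."""
    return (np.sin(beta * np.pi) /
            (np.cosh(beta * np.log(t / t0)) + np.cos(beta * np.pi))) / (2 * np.pi * t)

def z_rq(omega, t0, beta):
    """Analytic RQ impedance, equation (3), normalized to unit polarization resistance."""
    return 1.0 / (1.0 + (1j * omega * t0) ** beta)

def g_ln(t, mu, sigma):
    """Lognormal DRT, equation (4)."""
    return np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (t * sigma * np.sqrt(2 * np.pi))

# sanity check: both DRTs integrate to approximately 1 over a wide range of t
t = np.logspace(-8, 6, 20001)
print(np.trapz(g_rq(t, t0=0.1, beta=0.7), t),
      np.trapz(g_ln(t, mu=np.log(0.1), sigma=1.0), t))
```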

11 Comparison of LN and RQ for one process, chosen to align
Panels: (a) DRTs in t space, (b) DRTs in s space, (c) Nyquist plot, (d) components Z_1, (e) components Z_2.
Figure: Simulated exact data measured at 65 logarithmically spaced points in ω. In each case the solid line indicates the RQ functions and the symbols the LN functions. s = log(t).

12 Nonlinear Fitting
If analytic Z(ω) is known, fitting to Z is possible. If there is no analytic form for Z, nonlinear fitting integrates the DRT. What error is incurred by a wrong choice of model? The residuals are small.
Panels: (a) fitting to the RQ, (b) fitting to the LN; each shows the mean fit by RQ and by LN with upper and lower 95% bounds.
Figure: Residual norms fitting one process to the impedance spectrum of a DRT consisting of one process with white noise.

13 The impact
Panels: (a) RQ by LN, (b) LN by RQ, (c) RQ by LN in s space, (d) LN by RQ in s space.
Figure: Fitting functions with the consistent mean values obtained over 5 noisy selections. True RQ: β = 0.72, t_0 = 0.1; true LN: σ = 3, t_0 = 0.1. Fitting RQ to LN gives β = 0.86; fitting LN to RQ gives t_0 = 0.3. The location and height of the peak are incorrect.
Conclude: knowledge of the model is highly relevant for NLS.

14 Linear Formulation: Integration in t
$$Z(\omega) = Z_1(\omega) + iZ_2(\omega) = \left(R + R_{pol}\int_0^{\infty}\frac{g(t)}{1 + \omega^2 t^2}\,dt\right) - iR_{pol}\int_0^{\infty}\frac{\omega t\,g(t)}{1 + \omega^2 t^2}\,dt = \left(R + R_{pol}\int_0^{\infty} h_1(\omega, t)g(t)\,dt\right) - iR_{pol}\int_0^{\infty} h_2(\omega, t)g(t)\,dt.$$
Given N quadrature points and $[T_{min}, T_{max}]$,
$$\int_{T_{min}}^{T_{max}} g(t)h_k(\omega, t)\,dt \approx \sum_{n=1}^{N} a_n g(t_n)h_k(\omega, t_n). \qquad (5)$$
Matrices $(A_k)_{mn} = a_n h_k(\omega_m, t_n)$, $1 \le m \le M$, $1 \le n \le N$. Measurements $(b_1)_m \approx Z_1(\omega_m) - R$ and $(b_2)_m \approx -Z_2(\omega_m)$. Then $(x)_n \approx g(t_n)$ satisfies the discrete linear systems
$$A_k x \approx b_k, \quad k = 1, \ldots, 3, \quad A_3 = [A_1; A_2]. \qquad (6)$$
Defining a finer resolution also yields the square system $A_4 = [A_1^e; A_2^e]$.
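
A minimal sketch of how the collocation matrices in (5)-(6) might be assembled (Python, trapezoidal weights on the chosen nodes t_n; the helper name and grid choices are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def build_matrices(omega, t_nodes):
    """(A_k)_{mn} = a_n h_k(omega_m, t_n) with trapezoidal quadrature weights a_n."""
    a = np.empty_like(t_nodes)
    a[1:-1] = 0.5 * (t_nodes[2:] - t_nodes[:-2])
    a[0] = 0.5 * (t_nodes[1] - t_nodes[0])
    a[-1] = 0.5 * (t_nodes[-1] - t_nodes[-2])
    W, T = np.meshgrid(omega, t_nodes, indexing="ij")   # M x N grids of omega_m, t_n
    h1 = 1.0 / (1.0 + (W * T) ** 2)                     # kernel for the real part
    h2 = (W * T) / (1.0 + (W * T) ** 2)                 # kernel for the imaginary part
    A1, A2 = h1 * a, h2 * a                              # weights broadcast over columns
    return A1, A2, np.vstack([A1, A2])                   # A_1, A_2 and stacked A_3, eq. (6)

omega = 2 * np.pi * np.logspace(-1, 5, 65)   # illustrative frequency grid
t_nodes = np.logspace(-6, 1, 65)
A1, A2, A3 = build_matrices(omega, t_nodes)
print([f"{np.linalg.cond(A):.1e}" for A in (A1, A2, A3)])  # all severely ill-conditioned
```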

15 Systems are ill-conditioned whether Simpson's or trapezium quadrature is used
Figure: The condition numbers cond(A_k), k = 1, ..., 4, plotted against the pairs (l, j) indicating [T_min(l), T_max(j)], with T_min = [5e-2, 1e-2, 3e-3] and T_max = [8e3, 4e4, 2e5], for both Simpson's and trapezium quadrature rules and for two values of N. Values for ω = 2πφ are based on logarithmic spacing on the interval (φ_min, φ_max), consistent with the measured data.

16 Model Error for the LN g(t): integrating in t
Truncation error occurs due to the semi-infinite integral; quadrature error occurs due to the limited sampling. Exact measurements $\hat{b}_k(\omega, t_0, \sigma)$ are contaminated:
$$\hat{b}_k(\omega, t_0, \sigma) = \int_0^{\infty} g(t, t_0, \sigma)h_k(t, \omega)\,dt + E_k^{trunc}(\omega, T_{min}, T_{max}, t_0, \sigma) + E_k^{quad}(\omega, T_{min}, T_{max}, t_0, \sigma, N).$$
Figure: The error is larger for h_1 than for h_2 and depends on the location of the peaks of g(t); panels compare the actual and estimated errors as functions of ω for two values of N.

17 Transformation s = ln(t)
Let s = ln(t):
$$Z(\omega) = \int_0^{\infty} h(\omega, t)g(t)\,dt = \int_{-\infty}^{\infty} h(\omega, e^s)f(s)\,ds, \qquad f(s) := t\,g(t). \qquad (7)$$
For the DRTs (2)-(4) we obtain the functions
$$f_{RQ}(s) = \frac{1}{2\pi}\,\frac{\sin(\beta\pi)}{\cosh(\beta(s - \ln(t_0))) + \cos(\beta\pi)}, \qquad (8)$$
$$f_{LN}(s) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(s - \mu)^2}{2\sigma^2}\right), \qquad \ln(t_0) = \mu - \sigma^2. \qquad (9)$$
Quadrature uses a constant step in s:
$$\int_{-\infty}^{\infty} h(\omega, e^s)f(s)\,ds \approx \int_{s_{min}}^{s_{max}} h(\omega, e^s)f(s)\,ds \approx \Delta s\sum_{n=1}^{N} h(\omega, e^{s_n})f(s_n). \qquad (10)$$
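
In the transformed variable the nodes are uniform in s and the matrices act on samples of f(s) = t g(t). A short illustrative Python snippet (the grid limits are assumed values, not the authors') showing how (10) changes the assembly:

```python
import numpy as np

# Quadrature in s = ln(t): uniform step ds, nodes t_n = exp(s_n), unknowns f(s_n) = t_n g(t_n)
omega = 2 * np.pi * np.logspace(-1, 5, 65)
s = np.linspace(np.log(1e-6), np.log(1e1), 65)
ds = s[1] - s[0]
W, T = np.meshgrid(omega, np.exp(s), indexing="ij")
A1s = ds / (1.0 + (W * T) ** 2)              # entries ds * h_1(omega_m, exp(s_n))
A2s = ds * (W * T) / (1.0 + (W * T) ** 2)    # entries ds * h_2(omega_m, exp(s_n))
```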

18 Reduced quadrature error for integration by s compared to integration by t
Figure: Quadrature error for a single RQ (above) and LN (below) process with N = 65, for quadrature in s and in t, as a function of ω, plotted on a log-log scale. Each panel compares h_1(s), h_2(s), h_1(t), h_2(t) for a particular choice of t_0 and β (RQ) or σ = ln(3) (LN).

19 Improved quadrature: end conditions by integrating the kernels
$$\int_{-\infty}^{\infty} h_k(\omega, e^s)f(s)\,ds \approx \Delta s\sum_{n=1}^{N} h_k(\omega, e^{s_n})f(s_n) + f(s_N)\,r_{k,N}(\omega, e^{s_N}) + f(s_1)\,r_{k,1}(\omega, e^{s_1}), \qquad (11)$$
where the end corrections integrate the kernels over the truncated tails:
$$r_{k,N}(\omega, e^{s_N}) := \begin{cases} \tfrac{1}{2}\ln\!\left(1 + (\omega e^{s_N})^{-2}\right) & k = 1,\\[4pt] \tfrac{\pi}{2} - \tan^{-1}(\omega e^{s_N}) & k = 2,\end{cases} \qquad r_{k,1}(\omega, e^{s_1}) := \begin{cases} \tfrac{1}{2}\,\Delta s\, h_1(\omega, e^{s_1}) & k = 1,\\[4pt] \tan^{-1}(\omega e^{s_1}) & k = 2.\end{cases}$$
With $H_k(s) = h_k(\omega, e^s)f(s)$ and the neglected mass of f bounded by $\delta(f)$, this yields an error bound $E_k(\omega)$ combining a term involving ε, the tail contribution $\delta(f)$, and the standard composite quadrature term proportional to $(s_N - s_1)^3\,|H_k''(\zeta)|/N^2$.

20 Impact on Conditioning
Table: Comparing the condition numbers of A_1, A_2, A_3, A_4 for the different quadratures (in t via (5), and in s via (10) and (11)) for the optimal selection of nodes t_n, using t = 1/ω.
The transformation acts as a right preconditioner on the system. A_3 gives an overdetermined system. Use matrix A_3 if there is sufficient data for resolution of the solution.

21 Right preconditioning separates the frequencies of the basis
(a) A_1: U above and V below. (b) A_1^s: U above and V below.
Figure: Normalized Cumulative Periodograms (NCPs) and Kolmogorov-Smirnov 95% confidence bounds for white noise, for the spectral decompositions of the matrices A_1 and A_1^s. NCPs for A_2 and A_2^s show a similar separation of frequency content for the basis vectors. An NCP outside the KS bounds is, with 95% confidence, not white noise.

22 Why is the spectral decomposition important?
The solution is expressible in terms of the singular value decomposition of the matrix $A = U\Sigma V^T$:
$$x = \sum_{i=1}^{n}\frac{u_i^T b}{\sigma_i}\,v_i.$$
1. If u_i and/or v_i are noise contaminated, so is x.
2. The Picard condition looks at the ratios $\frac{u_i^T b}{\sigma_i}$, but depends on the right hand side b to see the impact of noise.
3. The NCP looks only at the model matrix A.
4. The spectral decomposition is more informative than the condition number.
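
A small Python sketch of the expansion on this slide (an illustrative helper, not code from the talk): it returns the naive SVD solution together with the Picard ratios, whose growth across the index i flags the noise-dominated components.

```python
import numpy as np

def svd_solution(A, b):
    """Naive inverse solution via the SVD, plus the Picard ratios |u_i^T b| / sigma_i."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = U.T @ b                  # u_i^T b
    picard = np.abs(coeffs) / s       # grows once |u_i^T b| stops decaying faster than sigma_i
    x = Vt.T @ (coeffs / s)           # x = sum_i (u_i^T b / sigma_i) v_i
    return x, s, picard
```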

23 Systems are ill-conditioned: the discrete problem is ill-posed
[Dataset R_pol R] = [ ]
Figure: Example solutions for real data using A_1, A_2, A_3 and A_4 provide no useful information: the solution is sensitive to noise in the data and to the floating point arithmetic calculations.

24 Consider the mapping A which takes the solution x to the data b: Ax = b.
Definition (Well-posed, Hadamard 1923): The inverse problem of finding x from b is called well-posed if all of the following hold:
Existence: a solution exists for any data b in the data space;
Uniqueness: the solution x is unique;
Stability: continuous dependence of x on b, i.e. the inverse mapping b → x is continuous.
Definition (Ill-posed, according to Hadamard): A problem is ill-posed if it does not satisfy all three conditions for well-posedness. Alternatively:
1. b ∉ range(A);
2. the inverse is not unique because more than one image is mapped to the same data;
3. an arbitrarily small change in b can cause an arbitrarily large change in x.

25 Simulations: RQ simulation (solid line) and LN simulation
Figure: Datasets A (top row), B (middle row) and C (bottom row); for each, the panels show g(t), tg(t), the Nyquist plot, Z_1(ω) and Z_2(ω).

26 Regularization
The improved matrices are still ill-conditioned: regularization is required.
$$x(\lambda) = \arg\min\{\|Ax - b\|^2 + \lambda^2\|Lx\|^2\} \qquad (12)$$
Tikhonov regularization controls the smoothness of x.
λ: regularization parameter; many techniques exist to find λ, some requiring statistical properties of the measurements.
L: standard choices are the identity I or derivative operators.
An explicit solution is given with respect to the basis vectors of the GSVD of $[A; L] = [U\Sigma_A Z^T; V\Sigma_L Z^T]$, with $\gamma_i = \sigma_i^A/\sigma_i^L$:
$$x(\lambda) = \sum_{i=1}^{n}\frac{\gamma_i^2}{\gamma_i^2 + \lambda^2}\,\frac{u_i^T b}{\sigma_i^A}\,z_i,$$
which solves the normal equations $(A^TA + \lambda^2 L^TL)x = A^Tb$.
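
One standard way to evaluate x(λ) in (12) without forming the normal equations is to solve the stacked least-squares problem min ||[A; λL]x - [b; 0]||. The sketch below (Python, my own helper names, with a first-difference operator as one of the derivative choices mentioned above) illustrates that route rather than the GSVD formula on the slide:

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||Lx||^2 via the stacked system [A; lam*L]."""
    K = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x

def first_derivative(n):
    """First-derivative operator: (n-1) x n forward-difference matrix."""
    return np.diff(np.eye(n), axis=0)
```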

27 Example Solutions and Residuals over a range of λ: a real example
Panels show the solutions and the residuals for each choice of the regularization operator L.

28 Parameter Choice: L-Curve (e.g. Hansen, Regularization Tools)
For a given residual norm there does not exist a solution with smaller semi-norm than the one provided by the L-Curve; in that sense the solution is optimal.

29 Residual Periodogram: uses time series analysis to detect noise (e.g. Rust and O'Leary; Hansen, Kilmer and Kjeldsen)
Suppose the vector y is a time series indexed by component i.
Diagnostic 1: Does the histogram of the entries of y look consistent with y ~ N(0, 1) (i.e. independent normally distributed entries with mean 0 and variance 1)? It is not practical to automatically look at a histogram and make an assessment.
Diagnostic 2: Test the expectation that the y_i are selected from a white noise time series. Take the Fourier transform of y and form the cumulative periodogram z from the power spectrum c (with q = floor(m/2)):
$$c_j = |\mathrm{dft}(y)_j|^2, \qquad z_j = \frac{\sum_{i=1}^{j} c_i}{\sum_{i=1}^{q} c_i}, \qquad j = 1, \ldots, q.$$
Automatic test: is the line $(z_j, j/q)$ close to a straight line with slope 1?
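
The cumulative periodogram above is only a few lines of numpy. This illustrative sketch (my naming; it follows the common convention of dropping the zero-frequency term) returns z together with its maximum deviation from the white-noise diagonal, which is the quantity used on the following slides to rank the residual vectors over λ:

```python
import numpy as np

def ncp(y):
    """Normalized cumulative periodogram z of a residual vector y, and its deviation
    from the straight line expected for white noise."""
    q = len(y) // 2
    c = np.abs(np.fft.fft(y))[1:q + 1] ** 2   # power spectrum c_j, DC term omitted
    z = np.cumsum(c) / np.sum(c)              # z_j = sum_{i<=j} c_i / sum_{i<=q} c_i
    white = np.arange(1, q + 1) / q           # white-noise line
    return z, np.max(np.abs(z - white))
```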

30 Measure Deviation from Straight Line: Residual Vector
Figure: Low noise: RQ-A, A_4 (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual and is picked automatically.

31 Measure Deviation from Straight Line: Residual Vector
Figure: High noise: LN-B, A_4 (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Nonnegativity is violated.

32 Is the approach robust? Solutions by LLS using matrix A_4: minimum error red line, LC dotted blue, NCP green line.
Figure: Mean error and example LLS solutions for several choices of L. ..% noise, RQ-B data set, matrix A_4.

33 Observations
1. Nonnegativity of g(t) is not maintained.
2. Boundary effects are observed.
3. However, RP and LC are robust methods to estimate λ. They find solutions with error near the minimum.
4. The minimal error curve has a flat region; solutions are relatively insensitive when picked within the minimal error region.
5. But solutions with high noise are lacking features: over smoothing.
Can we impose additional constraints?

34 Nonnegative LS with regularization
The DRT satisfies g(t) > 0, and the transformed DRT f(s) = tg(t) > 0 also. Hence nonnegative least squares:
$$x(\lambda) = \arg\min\{\|Ax - b\|^2 + \lambda^2\|Lx\|^2, \ \text{s.t.}\ x \ge 0\} \qquad (13)$$
$$\phantom{x(\lambda)} = \arg\min\left\{\left\|\begin{pmatrix}Ax - b\\ \lambda Lx\end{pmatrix}\right\|^2, \ \text{s.t.}\ x \ge 0\right\}. \qquad (14)$$
Depends on the regularization parameter λ. Can be solved using a standard solver, e.g. Matlab's lsqnonneg. Impose the bound x ≥ 0 on the solution.
Parameter choice: solve for multiple λ and pick the solution as for the LS problem. The Residual Periodogram is new in the context of NNLS.
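
Problems (13)-(14) go directly to any nonnegative least-squares solver applied to the stacked system. The talk mentions Matlab's lsqnonneg; the sketch below uses the SciPy counterpart scipy.optimize.nnls, with wrapper names of my own choosing:

```python
import numpy as np
from scipy.optimize import nnls

def nnls_tikhonov(A, b, L, lam):
    """min ||Ax - b||^2 + lam^2 ||Lx||^2 subject to x >= 0, as in (13)-(14)."""
    K = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    x, _ = nnls(K, rhs)
    return x
```

As on the slide, the parameter choice is then a sweep over λ, selecting the solution whose residual NCP lies closest to the white-noise line.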

35 Measure Deviation from Straight Line: Residual Vector
Figure: Low noise: RQ-A, A_4 (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative.

36 Measure Deviation from Straight Line: Residual Vector
Figure: High noise: LN-B, A_4 (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative but smooth.

37 Robustness of Parameter Choice: Solutions by NNLS using matrix A_4: minimum error red line, LC dotted blue, NCP green line.
Figure: Mean error and example NNLS solutions for several choices of L. ..% noise, RQ-B data set, matrix A_4.

38 Observations: Results from NNLS with Parameter Choice
1. Nonnegativity of g(t) is maintained.
2. LN models are more robust for recovery, by both LLS and NNLS.
3. RP and LC are just as robust for NNLS as for LLS.
4. With high noise all solutions are smoothed (both by LLS and NNLS).
Can we use the overdetermined system with less resolution, A_3?

39 Measure Deviation from Straight Line: Residual Vector
Figure: High noise: LN-B, A_3 (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative but smooth.

40 Results with matrix A_3
Figure: Mean error and example NNLS solutions for several choices of L. ..% noise, RQ-B data set, matrix A_3.

41 Observations on matrix A_3
1. Nonnegativity of g(t) is maintained.
2. Solutions are more noisy: potentially less ability to locate peaks in low noise data.
3. Matrix A_3 would be good for larger data sets that can offer greater resolution.
4. Robustness again holds for determining the parameters by RP and LC.
5. High noise is over smoothed.

42 Augmented discrepancy principle
Morozov discrepancy principle: for $b = \hat{b} + \eta$ where the noise η has variance σ², the discrepancy principle finds λ such that
$$\|Ax - b\|^2 = \|r\|^2 \approx m\sigma^2.$$
The RP looks at the entire vector r.
The augmented residual is defined for the entire system (but should be weighted): $R = [Ax - b;\ \lambda Lx]$.
Discrepancy analysis again finds that $\|R\|^2$ follows a χ² distribution.
The RP applies to the vector R just as for r.
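
A sketch of how the augmented-residual parameter choice could be looped, reusing the solver and ncp helpers sketched earlier (again illustrative only; the weighting mentioned above is omitted):

```python
import numpy as np

def pick_lambda_augmented(A, b, L, lambdas, solve, ncp):
    """Return the lambda (and solution) whose augmented residual R = [Ax - b; lam*Lx]
    has the smallest NCP deviation from the white-noise line."""
    best = None
    for lam in lambdas:
        x = solve(A, b, L, lam)
        R = np.concatenate([A @ x - b, lam * (L @ x)])
        _, deviation = ncp(R)
        if best is None or deviation < best[0]:
            best = (deviation, lam, x)
    return best[1], best[2]
```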

43 Applying the RP to the augmented problem: High noise, LN-B
Figure: A_4, LS.

44 Applying the RP to the augmented problem: High noise, LN-B
Figure: A_3, LS.

45 Applying the RP to the augmented problem: High noise, LN-B
Figure: A_4, NNLS.

46 Applying the RP to the augmented problem: High noise, LN-B
Figure: A_3, NNLS.

47 Conclusions
Overall, EIS is small scale: standard parameter choice applies for NNLS.
The Residual Periodogram extends to the nonnegatively constrained case.
The Residual Periodogram applies to the augmented residual.
The augmented residual generates less smoothing.
There is potential to use the RP as a stopping rule in iterative methods.
NL fitting is insufficient: the exact DRT form must be known.
Analysis of the quadrature demonstrates minimal model error.
The transformation is a right preconditioning.
LLS: information on the noise level would be helpful, allowing other methods (UPRE, χ², etc.).
With high noise all solutions are over smoothed. The ability to detect multiple processes in the presence of noise is challenging.

48 Acknowledgements
Authors Hansen, Hogue and Sander were supported by NSF CSUMS grant DMS 73587: CSUMS: Undergraduate Research Experiences for Computational Math Sciences Majors at ASU. Renaut was supported by NSF MCTP grant DMS 4877: MCTP: Mathematics Mentoring Partnership Between Arizona State University and the Maricopa County Community College District; NSF grant DMS 2655: Novel Numerical Approximation Techniques for Non-Standard Sampling Regimes; and AFOSR grant 2577: Development and Analysis of Non-Classical Numerical Approximation Methods. Popat was supported by ONR grant N42344: Characterizing electron transport resistances from anode-respiring bacteria using electrochemical techniques. Thanks also to Professor César Torres for discussions concerning the EIS modeling.

49 RP for Augmented Residual Vector and LLS
Figure: Low noise: A_4, LS, LN-B (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative.

50 RP for Augmented Residual Vector and LLS
Figure: High noise: A_4, LS, LN-B (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative but smooth.

51 RP for Augmented Residual Vector and LLS
Figure: Low noise: A_3, LS, LN-B (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative.

52 RP for Augmented Residual Vector and LLS
Figure: High noise: A_3, LS, LN-B (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative but smooth.

53 RP for Augmented Residual Vector and NNLS
Figure: Low noise: A_4, NNLS, LN-B (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative.

54 RP for Augmented Residual Vector and NNLS
Figure: High noise: A_4, NNLS, LN-B (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative but smooth.

55 RP for Augmented Residual Vector and NNLS
Figure: Low noise: A_3, NNLS, LN-B (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative.

56 RP for Augmented Residual Vector and NNLS
Figure: High noise: A_3, NNLS, LN-B (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative but smooth.

57 RP for Augmented Residual Vector and LLS
Figure: Low noise: A_4, LS, RQ-A (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative.

58 RP for Augmented Residual Vector and LLS
Figure: High noise: A_4, LS, RQ-A (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative but smooth.

59 RP for Augmented Residual Vector and LLS
Figure: Low noise: A_3, LS, RQ-A (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative.

60 RP for Augmented Residual Vector and LLS
Figure: High noise: A_3, LS, RQ-A (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative but smooth.

61 RP for Augmented Residual Vector and NNLS
Figure: Low noise: A_4, NNLS, RQ-A (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative.

62 RP for Augmented Residual Vector and NNLS
Figure: High noise: A_4, NNLS, RQ-A (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative but smooth.

63 RP for Augmented Residual Vector and NNLS
Figure: Low noise: A_3, NNLS, RQ-A (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative.

64 RP for Augmented Residual Vector and NNLS
Figure: High noise: A_3, NNLS, RQ-A (optimal λ in black).
Testing for white noise: Calculate the NCP and measure the deviation from the white noise line for several λ. The vector with highest white noise content indicates transfer of noise to the residual. Solutions are nonnegative but smooth.

65 Results with matrix A_4, LLS
Figure: Mean error and example LLS solutions for several choices of L. ..% noise, LN-B data set, matrix A_4.

66 Results with matrix A_4, NNLS
Figure: Mean error and example NNLS solutions for several choices of L. ..% noise, LN-B data set, matrix A_4.

67 Results with matrix A_3, NNLS
Figure: Mean error and example NNLS solutions for several choices of L. ..% noise, LN-B data set, matrix A_3.

68 Is the approach robust? Solutions by LLS using matrix A_4: minimum error red line, LC dotted blue, NCP green line.
Figure: Mean error and example LLS solutions for several choices of L. ..% noise, RQ-A data set, matrix A_4.

69 Robustness of Parameter Choice: Solutions by NNLS using matrix A_4: minimum error red line, LC dotted blue, NCP green line.
Figure: Mean error and example NNLS solutions for several choices of L. ..% noise, RQ-A data set, matrix A_4.

70 Is the approach robust? Solutions by LLS using matrix A_4: minimum error red line, LC dotted blue, NCP green line.
Figure: Mean error and example LLS solutions for several choices of L. ..% noise, LN-A data set, matrix A_4.

71 Robustness of Parameter Choice: Solutions by NNLS using matrix A_4: minimum error red line, LC dotted blue, NCP green line.
Figure: Mean error and example NNLS solutions for several choices of L. ..% noise, LN-A data set, matrix A_4.
