Regularization Parameter Estimation for Least Squares: A Newton method using the χ²-distribution


Rosemary Renaut, Jodi Mead
Arizona State and Boise State
September 2007

Outline
1. Introduction: ill-posed least squares; some standard methods
2. A statistically based method: the χ² method
   - Background
   - Algorithm
   - Single-variable Newton method
   - Extension to general D: generalized Tikhonov
   - Observations
3. Results
4. Conclusions
5. References

Regularized Least Squares for Ax = b

Assume A ∈ R^{m×n}, b ∈ R^m, x ∈ R^n, and the system is ill-posed. Generalized Tikhonov regularization, with an operator D acting on x:

x̂ = argmin J(x) = argmin { ‖Ax - b‖²_{W_b} + ‖D(x - x_0)‖²_{W_x} }.   (1)

Assume N(A) ∩ N(D) = {0}. Statistically, W_b is the inverse covariance matrix for the data b. Standard choice: W_x = λ² I, with λ the unknown penalty parameter, so that (1) becomes

x̂(λ) = argmin J(x) = argmin { ‖Ax - b‖²_{W_b} + λ² ‖D(x - x_0)‖² }.   (2)

Question: What is the correct λ?
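As a concrete illustration (an editorial addition, not part of the original slides), here is a minimal numpy sketch that solves (2) for a fixed λ via the normal equations; the function name is mine, and A, Wb, D, b, x0 are assumed given.

```python
import numpy as np

def generalized_tikhonov(A, b, Wb, D, x0, lam):
    """Minimize ||Ax - b||^2_{Wb} + lam^2 ||D(x - x0)||^2 for fixed lam.

    Solves the normal equations
        (A^T Wb A + lam^2 D^T D) x = A^T Wb b + lam^2 D^T D x0.
    A sketch only: assumes the regularized system is nonsingular.
    """
    lhs = A.T @ Wb @ A + lam**2 * (D.T @ D)
    rhs = A.T @ Wb @ b + lam**2 * (D.T @ D) @ x0
    return np.linalg.solve(lhs, rhs)
```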

Some standard approaches I: L-curve (find the corner)

Let r(λ) = (A(λ) - A)b, with influence matrix

A(λ) = A (A^T W_b A + λ² D^T D)^{-1} A^T.

Plot log ‖Dx‖ against log ‖r(λ)‖ and trade off the two contributions.
- Expensive: requires a range of λ values.
- The GSVD makes the calculations efficient.
- Not statistically based.

[Figures: an L-curve with a clear corner ("find corner") and one without ("no corner").]
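A minimal sketch of the L-curve computation (an editorial addition, assuming W_b = I, D = I, x_0 = 0 for simplicity): compute the curve over a logarithmic grid of λ using the SVD filter factors, the efficiency trick the slide alludes to.

```python
import numpy as np

def l_curve_points(A, b, lams):
    """Return (log ||x(lam)||, log ||r(lam)||) over a grid of lambdas.

    Sketch for the simple case Wb = I, D = I, x0 = 0, using the SVD
    filter factors f_i = s_i^2 / (s_i^2 + lam^2). The residual norm
    omits the component of b orthogonal to range(A).
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                      # projected data
    pts = []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)      # Tikhonov filter factors
        x_norm = np.linalg.norm(f * beta / s)
        r_norm = np.linalg.norm((1.0 - f) * beta)
        pts.append((np.log(x_norm), np.log(r_norm)))
    return pts

# Usage sketch: lams = np.logspace(-6, 2, 50); pts = l_curve_points(A, b, lams)
```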

Some standard approaches II: Generalized Cross-Validation (GCV)

Minimize the GCV function

GCV(λ) = ‖b - Ax(λ)‖²_{W_b} / [trace(I_m - A(λ))]²,

which estimates the predictive risk.
- Expensive: requires a range of λ values.
- The GSVD makes the calculations efficient.
- Statistically based.
- Requires a minimum: there may be multiple minima, and the function is sometimes flat.

[Figures: GCV curves illustrating multiple minima and flat regions.]
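A minimal sketch of the GCV function via the SVD (again an editorial addition, under the same simplifying assumptions W_b = I, D = I, x_0 = 0):

```python
import numpy as np

def gcv(lam, s, beta, m):
    """GCV(lam) = ||b - A x(lam)||^2 / trace(I - A(lam))^2 via the SVD.

    s: singular values of A; beta = U^T b; m: number of data points.
    Sketch for Wb = I, D = I (omits any component of b outside range(A)).
    """
    f = s**2 / (s**2 + lam**2)          # filter factors
    resid2 = np.sum(((1.0 - f) * beta) ** 2)
    trace = m - np.sum(f)               # trace(I_m - A(lam))
    return resid2 / trace**2

# Minimize over a log grid, e.g.:
# lams = np.logspace(-6, 2, 200)
# lam_gcv = lams[np.argmin([gcv(l, s, beta, m) for l in lams])]
```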

Some standard approaches III: Unbiased Predictive Risk Estimation (UPRE)

Minimize the expected value of the predictive risk, i.e., minimize the UPRE function

U(λ) = ‖b - Ax(λ)‖²_{W_b} + 2 trace(A(λ)) - m.

- Expensive: requires a range of λ values.
- The GSVD makes the calculations efficient.
- Statistically based.
- A minimum is needed.

[Figure: UPRE curves.]
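The corresponding UPRE sketch under the same simplifying assumptions (an editorial addition, reusing the SVD quantities from the GCV sketch):

```python
import numpy as np

def upre(lam, s, beta, m):
    """UPRE(lam) = ||b - A x(lam)||^2 + 2 trace(A(lam)) - m via the SVD.

    Same assumptions as the GCV sketch: Wb = I, D = I, beta = U^T b.
    """
    f = s**2 / (s**2 + lam**2)
    resid2 = np.sum(((1.0 - f) * beta) ** 2)
    return resid2 + 2.0 * np.sum(f) - m
```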

Development

A new statistically based method: the χ² method.
- Its background
- A Newton algorithm
- Some examples
- Future work

General Result: Tikhonov (D = I). The cost functional at its minimum is a χ² random variable.

Theorem (Rao 1973; Tarantola 2005; Mead 2007). Let

J(x) = (b - Ax)^T C_b^{-1} (b - Ax) + (x - x_0)^T C_x^{-1} (x - x_0),

where x and b are stochastic (they need not be normal), the components of r = b - Ax_0 are iid (assume no components are zero), and the matrices C_b = W_b^{-1} and C_x = W_x^{-1} are SPD. Then, for large m, the minimum value of J is a random variable that follows a χ² distribution with m degrees of freedom.
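A quick Monte Carlo check of the theorem (an editorial addition, under the stronger assumption of Gaussian data and prior): draw x and b, minimize J, and compare the sample mean of the minima with m.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 20
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
sig_b, sig_x = 0.1, 1.0          # assumed noise / prior standard deviations

mins = []
for _ in range(2000):
    x_true = x0 + sig_x * rng.standard_normal(n)
    b = A @ x_true + sig_b * rng.standard_normal(m)
    # Minimizer of J for C_b = sig_b^2 I, C_x = sig_x^2 I:
    lhs = A.T @ A / sig_b**2 + np.eye(n) / sig_x**2
    rhs = A.T @ b / sig_b**2 + x0 / sig_x**2
    xh = np.linalg.solve(lhs, rhs)
    J = np.sum((b - A @ xh) ** 2) / sig_b**2 + np.sum((xh - x0) ** 2) / sig_x**2
    mins.append(J)

print(np.mean(mins), "should be close to m =", m)   # chi^2_m has mean m
```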

Implications. The theorem implies

m - √(2m) z_{α/2} < J(x̂) < m + √(2m) z_{α/2}

for a (1 - α) confidence interval, with x̂ the solution. Equivalently, when D = I,

m - √(2m) z_{α/2} < r^T (A C_x A^T + C_b)^{-1} r < m + √(2m) z_{α/2}.

Note there are no assumptions on W_x: it is completely general. Can we use the result to obtain an efficient algorithm?

First attempt: a new algorithm for estimating the model covariance.

Algorithm (Mead 07). Given the confidence interval parameter α, the initial residual r = b - Ax_0, and an estimate of the data covariance C_b, find L_x which solves the nonlinear optimization:

Minimize ‖L_x L_x^T‖²_F
Subject to m - √(2m) z_{α/2} < r^T (A L_x L_x^T A^T + C_b)^{-1} r < m + √(2m) z_{α/2},
with A L_x L_x^T A^T + C_b well-conditioned.

Expensive.

Single Variable Approach: seek an efficient, practical algorithm.

Let W_x = σ_x^{-2} I (i.e., C_x = σ_x² I), so the regularization parameter is λ = 1/σ_x. Use the SVD W_b^{1/2} A = U_b Σ_b V_b^T, with singular values σ_1 ≥ σ_2 ≥ … ≥ σ_p, and define s = U_b^T W_b^{1/2} r. Find σ_x such that

m - √(2m) z_{α/2} < s^T diag(1/(σ_i² σ_x² + 1)) s < m + √(2m) z_{α/2}.

Equivalently, find σ_x² such that

F(σ_x) = s^T diag(1/(1 + σ_x² σ_i²)) s - m = 0.

Scalar root finding: Newton's method.
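A minimal sketch of the scalar root-finding problem (an editorial addition): evaluate F(σ_x) from the SVD quantities and bracket the root, here with scipy's brentq as a simple stand-in for the Newton iteration whose explicit derivative appears two slides below.

```python
import numpy as np
from scipy.optimize import brentq

def F(sigma_x, s, sv, m):
    """F(sigma_x) = sum_i s_i^2 / (1 + sigma_x^2 sv_i^2) - m.

    s: projected residual U_b^T W_b^{1/2} r (length m);
    sv: singular values of W_b^{1/2} A (length p <= m);
    components of s beyond p pass through unfiltered.
    """
    filt = 1.0 / (1.0 + sigma_x**2 * sv**2)
    return np.sum(filt * s[: len(sv)] ** 2) + np.sum(s[len(sv):] ** 2) - m

# Usage sketch: F is monotonic decreasing in sigma_x, so if
# F(a) > 0 > F(b) the root is bracketed:
# sigma_root = brentq(F, 1e-10, 1e10, args=(s, sv, m))
# lam = 1.0 / sigma_root
```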

Extension to Generalized Tikhonov

Define

x̂_GTik = argmin J_D(x) = argmin { ‖Ax - b‖²_{W_b} + ‖D(x - x_0)‖²_{W_x} }.   (3)

Theorem. For large m, the minimum value of J_D is a random variable which follows a χ² distribution with m - n + p degrees of freedom (assuming that no components of r are zero).

Proof. Use the generalized singular value decomposition of [W_b^{1/2} A; W_x^{1/2} D].

Goal: find W_x such that J_D is χ² with m - n + p degrees of freedom.

Newton Root Finding, W_x = σ_x^{-2} I_p

Let the GSVD of [W_b^{1/2} A; D] be

W_b^{1/2} A = U [Υ; 0_{(m-n)×n}] X^T,    D = V [M, 0_{p×(n-p)}] X^T,

where the γ_i are the generalized singular values. Set m̃ = m - n + p, and define

s̃_i = s_i / (γ_i² σ_x² + 1), i = 1, …, p,    t_i = s̃_i γ_i.

Solve F = 0 by Newton's method, where

F(σ_x) = Σ_{i=1}^p s_i²/(γ_i² σ_x² + 1) + Σ_{i=n+1}^m s_i² - m̃    and    F'(σ_x) = -2 σ_x ‖t‖²₂.
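A sketch of the resulting Newton iteration (an editorial addition; gamma, s, m, n, p as defined above, with the function name, starting value, and tolerance arbitrary choices):

```python
import numpy as np

def newton_sigma(gamma, s, m, n, p, sigma0=1.0, tol=1e-8, maxit=50):
    """Newton iteration for F(sigma) = 0 with the analytic derivative.

    F(sigma)  = sum_{i<=p} s_i^2 / (gamma_i^2 sigma^2 + 1)
                + sum_{i>n} s_i^2 - (m - n + p)
    F'(sigma) = -2 sigma * sum_{i<=p} (s_i gamma_i / (gamma_i^2 sigma^2 + 1))^2
    """
    m_tilde = m - n + p
    tail = np.sum(s[n:] ** 2)           # unfiltered components i = n+1..m
    sigma = sigma0
    for _ in range(maxit):
        denom = gamma**2 * sigma**2 + 1.0
        Fval = np.sum(s[:p] ** 2 / denom) + tail - m_tilde
        t = s[:p] * gamma / denom
        dF = -2.0 * sigma * np.sum(t**2)
        step = Fval / dF
        sigma -= step
        if abs(step) < tol * max(1.0, abs(sigma)):
            break
    return sigma                        # regularization parameter lam = 1/sigma
```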

An Illustrative Example: phillips

Fredholm integral equation of the first kind (from Hansen's Regularization Tools). Add noise to b with standard deviation σ_{b_i} = .01 b_i + .1 b_max; covariance matrix C_b = σ_b² I_m = W_b^{-1}, where σ_b² is the average of the σ_{b_i}².

[Figure: the original b and the noisy data; error 10%.]

Compare solutions: + is the reference x_0, · is the exact solution, o is the L-curve solution. Three other solutions: UPRE, GCV, and the χ² method (blue, magenta, black). Each method gives a different solution, but UPRE, GCV, and χ² are comparable.

[Figure: comparison of the solutions with the new method.]

Observations

- GCV, UPRE, L-curve, and the χ² method all use the GSVD (or SVD).
- The algorithm is cheap compared to GCV, UPRE, and the L-curve.
- F is monotonic decreasing and even.
- A solution either exists and is unique for positive σ, or no solution exists (F(0) < 0).
- Theoretically, lim_{σ→∞} F > 0 is possible. This is equivalent to λ = 0: no regularization is needed.

[Figure: example of F against σ for several initializations.]

Remark on F(0) < 0

Notice that when F(0) < 0, m̃ is too big relative to J; equivalently, there are insufficient degrees of freedom. Notice

J(x̂) = ‖P^{1/2} s‖²₂,    P = diag(1/((γ_i σ)² + 1), 0_{n-p}, I_{m-n}).

In particular, J(x̂(0)) = ‖P^{1/2}(0) s‖²₂ = y for some y. If y < m̃, set m̃ = floor(y). The theorem is revised to

m̃ = min{floor(J(0)), m - n + p}.

In the example, J(0) ≈ 39 and F(0) ≈ -461; on the right, m̃ = 38.
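A sketch of this degrees-of-freedom safeguard (an editorial addition, reusing the quantities from the Newton sketch above):

```python
import numpy as np

def effective_dof(gamma, s, m, n, p):
    """Revised degrees of freedom: min(floor(J(0)), m - n + p).

    At sigma = 0 the filter weights are all 1, so
    J(0) = sum_{i<=p} s_i^2 + sum_{i>n} s_i^2.
    """
    J0 = np.sum(s[:p] ** 2) + np.sum(s[n:] ** 2)
    return min(int(np.floor(J0)), m - n + p)
```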

Example: Seismic Signal Restoration

- Real data set of 48 signals of length 500.
- The point spread function is derived from the signals.
- Solve Pf = g, where P is the PSF matrix and g is the signal; restore f.
- Calculate the signal variance pointwise over all 48 signals.
- Compare restoration of the S-wave with derivative orders 0, 1, 2.
- Weighting matrices are I, σ_g² I, and diag(σ_{g_i}²): cases 1, 2, and 3.

Tikhonov Regularization

Observations:
- The reduced degrees of freedom are relevant; the degrees of freedom are found automatically.
- Cases 1 and 2 have different solutions.
- Case 3 preserves the features of the signal.

[Figures: restorations for the three weighting cases.]

First and Second Order Derivative Restoration

Observations:
- Here derivative smoothing is not desirable.
- Case 3 preserves the signal characteristics.
- The value given is λ, the regularization parameter; λ increases with derivative order, giving more smoothing.

[Figures: first- and second-order derivative restorations.]

Comparison with L-curve and UPRE Solutions

Observations:
- The L-curve underestimates λ severely.
- UPRE and χ² are similar when the degrees of freedom are limited for χ².
- UPRE underestimates λ for the case 2 and case 3 weightings.

[Figures: solutions for the compared methods.]

Newton's Method Converges in 5-10 Iterations

[Table: convergence characteristics (noise level, C_b, iteration counts k, mean and standard deviation) for problem phillips with n = 40 over 500 runs.]

[Table: convergence characteristics for problem blur with n = 36 over 500 runs.]

Estimating the Error and Predictive Risk

[Table: error characteristics (mean relative error for the χ², L-curve, GCV, and UPRE methods at several noise levels) for problem phillips with n = 60 over 500 runs with error-contaminated x_0; relative errors larger than .009 removed.]

Results are comparable.

[Table: predictive risk characteristics for the χ², L-curve, GCV, and UPRE methods for problem phillips with n = 60 over 500 runs.]

The χ² method does not give the best estimate of the risk.

[Figure: error histogram; normal noise on the right-hand side, first-order derivative, C_b = σ² I.]

[Figure: error histogram; exponential noise on the right-hand side, first-order derivative, C_b = σ² I.]

Conclusions

- The χ² Newton algorithm is cost effective.
- It performs as well as (or better than) GCV and UPRE when statistical information is available.
- It should be the method of choice when statistical information is provided.
- The method can be adapted to find W_b if W_x is provided.

Future Work

- Analyse truncated expansions (TSVD and TGSVD), which reduce the degrees of freedom.
- Further theoretical analysis and simulations with other noise distributions.
- Comparison with the new work of Rust & O'Leary.
- Can the method be extended to nonlinear regularization terms (e.g., total variation)?
- Development of the nonlinear least squares approach for general diagonal W_x.
- Efficient calculation of uncertainty information (the covariance matrix).
- Nonlinear problems?

Some Solutions: with no prior information x_0

Illustrated are solutions and error bars.

[Figures: with no statistical information the solution is smoothed; with statistical information, C_b = diag(σ_{b_i}²).]

Some Generalized Tikhonov Solutions: First Order Derivative

[Figures: no statistical information; C_b = diag(σ_{b_i}²).]

Some Generalized Tikhonov Solutions: prior x_0, solution not smoothed

[Figures: no statistical information; C_b = diag(σ_{b_i}²).]

Some Generalized Tikhonov Solutions: x_0 = 0, exponential noise

[Figures: no statistical information; C_b = diag(σ_{b_i}²).]

Relationship to the Discrepancy Principle

The discrepancy principle can also be implemented by a Newton method: find σ_x such that the regularized residual satisfies

σ_b² = (1/m) ‖b - Ax(σ)‖²₂.   (4)

Consistent with our notation,

Σ_{i=1}^p (1/(γ_i² σ² + 1))² s_i² + Σ_{i=n+1}^m s_i² = m.   (5)

The weight in the first sum is squared here; otherwise the functional is the same as F. But the discrepancy principle often oversmooths. What happens here?
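For comparison, a sketch of the discrepancy-principle function alongside F (an editorial addition, using the same GSVD quantities as the Newton sketch); the only change is the squared weight in the filtered sum:

```python
import numpy as np

def discrepancy(sigma, gamma, s, m, n, p):
    """Left-hand side of (5) minus m: its root gives the discrepancy sigma.

    Identical to F except the filter weight 1/(gamma^2 sigma^2 + 1)
    is squared in the first sum.
    """
    w = 1.0 / (gamma**2 * sigma**2 + 1.0)
    return np.sum((w**2) * s[:p] ** 2) + np.sum(s[n:] ** 2) - m
```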

Major References

- Bennett, A., 2005, Inverse Modeling of the Ocean and Atmosphere, Cambridge University Press.
- Hansen, P. C., 1994, Regularization Tools: a Matlab package for analysis and solution of discrete ill-posed problems, Numerical Algorithms.
- Mead, J., 2007, A priori weighting for parameter estimation, J. Inv. Ill-Posed Problems, to appear.
- Rao, C. R., 1973, Linear Statistical Inference and its Applications, Wiley, New York.
- Tarantola, A., 2005, Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM.
- Vogel, C. R., Computational Methods for Inverse Problems, SIAM, Frontiers in Applied Mathematics.

blur: Atmospheric Blur (Gaussian PSF) (Hansen), Again with Noise

[Figures: solution on the left and degraded image on the right.]

[Figures: solutions using x_0 = 0, generalized Tikhonov, second derivative, 5% noise.]

[Figures: solutions using x_0 = 0, generalized Tikhonov, second derivative, 10% noise.]


More information

Old and new parameter choice rules for discrete ill-posed problems

Old and new parameter choice rules for discrete ill-posed problems Old and new parameter choice rules for discrete ill-posed problems Lothar Reichel Giuseppe Rodriguez Dedicated to Claude Brezinski and Sebastiano Seatzu on the Occasion of Their 70th Birthdays. Abstract

More information

Unfolding Methods in Particle Physics

Unfolding Methods in Particle Physics Unfolding Methods in Particle Physics Volker Blobel University of Hamburg, Hamburg, Germany 1 Inverse problems Abstract Measured distributions in particle physics are distorted by the finite resolution

More information

2 Tikhonov Regularization and ERM

2 Tikhonov Regularization and ERM Introduction Here we discusses how a class of regularization methods originally designed to solve ill-posed inverse problems give rise to regularized learning algorithms. These algorithms are kernel methods

More information

Regularization Parameter Selection Methods for Ill-Posed Poisson Maximum Likelihood Estimation

Regularization Parameter Selection Methods for Ill-Posed Poisson Maximum Likelihood Estimation Regularization Parameter Selection Methods for Ill-Posed Poisson Maximum Likelihood Estimation Johnathan M. Bardsley and John Goldes Department of Mathematical Sciences University of Montana Missoula,

More information

A NEW L-CURVE FOR ILL-POSED PROBLEMS. Dedicated to Claude Brezinski.

A NEW L-CURVE FOR ILL-POSED PROBLEMS. Dedicated to Claude Brezinski. A NEW L-CURVE FOR ILL-POSED PROBLEMS LOTHAR REICHEL AND HASSANE SADOK Dedicated to Claude Brezinski. Abstract. The truncated singular value decomposition is a popular method for the solution of linear

More information

Solving discrete ill posed problems with Tikhonov regularization and generalized cross validation

Solving discrete ill posed problems with Tikhonov regularization and generalized cross validation Solving discrete ill posed problems with Tikhonov regularization and generalized cross validation Gérard MEURANT November 2010 1 Introduction to ill posed problems 2 Examples of ill-posed problems 3 Tikhonov

More information

Intelligent Embedded Systems Uncertainty, Information and Learning Mechanisms (Part 1)

Intelligent Embedded Systems Uncertainty, Information and Learning Mechanisms (Part 1) Advanced Research Intelligent Embedded Systems Uncertainty, Information and Learning Mechanisms (Part 1) Intelligence for Embedded Systems Ph. D. and Master Course Manuel Roveri Politecnico di Milano,

More information

A novel method for estimating the distribution of convective heat flux in ducts: Gaussian Filtered Singular Value Decomposition

A novel method for estimating the distribution of convective heat flux in ducts: Gaussian Filtered Singular Value Decomposition A novel method for estimating the distribution of convective heat flux in ducts: Gaussian Filtered Singular Value Decomposition F Bozzoli 1,2, L Cattani 1,2, A Mocerino 1, S Rainieri 1,2, F S V Bazán 3

More information

Introduction to Bayesian methods in inverse problems

Introduction to Bayesian methods in inverse problems Introduction to Bayesian methods in inverse problems Ville Kolehmainen 1 1 Department of Applied Physics, University of Eastern Finland, Kuopio, Finland March 4 2013 Manchester, UK. Contents Introduction

More information

SIMPLIFIED GSVD COMPUTATIONS FOR THE SOLUTION OF LINEAR DISCRETE ILL-POSED PROBLEMS (1.1) x R Rm n, x R n, b R m, m n,

SIMPLIFIED GSVD COMPUTATIONS FOR THE SOLUTION OF LINEAR DISCRETE ILL-POSED PROBLEMS (1.1) x R Rm n, x R n, b R m, m n, SIMPLIFIED GSVD COMPUTATIONS FOR THE SOLUTION OF LINEAR DISCRETE ILL-POSED PROBLEMS L. DYKES AND L. REICHEL Abstract. The generalized singular value decomposition (GSVD) often is used to solve Tikhonov

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 A cautionary tale Notes for 2016-10-05 You have been dropped on a desert island with a laptop with a magic battery of infinite life, a MATLAB license, and a complete lack of knowledge of basic geometry.

More information

Arnoldi-Tikhonov regularization methods

Arnoldi-Tikhonov regularization methods Arnoldi-Tikhonov regularization methods Bryan Lewis a, Lothar Reichel b,,1 a Rocketcalc LLC, 100 W. Crain Ave., Kent, OH 44240, USA. b Department of Mathematical Sciences, Kent State University, Kent,

More information

10. Multi-objective least squares

10. Multi-objective least squares L Vandenberghe ECE133A (Winter 2018) 10 Multi-objective least squares multi-objective least squares regularized data fitting control estimation and inversion 10-1 Multi-objective least squares we have

More information

Spectral Regularization

Spectral Regularization Spectral Regularization Lorenzo Rosasco 9.520 Class 07 February 27, 2008 About this class Goal To discuss how a class of regularization methods originally designed for solving ill-posed inverse problems,

More information

Non-linear least-squares inversion with data-driven

Non-linear least-squares inversion with data-driven Geophys. J. Int. (2000) 142, 000 000 Non-linear least-squares inversion with data-driven Bayesian regularization Tor Erik Rabben and Bjørn Ursin Department of Petroleum Engineering and Applied Geophysics,

More information

Structured Linear Algebra Problems in Adaptive Optics Imaging

Structured Linear Algebra Problems in Adaptive Optics Imaging Structured Linear Algebra Problems in Adaptive Optics Imaging Johnathan M. Bardsley, Sarah Knepper, and James Nagy Abstract A main problem in adaptive optics is to reconstruct the phase spectrum given

More information

Problem Set 2. MAS 622J/1.126J: Pattern Recognition and Analysis. Due: 5:00 p.m. on September 30

Problem Set 2. MAS 622J/1.126J: Pattern Recognition and Analysis. Due: 5:00 p.m. on September 30 Problem Set 2 MAS 622J/1.126J: Pattern Recognition and Analysis Due: 5:00 p.m. on September 30 [Note: All instructions to plot data or write a program should be carried out using Matlab. In order to maintain

More information

Estimation theory. Parametric estimation. Properties of estimators. Minimum variance estimator. Cramer-Rao bound. Maximum likelihood estimators

Estimation theory. Parametric estimation. Properties of estimators. Minimum variance estimator. Cramer-Rao bound. Maximum likelihood estimators Estimation theory Parametric estimation Properties of estimators Minimum variance estimator Cramer-Rao bound Maximum likelihood estimators Confidence intervals Bayesian estimation 1 Random Variables Let

More information

y(x) = x w + ε(x), (1)

y(x) = x w + ε(x), (1) Linear regression We are ready to consider our first machine-learning problem: linear regression. Suppose that e are interested in the values of a function y(x): R d R, here x is a d-dimensional vector-valued

More information

The Inversion Problem: solving parameters inversion and assimilation problems

The Inversion Problem: solving parameters inversion and assimilation problems The Inversion Problem: solving parameters inversion and assimilation problems UE Numerical Methods Workshop Romain Brossier romain.brossier@univ-grenoble-alpes.fr ISTerre, Univ. Grenoble Alpes Master 08/09/2016

More information

Regularization via Spectral Filtering

Regularization via Spectral Filtering Regularization via Spectral Filtering Lorenzo Rosasco MIT, 9.520 Class 7 About this class Goal To discuss how a class of regularization methods originally designed for solving ill-posed inverse problems,

More information

Numerical Methods I: Eigenvalues and eigenvectors

Numerical Methods I: Eigenvalues and eigenvectors 1/25 Numerical Methods I: Eigenvalues and eigenvectors Georg Stadler Courant Institute, NYU stadler@cims.nyu.edu November 2, 2017 Overview 2/25 Conditioning Eigenvalues and eigenvectors How hard are they

More information

Lecture 1: Introduction to low-rank tensor representation/approximation. Center for Uncertainty Quantification. Alexander Litvinenko

Lecture 1: Introduction to low-rank tensor representation/approximation. Center for Uncertainty Quantification. Alexander Litvinenko tifica Lecture 1: Introduction to low-rank tensor representation/approximation Alexander Litvinenko http://sri-uq.kaust.edu.sa/ KAUST Figure : KAUST campus, 5 years old, approx. 7000 people (include 1400

More information