Tikhonov Regularization for Weighted Total Least Squares Problems

Tikhonov Regularization for Weighted Total Least Squares Problems

Yimin Wei, Naimin Zhang, Michael K. Ng, Wei Xu

Abstract. In this paper, we study and analyze the regularized weighted total least squares (RWTLS) formulation. Our regularization of the weighted total least squares problem is based on the Tikhonov regularization. Numerical examples are presented to demonstrate the effectiveness of the RWTLS method.

Research supported in part by the National Natural Science Foundation of China and Shanghai Education Committee, Hong Kong RGC Grant Nos. 713/2P and 746/3P.

Yimin Wei: Department of Mathematics & Laboratory of Mathematics for Nonlinear Sciences, Fudan University, Shanghai 200433, People's Republic of China. ymwei@fudan.edu.cn
Naimin Zhang: Department of Applied Mathematics, Dalian University of Technology, Dalian 116024, P. R. China, and Information Engineering College, Dalian University, Dalian, P. R. China. nmzhang@dlut.edu.cn
Michael K. Ng: Department of Mathematics, The University of Hong Kong, Pokfulam Road, Hong Kong. mng@maths.hku.hk
Wei Xu: Department of Computing and Software, McMaster University, Hamilton, Ont., Canada, L8S 4L7.

1 Introduction

In this paper, we study the regularized weighted total least squares (RWTLS) formulation. Our regularization of the weighted total least squares problem is based on the Tikhonov regularization [1]. For the total least squares (TLS) problem [2], the truncation approach has already been studied by Fierro et al. [3]. In [4], Golub et al. considered the Tikhonov regularization approach for TLS problems. They derived a new regularization method in which stabilization enters the formulation in a natural way and which is able to produce regularized solutions with superior properties for certain problems in which the perturbations are large.

In the present work, we focus on RWTLS problems. We show that the RWTLS solution is closely related to the Tikhonov solution of the weighted least squares problem. Our paper is organized as follows. In Section 2, we introduce the RWTLS formulation and study its regularizing properties. Computational aspects are described in Section 3. In Section 4, numerical examples are presented to demonstrate the usefulness of the RWTLS method.

2 The Regularized Weighted Total Least Squares

A general version of Tikhonov's formulation for the linear weighted TLS problem takes the form [5]:

$$\min_{\tilde A,\tilde b,x} \|U[(A,b)-(\tilde A,\tilde b)]V\|_F \quad \text{subject to} \quad \tilde b = \tilde A x, \quad \|Dx\|_S \le \delta, \qquad (1)$$

where $U$, $V$, $W$ and $D$ are nonsingular matrices, $S$ is a symmetric positive definite matrix, the matrix $V$ is of the form

$$V = \begin{pmatrix} W & 0 \\ 0 & \gamma \end{pmatrix},$$

$\|y\|_S^2 = y^T S y$, and $\delta$ and $\gamma$ are non-zero constants. Weighted Tikhonov regularization has the equivalent formulation

$$\min_x \|UAWx - \gamma Ub\|_2 \quad \text{subject to} \quad \|Dx\|_S \le \delta, \qquad (2)$$

where $\delta$ is a positive constant. Problem (2) is a weighted LS problem with a quadratic constraint and, using the Lagrange multiplier formulation, the above Tikhonov regularization can be rewritten as follows:

$$L(\tilde A, x, \mu) = \|U[(A,b)-(\tilde A,\tilde b)]V\|_F^2 + \mu(\|Dx\|_S^2 - \delta^2), \qquad (3)$$

where $\mu$ is the Lagrange multiplier, zero if the inequality constraint is inactive. The solution $x_\delta$ to this problem is different from the solution $x_{WTLS}$ to

$$\min_{\tilde A,\tilde b,x} \|U[(A,b)-(\tilde A,\tilde b)]V\|_F \quad \text{subject to} \quad \tilde b = \tilde A x$$

when $\delta$ is less than $\|Dx_{WTLS}\|_S$. The two solutions $x_\delta$ and $\bar x_\delta$ of the two regularized problems (2) and (1) have an interesting relationship, presented in Theorem 1.
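To make the formulation concrete, here is a minimal Python/NumPy sketch (the paper's own experiments are in MATLAB; the function names are ours) that evaluates the weighted TLS misfit in (1), with $V = \mathrm{blkdiag}(W, \gamma)$, and the Lagrangian (3) for a candidate pair $(\tilde A, x)$:

```python
import numpy as np

def rwtls_objective(A, b, At, x, U, W, gamma):
    """Weighted TLS misfit ||U[(A,b) - (At, At@x)]V||_F with V = blkdiag(W, gamma)."""
    resid_A = U @ (A - At) @ W               # U (A - Atilde) W
    resid_b = gamma * (U @ (b - At @ x))     # gamma * U (b - btilde), with btilde = Atilde x
    return np.sqrt(np.linalg.norm(resid_A, 'fro')**2 + np.linalg.norm(resid_b)**2)

def lagrangian(A, b, At, x, mu, U, W, D, S, gamma, delta):
    """Lagrangian (3): squared misfit plus mu * (||Dx||_S^2 - delta^2)."""
    Dx = D @ x
    return rwtls_objective(A, b, At, x, U, W, gamma)**2 + mu * (Dx @ S @ Dx - delta**2)
```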

Before we show the properties of the solution to (3), we have the following results about matrix differentiation for the matrices $A$, $\tilde A$, $W$ and $U$.

Lemma 1.
(i) $\dfrac{\partial\,\mathrm{tr}(W^TA^TU^TU\tilde AW)}{\partial \tilde A} = U^TUAWW^T$
(ii) $\dfrac{\partial\,\mathrm{tr}(W^T\tilde A^TU^TUAW)}{\partial \tilde A} = U^TUAWW^T$
(iii) $\dfrac{\partial\,\mathrm{tr}(W^T\tilde A^TU^TU\tilde AW)}{\partial \tilde A} = 2U^TU\tilde AWW^T$
(iv) $\dfrac{\partial(b^TU^TU\tilde Ax)}{\partial \tilde A} = U^TUbx^T$
(v) $\dfrac{\partial(x^T\tilde A^TU^TUb)}{\partial \tilde A} = U^TUbx^T$
(vi) $\dfrac{\partial(x^T\tilde A^TU^TU\tilde Ax)}{\partial \tilde A} = 2U^TU\tilde Axx^T$

Proof. Since (i) is equivalent to (ii), (iv) is equivalent to (v), and (vi) is a special case of (iii), we only give the proofs of (i) and (iii). We first note that for any matrices $X = (x_{ij}) \in R^{m\times n}$, $Z = (z_{ij}) \in R^{p\times q}$, $G = (g_{ij}) \in R^{p\times m}$, $H = (h_{ij}) \in R^{n\times q}$, $C = (c_{ij}) \in R^{p\times n}$, $D = (d_{ij}) \in R^{m\times q}$, the following properties are equivalent (see Theorem 7.1 in [6]):

$$\frac{\partial Z}{\partial x_{ij}} = GE^{(mn)}_{ij}H + C(E^{(mn)}_{ij})^TD, \quad i = 1,\ldots,m,\ j = 1,\ldots,n,$$

and

$$\frac{\partial z_{ij}}{\partial X} = G^TE^{(pq)}_{ij}H^T + D(E^{(pq)}_{ij})^TC, \quad i = 1,\ldots,p,\ j = 1,\ldots,q,$$

where $E^{(kl)}_{ij}$ is a k-by-l zero matrix except for the (i, j)-entry, which is equal to one.

For (i), we let $Y = W^TA^TU^TU$, so that $\mathrm{tr}(Y\tilde AW) = \sum_i (Y\tilde AW)_{ii}$. Since $\frac{\partial (Y\tilde AW)}{\partial \tilde a_{ij}} = YE_{ij}W$, the equivalence gives $\frac{\partial (Y\tilde AW)_{ij}}{\partial \tilde A} = Y^TE_{ij}W^T$, and we obtain

$$\frac{\partial\,\mathrm{tr}(Y\tilde AW)}{\partial \tilde A} = \sum_i Y^TE_{ii}W^T = Y^TW^T.$$

The result follows.

For (iii), we find that

$$\frac{\partial [(U\tilde AW)^T(U\tilde AW)]}{\partial \tilde a_{ij}} = W^TE_{ij}^TU^TU\tilde AW + (U\tilde AW)^TUE_{ij}W,$$

and therefore

$$\frac{\partial [(U\tilde AW)^T(U\tilde AW)]_{ij}}{\partial \tilde A} = U^TU\tilde AWE_{ij}^TW^T + U^TU\tilde AWE_{ij}W^T.$$

It follows that

$$\frac{\partial\,\mathrm{tr}[(U\tilde AW)^T(U\tilde AW)]}{\partial \tilde A} = \sum_i \left(U^TU\tilde AWE_{ii}^TW^T + U^TU\tilde AWE_{ii}W^T\right) = 2U^TU\tilde AWW^T.$$
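Lemma 1 lends itself to a quick numerical sanity check. The sketch below (ours, not from the paper) compares identity (i) against a central-difference approximation of the gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 4
U = rng.standard_normal((m, m))
W = rng.standard_normal((n, n))
A = rng.standard_normal((m, n))
At = rng.standard_normal((m, n))   # the free variable Atilde

def f(At):
    # scalar function tr(W^T A^T U^T U Atilde W) from Lemma 1 (i)
    return np.trace(W.T @ A.T @ U.T @ U @ At @ W)

grad = U.T @ U @ A @ W @ W.T       # claimed derivative U^T U A W W^T
num = np.zeros((m, n))
h = 1e-6
for i in range(m):
    for j in range(n):
        P = np.zeros((m, n)); P[i, j] = h
        num[i, j] = (f(At + P) - f(At - P)) / (2 * h)
print(np.allclose(num, grad, atol=1e-4))   # True: identity (i) holds
```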

With Lemma 1, we have the following main theorem.

Theorem 1. The RWTLS solution to (1), with the inequality constraint replaced by equality, is a solution to the problem

$$(A^TU^TUA + \alpha W^{-T}W^{-1} + \beta D^TSD)\,x = A^TU^TUb, \qquad (4)$$

where the parameters $\alpha$ and $\beta$ are given by

$$\alpha = -\frac{\gamma^2\|b-Ax\|^2_{U^TU}}{1+\gamma^2\|x\|^2_{W^{-T}W^{-1}}}, \qquad \beta = \frac{\mu}{\gamma^2}\left(1+\gamma^2\|x\|^2_{W^{-T}W^{-1}}\right), \qquad (5)$$

and $\mu$ is the Lagrange multiplier in (3). The two parameters are related by

$$\beta\delta^2 = (Ub)^TU(b-Ax) + \frac{1}{\gamma^2}\alpha, \qquad (6)$$

and the weighted TLS residual satisfies $\|U[(A,b)-(\tilde A,\tilde b)]V\|_F^2 = -\alpha$.

Proof. We characterize the solution to (1) by setting the partial derivatives of $L(\tilde A, x, \mu)$ to zero. Using Lemma 1, the differentiation of $L(\tilde A, x, \mu)$ with respect to $\tilde A$ yields

$$U\tilde AWW^T - UAWW^T - \gamma rx^T = 0, \qquad (7)$$

where $r = \gamma U(b - \tilde Ax) = \gamma U(b - \tilde b)$. Moreover, the differentiation of $L(\tilde A, x, \mu)$ with respect to the entries of $x$ yields $-\gamma\tilde A^TU^Tr + \mu D^TSDx = 0$, or

$$(\gamma^2\tilde A^TU^TU\tilde A + \mu D^TSD)\,x = \gamma^2\tilde A^TU^TUb. \qquad (8)$$

By using (7) and (8), we have

$$A^TU^TUA = (U\tilde A - \gamma rx^TW^{-T}W^{-1})^T(U\tilde A - \gamma rx^TW^{-T}W^{-1})$$
$$= \tilde A^TU^TU\tilde A + \gamma^2\|r\|_2^2\,W^{-T}W^{-1}xx^TW^{-T}W^{-1} - \mu D^TSDxx^TW^{-T}W^{-1} - \mu W^{-T}W^{-1}xx^TD^TSD$$

and

$$\tilde A^TU^TUb = A^TU^TUb + \gamma W^{-T}W^{-1}x\,r^TUb.$$

By using the assumption that $\|Dx\|_S = \delta$ and gathering the above terms, we obtain (5) with

$$\alpha = \mu\delta^2 - \gamma^2\|r\|_2^2\|x\|^2_{W^{-T}W^{-1}} - \gamma r^TUb \quad\text{and}\quad \beta = \frac{\mu}{\gamma^2}\left(1+\gamma^2\|x\|^2_{W^{-T}W^{-1}}\right).$$
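To illustrate how (4) and (5) fit together computationally, the following Python fragment solves (4) for given $(\alpha, \beta)$ and, keeping $\beta$ fixed, updates $\alpha$ from (5) in a naive fixed-point loop. This loop is our illustration only; the paper's actual procedure (Section 3) fixes $\beta$ and computes the $\alpha$ of smallest absolute value satisfying (5), and no convergence claim is made here.

```python
import numpy as np

def solve_eq4(A, b, U, W, D, S, alpha, beta):
    """Solve (A^T U^T U A + alpha W^{-T} W^{-1} + beta D^T S D) x = A^T U^T U b."""
    UtU = U.T @ U
    Wi = np.linalg.inv(W)
    M = A.T @ UtU @ A + alpha * (Wi.T @ Wi) + beta * (D.T @ S @ D)
    return np.linalg.solve(M, A.T @ UtU @ b)

def rwtls_fixed_point(A, b, U, W, D, S, beta, gamma, iters=50):
    """Alternate between (4) and the alpha-update from (5), with beta fixed."""
    Wi = np.linalg.inv(W)
    G = Wi.T @ Wi
    x = np.linalg.lstsq(U @ A, U @ b, rcond=None)[0]   # weighted LS starting guess
    for _ in range(iters):
        r = U @ (b - A @ x)                            # r @ r = ||b - Ax||^2_{U^T U}
        alpha = -gamma**2 * (r @ r) / (1 + gamma**2 * (x @ G @ x))
        x = solve_eq4(A, b, U, W, D, S, alpha, beta)
    return x, alpha
```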

In order to obtain the expression for $\alpha$, we first rewrite $r$ as

$$r = \gamma U(b - \tilde Ax) = \gamma U(b - Ax - \gamma U^{-1}rx^TW^{-T}W^{-1}x) = \gamma U(b - Ax) - \gamma^2 r\|x\|^2_{W^{-T}W^{-1}},$$

from which we obtain the relation

$$r = \frac{\gamma U(b - Ax)}{1+\gamma^2\|x\|^2_{W^{-T}W^{-1}}}. \qquad (9)$$

From (8), we have

$$\mu = \frac{\gamma x^T\tilde A^TU^Tr}{x^TD^TSDx} = \frac{(\gamma Ub - r)^Tr}{\delta^2}. \qquad (10)$$

By inserting (9) and (10) into the expression for $\alpha$, we obtain (5). Equation (6) is proved by multiplying $\beta$ by $\delta^2$ and inserting (9) and (10). Finally, we note from (7) that

$$UAW - U\tilde AW = -\gamma rx^TW^{-T}, \qquad (UAW, \gamma Ub) - (U\tilde AW, \gamma U\tilde Ax) = (-\gamma rx^TW^{-T},\ r).$$

It follows that

$$\|U[(A,b)-(\tilde A,\tilde b)]V\|_F^2 = \|\gamma rx^TW^{-T}\|_F^2 + \|r\|_2^2 = \left(1+\gamma^2\|x\|^2_{W^{-T}W^{-1}}\right)\|r\|_2^2 = \frac{\gamma^2\|b-Ax\|^2_{U^TU}}{1+\gamma^2\|x\|^2_{W^{-T}W^{-1}}} = -\alpha.$$

The next theorem tells us the relationship between the RWTLS solution and the WTLS solution without regularization.

Theorem 2. For a given value of $\delta$, the RWTLS solution $x_{RWTLS}(\delta)$ is related to the solution $x_{WTLS}$ of the weighted total least squares problem without regularization as follows:

  $\delta < \|Dx_{WTLS}\|_S$:  $x_{RWTLS}(\delta) \ne x_{WTLS}$,  $\alpha < 0$ and $\partial\alpha/\partial\delta > 0$,  $\beta > 0$;
  $\delta \ge \|Dx_{WTLS}\|_S$:  $x_{RWTLS}(\delta) = x_{WTLS}$,  $\alpha = -\sigma_{\min}((UAW, \gamma Ub))^2$,  $\beta = 0$.

Here $\sigma_{\min}((UAW, \gamma Ub))$ is the smallest singular value of the matrix $(UAW, \gamma Ub)$.

Proof. For $\delta < \|Dx_{WTLS}\|_S$, the inequality constraint is active and therefore the Lagrange multiplier $\mu$ is positive, since this is a necessary condition for optimality; see [7]. By (5), we know that $\beta$ is positive. Again from (6), we find that when $\delta$ increases, $\alpha$ increases, and therefore the TLS residual $\|U[(A,b)-(\tilde A,\tilde b)]V\|_F^2 = -\alpha$ decreases. For $\delta \ge \|Dx_{WTLS}\|_S$, the Lagrange multiplier $\mu$ is equal to zero. The solution becomes the unconstrained minimizer $x_{WTLS}$. The result follows.
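The second row of the table can be cross-checked with the classical TLS construction. A hedged sketch: apply the standard SVD-based TLS solution to $(UAW, \gamma Ub)$ and map back through the change of variables $y = \gamma W^{-1}x$ implied by (1) (our reading; the mapping is an assumption):

```python
import numpy as np

def wtls_via_svd(A, b, U, W, gamma):
    """Unregularized WTLS from the SVD of (UAW, gamma*Ub); returns x and sigma_min."""
    m, n = A.shape
    C = np.hstack([U @ A @ W, gamma * (U @ b).reshape(-1, 1)])
    _, svals, Vt = np.linalg.svd(C)
    v = Vt[-1]                 # right singular vector of the smallest singular value
    y = -v[:n] / v[n]          # classical TLS solution in the transformed variables
    x = (W @ y) / gamma        # map back: y = gamma * W^{-1} x
    return x, svals[-1]

# Theorem 2: for delta >= ||D x_WTLS||_S one should observe alpha = -svals[-1]**2.
```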

For $\delta = \|Dx_{WTLS}\|_S$, the Lagrange multiplier is zero, and the solution becomes the unconstrained minimizer $x_{WTLS}$. The value $\alpha = -\sigma_{\min}((UAW, \gamma Ub))^2$ follows from Theorem 4.1 in [5]. The constraint is never again active for larger $\delta$, so the solution remains unchanged.

3 Computational method

To compute the RWTLS solutions, we have found it most convenient to avoid explicit use of $\delta$; instead we use $\beta$ as the free parameter, fixing its value and then computing the value of $\alpha$ that satisfies (5) and is smallest in absolute value. The corresponding value of $\delta$ can be computed from relation (6).

We discuss how to solve (4) efficiently for many values of $\alpha$ and $\beta$. We notice that the equation is equivalent to the augmented system

$$\begin{pmatrix} I_m & 0 & UA \\ 0 & I_p & \beta^{1/2}S^{1/2}D \\ (UA)^T & \beta^{1/2}D^TS^{1/2} & -\alpha W^{-T}W^{-1} \end{pmatrix}\begin{pmatrix} r \\ s \\ x \end{pmatrix} = \begin{pmatrix} Ub \\ 0 \\ 0 \end{pmatrix}, \qquad (11)$$

where $r = Ub - UAx$ and $s = -\beta^{1/2}S^{1/2}Dx$. Our algorithm is based on this formulation. We reduce $UA$ to $m \times n$ bidiagonal form $B$ by means of orthogonal transformations, $H^T(UA)K = B$, such that $C = J^T(S^{1/2}D)K$ retains the banded form. Using a sequence of Givens transformations, it is easy to obtain $J$, $H$ and $K$. Once $B$ and $C$ have been computed, we can recast the augmented system (11) in the following form:

$$\begin{pmatrix} I_n & 0 & B \\ 0 & I_p & \beta^{1/2}C \\ B^T & \beta^{1/2}C^T & -\alpha K^TW^{-T}W^{-1}K \end{pmatrix}\begin{pmatrix} H^Tr \\ J^Ts \\ K^Tx \end{pmatrix} = \begin{pmatrix} H^TUb \\ 0 \\ 0 \end{pmatrix}. \qquad (12)$$

Since $\alpha$ changes more frequently than $\beta$ in our approach, we now use Givens rotations to annihilate $\beta^{1/2}C$ against $B$ by means of Elden's algorithm [8], which can be represented as

$$\begin{pmatrix} B \\ \beta^{1/2}C \end{pmatrix} = G\begin{pmatrix} \bar B \\ 0 \end{pmatrix} = \begin{pmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{pmatrix}\begin{pmatrix} \bar B \\ 0 \end{pmatrix},$$

where $\bar B$ is again bidiagonal.
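The equivalence between (4) and (11) is straightforward to verify numerically. A sketch (names ours; S_half stands for $S^{1/2}$) that assembles the augmented system:

```python
import numpy as np

def augmented_system(A, b, U, W, D, S_half, alpha, beta):
    """Assemble the augmented system (11); the last n solution entries solve (4)."""
    UA = U @ A
    m, n = UA.shape
    p = D.shape[0]
    Wi = np.linalg.inv(W)
    SD = S_half @ D                                  # S^{1/2} D
    M = np.block([
        [np.eye(m),         np.zeros((m, p)),     UA                   ],
        [np.zeros((p, m)),  np.eye(p),            np.sqrt(beta) * SD   ],
        [UA.T,              np.sqrt(beta) * SD.T, -alpha * (Wi.T @ Wi) ],
    ])
    rhs = np.concatenate([U @ b, np.zeros(p + n)])
    return M, rhs   # then x = np.linalg.solve(M, rhs)[m + p:]
```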

When we insert this $G$ into the augmented system (12), it becomes

$$\begin{pmatrix} I_n & 0 & \bar B \\ 0 & I_p & 0 \\ \bar B^T & 0 & -\alpha K^TW^{-T}W^{-1}K \end{pmatrix}\begin{pmatrix} \hat r \\ \hat s \\ K^Tx \end{pmatrix} = \begin{pmatrix} G_{11}^TH^TUb \\ G_{12}^TH^TUb \\ 0 \end{pmatrix},$$

where $\hat r = G_{11}^TH^Tr + G_{21}^TJ^Ts$ and $\hat s = G_{12}^TH^Tr + G_{22}^TJ^Ts$. The middle block row is now decoupled, and we obtain

$$\begin{pmatrix} I_n & \bar B \\ \bar B^T & -\alpha K^TW^{-T}W^{-1}K \end{pmatrix}\begin{pmatrix} \hat r \\ K^Tx \end{pmatrix} = \begin{pmatrix} G_{11}^TH^TUb \\ 0 \end{pmatrix}.$$

Finally, we apply a symmetric perfect shuffle reordering $n+1, 1, n+2, 2, n+3, 3, \ldots, 2n, n$ to the rows and columns of the above matrix to obtain a symmetric, tridiagonal, indefinite matrix of order $2n$:

$$\begin{pmatrix} -\alpha & \hat b_{11} & & & \\ \hat b_{11} & 1 & \hat b_{12} & & \\ & \hat b_{12} & -\alpha & \hat b_{22} & \\ & & \hat b_{22} & 1 & \ddots \\ & & & \ddots & \ddots \end{pmatrix},$$

and we can solve this permuted system by a general tridiagonal solver.

4 Numerical Examples

In this section, we present numerical results that illustrate the usefulness of the RWTLS method. Our computations are carried out in MATLAB. We consider an example from [4]. This test problem is a discretization by means of Gauss-Laguerre quadrature of the inverse Laplace transform

$$\int_0^\infty \exp(-st)f(t)\,dt = \frac{1}{s} - \frac{1}{s+4/25}, \qquad s > 0.$$

The exact solution $f(t) = 1 - \exp(-4t/25)$ is known. This example has been implemented in the function ilaplace(n, 2) in Hansen's regularization toolbox [9].
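A rough Python sketch of such a Gauss-Laguerre discretization is given below. The collocation points s_j are a hypothetical choice of ours; the paper relies on ilaplace(n, 2) from Hansen's toolbox for the actual test matrix.

```python
import numpy as np

def laplace_test_problem(n):
    """Discretize int_0^inf exp(-s t) f(t) dt = 1/s - 1/(s + 4/25) by n-point
    Gauss-Laguerre quadrature; the points s_j below are hypothetical."""
    t, w = np.polynomial.laguerre.laggauss(n)        # nodes/weights for e^{-t} weight
    s = np.linspace(0.5, 10.0, n)                    # assumed collocation points
    # int_0^inf e^{-s t} f(t) dt = int_0^inf e^{-t} [e^{-(s-1) t} f(t)] dt
    A = w[None, :] * np.exp(-np.outer(s - 1.0, t))
    b = 1.0 / s - 1.0 / (s + 4.0 / 25.0)             # exact right-hand side
    f_exact = 1.0 - np.exp(-4.0 * t / 25.0)          # exact solution at the nodes
    return A, b, f_exact
```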

In the tests, the size of the coefficient matrix is 64, and the perturbed part of the coefficient matrix is $E$, whose elements are generated from a normal distribution with zero mean and unit standard deviation. The perturbed right-hand side is generated as

$$b = \left(A + \frac{\sigma}{\|E\|_F}E\right)x + \frac{\sigma}{\|e\|_2}e,$$

where the elements of $e$ are also drawn from a normal distribution with zero mean and unit standard deviation, and $x$ is the accurate solution.

In Figure 1, we show the results for the different values $\sigma = 0.1, 0.01, 0.001$. The solid line is the exact solution (that of the discretized problem), the line with '*' is the solution from RWTLS, and the dotted line is the solution by RTLS (i.e., the regularized TLS solution without the weighting). In the RWTLS method, we select $U$ to be a diagonal matrix whose first 16 diagonal elements are $1/\sigma$ and whose last 16 elements are $3\sigma$; the latter are required to be not larger than 0.1, otherwise we divide them by 10 until the condition is satisfied. The other diagonal elements are equal to 1. The first half of the diagonal elements of $W$ are ones while the last half are equal to $\sigma$. The matrix $D$ is the identity matrix, and we let $\gamma = 1$. In each case, the optimal regularization parameter $\mu$ is selected. We see from the figures that the solutions provided by the RWTLS method are better than those by the RTLS method.

One direction for future research is to study how to choose the weight $W$ without knowing the noise. We expect that some optimization models could be incorporated into the objective function so that the weighting can be determined by the optimization process; see, for instance, [10].

Figure 1: Numerical solutions for the different methods: (a) σ = 0.1; (b) σ = 0.01; (c) σ = 0.001.
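Under our reading, the data generation and the diagonal weights described above translate into the following sketch (helper names and the random seed are ours):

```python
import numpy as np

def perturb(A, x, sigma, rng=None):
    """Noisy data of Section 4: b = (A + sigma*E/||E||_F) x + sigma*e/||e||_2."""
    rng = np.random.default_rng(0) if rng is None else rng
    E = rng.standard_normal(A.shape)
    e = rng.standard_normal(A.shape[0])
    A_noisy = A + sigma * E / np.linalg.norm(E, 'fro')
    b = A_noisy @ x + sigma * e / np.linalg.norm(e)
    return A_noisy, b

# Diagonal weights for the 64-by-64 test problem, as described in the text:
sigma, n = 0.1, 64
u = np.ones(n)
u[:16] = 1.0 / sigma
tail = 3.0 * sigma
while tail > 0.1:            # "not larger than 0.1, otherwise divide by 10"
    tail /= 10.0
u[-16:] = tail
U = np.diag(u)
W = np.diag(np.concatenate([np.ones(n // 2), sigma * np.ones(n // 2)]))
D = np.eye(n)
gamma = 1.0
```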

References

[1] H. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Netherlands, 1996.
[2] S. Van Huffel and J. Vandewalle, The Total Least Squares Problem: Computational Aspects and Analysis, SIAM, Philadelphia, 1991.
[3] R. Fierro, G. Golub, P. Hansen and D. O'Leary, Regularization by truncated total least squares, SIAM J. Sci. Comput., 18 (1997).
[4] G. Golub, P. Hansen and D. O'Leary, Tikhonov regularization and total least squares, SIAM J. Matrix Anal. Appl., 21 (1999), pp. 185-194.
[5] G. Golub and C. Van Loan, An analysis of the total least squares problem, SIAM J. Numer. Anal., 17 (1980).
[6] G. Rogers, Matrix Derivatives, Lecture Notes in Statistics 2, New York, 1980.
[7] S. Nash and A. Sofer, Linear and Nonlinear Programming, McGraw-Hill, New York, 1996.
[8] L. Elden, Algorithms for regularization of ill-conditioned least squares problems, BIT, 17 (1977).
[9] P. Hansen, Regularization tools: a Matlab package for analysis and solution of discrete ill-posed problems, Numer. Algorithms, 6 (1994).
[10] H. Fu and J. Barlow, A regularized total least squares algorithm for high resolution image reconstruction, to appear in Linear Algebra and its Applications.
