1. Introduction. We consider the system of saddle point linear systems

Size: px
Start display at page:

Download "1. Introduction. We consider the system of saddle point linear systems"

Transcription

1 VALIDATED SOLUTIONS OF SADDLE POINT LINEAR SYSTEMS TAKUMA KIMURA AND XIAOJUN CHEN Abstract. We propose a fast verification method for saddle point linear systems where the (, block is singular. The proposed verification method is based on an algebraic analysis of a block diagonal preconditioner and rounding mode controlled computations. Numerical comparison of several verification methods with various block diagonal preconditioners is given. Key words. saddle point matrix, numerical verification, block preconditioning AMS subject classifications. 65F05, 65G05, 65G0. Introduction. We consider the system of saddle point linear systems ( ( ( A B x c (. Hu = b B T =, 0 y d where A is an n n symmetric positive semidefinite matrix, B is an n m matrix, with m n. We denote l = n + m. We assume that the coefficient matrix H is nonsingular, which implies that B has full-column rank. In recent years, saddle point problems have received considerable attention. A large amount of work has been devoted to developing efficient algorithms for solving saddle point problems. In a recent comprehensive survey [], Benzi, Golub and Liesen discussed a large selection of numerical methods for saddle point problems. We are aware that it is very important to verify the accuracy of approximate solutions obtained by the numerical methods. However, there is little discussion on validated solution of saddle point problems by taking all possible effects of rounding errors into account. Standard validation methods for the solution of a system of linear equations use an approximation of the inverse of the coefficient matrix. These methods are not efficient for the saddle point problem (. when the dimension l is large or the condition number of H is large, due to the indefiniteness of H and the singularity of A. In [], a numerical validation method is proposed for verifying the accuracy of approximation solutions of the saddle point problem (. without using an approximation of the inverse H, under the assumption that A is symmetric positive definite. The method uses the special structure of the saddle point problem to represent the variable x by the inverse of A and the variable y. For the case that A is singular and the size of the problem is large, it is a significant challenge to compute a rigorous upper bound for the norm H without using an approximation of the inverse H. In this paper, we present a fast method to compute rigorous error bounds for the saddle point problem (. where A is symmetric positive semidefinite. In particular, we present a fast method to compute a constant α such that (. u u α b Hu for u R l, The first version of this paper was completed while the second author was working in Hirosaki University. This work was supported partly by the Scientific Research Grant-in-Aid from Japan Society for Promotion of Science. Department of Mathematical Sciences, Hirosaki University, Hirosaki, , Japan. (h07gs80@stu.hirosaki-u.ac.jp. Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, P.R. China. (maxjchen@polyu.edu.hk. This author s work is supported partly by the Research Grants Council of Hong Kong.

2 T. KIMURA AND X. CHEN where u is the exact solution of (.. This method is based on an algebraic analysis of a block diagonal preconditioner for saddle point systems studied in a recent paper [6] by Golub, Greif and Varah. Instead of approximating the inverse of the l l indefinite matrix H, we use approximations of inverses of two symmetric positive definite matrices in R n n and R m m to define the constant α in the error bound (.. Moreover, we present fast methods to estimate upper bounds for the norms of inverses of the two symmetric positive definite matrices based on fast validated matrix computation by Oishi and Rump [8]. In section, we define the error constant α. In section 3, we discuss how to compute an upper bound of α efficiently and accurately by taking all possible effects of rounding errors into account. In section 4, we compare our verification method with the Krawczyk method [3], the LU decomposition method [8], verifylss function of INTLAB, and a block component verification method proposed in [] using examples from CUTEr [6], optimal surface fitting [3, 0], mixed finite element discretization of Stokes equations [] and image restoration [5, 5].. A new error bound. Let W be an m m symmetric positive semidefinite matrix such that M(W = A + BW B T is a symmetric positive definite matrix. Note that BW B T is singular for any symmetric positive definite matrix W when m < n. However, if W is symmetric positive definite, we can show that M(W is symmetric positive definite under the condition that A is positive semidefinite and H is nonsingular. To see it, let x 0 be a solution of M(W x = 0. Then we have x T A x + x T BW B T x = 0. Since A and BW B T are positive semidefinite, we obtain which implies x T A x = 0 and x T BW B T x = 0, A x = 0 and B T x = 0. Therefore, we find that Hz = 0 with z = ( x, 0. This contradicts that H is nonsingular. Recently, Golub, Greif and Varah [6] performed an algebraic study of the block diagonal positive definite preconditioner (. M(W = ( M(W 0 0 B T M(W B They showed that M(W has the attractive property that the eigenvalues of the associated preconditioned matrix M(W H are bounded in a small range. Lemma.. [6] The eigenvalues of the preconditioned matrix M(W H are bounded within the two intervals [, ] [ 5, + ] 5. Lemma. makes possible for us to present a rigorous error bounds for the saddle point problem (...

3 VALIDATED SOLUTIONS OF SADDLE POINT LINEAR SYSTEMS 3 Theorem.. Let u be the exact solution of (.. For any u R l, we have u u max ( M(W, M(W (B T B (. b Hu. 5 Proof. Obviously, we have, u u H b Hu. Now, we use Lemma. to give an upper bound of H. Let L be a nonsingular matrix such that LL T = M(W and let (.3 S = L HL T. Then the inverse H can be given as H = L T S L. Since H and S are symmetric, we have H = max v T L T S L v v R l v T v v 0 = max v T L T S L v v R l v T L T L v v 0 (.4 max w R l w 0 w T S w w T w max v R l v 0 = S M(W. v T L T L v v T v v T M(W v v T v From (.3 and LL T = M(W, we have L T M(W HL T = S. Hence, S and M(W H have same eigenvalues. By Lemma., the eigenvalues of S are bounded within the two intervals [, ] [ 5, + ] 5. Hence all eigenvalues of S satisfy λ i 5, i =,,, l. From (.4, we obtain H 5 M(W. Moreover, from (., we have M(W max( M(W, (B T M(W B max( M(W, M(W (B T B,

4 4 T. KIMURA AND X. CHEN where the last inequality uses λ min (B T M(W B = min y R m y 0 (M(W By, By (B T By, y (By, By (y, y λ min (M(W λ min (B T B. Theorem. shows that an upper bound of the inverse H can be obtained by computing upper bounds for the norm of inverses of two symmetric positive definite matrices with sizes of n n and m m. When n and/or m are large, the number of its flops is much less than the methods working on the (n + m (n + m matrix H. For example, the LU decomposition method for (. requires O((n + m 3 flops, but estimating M(W and (B T B only requires O(n 3 + O(m 3 flops, which will save O(n m + nm flops computational cost. Moreover, since the two matrices are symmetric, we can replace by for the matrix norm in (. and have u (.5 u max( M(W, M(W (B T B b Hu. 5 In general, (.5 is easier to implement than (.. 3. Verification methods. When we apply Theorem. and other verification methods to verify the accuracy of an approximate solution of (. on a computer, it is necessary to consider rounding error. The IEEE 754 arithmetic standard [4] defines the rounding modes for double precision floating point numbers. Since Intel s CPU follows this standard, the rounding modes can be used on most PC s and workstations. We use rounding downwards, rounding upwards and rounding nearest to compute rigorous error bounds for (.5. We also apply these rounding modes to the following three verification methods. Krawczyk method [, 3] K(U := u R(Hu b + (I RH(U u, K(U int(u u K(U u u radius(u, where R is an approximate inverse of H and U is an interval vector whose center is u. LU decomposition method [8] Let LU be an approximate LU factorization of H, that is, H LU. u u U L (b Hu U L H I Block component verification method [] Let u = (x, y, r = Ax + By c and r = B T x d. u u max( x x, y y, x x A ( r + B y y, y y A (B T B ( r + BB T A r. verifylss of INTLAB [4] U = verifylss(h, b u u radius(u, for u U.

5 VALIDATED SOLUTIONS OF SADDLE POINT LINEAR SYSTEMS 5 Note that when A is singular, the block component verification method cannot be applied to (. directly, but to the equivalent system ( ( ( A + BW B T B x c + BW d (3. B T =. 0 y d Obviously, the error bounds depend on the choice of W. We tested the error bounds with various W. In this paper, we consider two types of choices: and W (γ = γ/ B T B I W (γ = γ(b T B if A is singular. We set W = 0 if A is nonsingular. 4. Numerical experiment. The numerical testing was carried out on a IBM PC (3.0 GHz Pentium 4 processor, GB of memory with the use of MATLAB 7.0 and INTLAB (Version 5.4 [, 4]. We use the function setround in INTLAB [4] to compute the error bound. The function setround allows the rounding mode of the processor to be changed between round nearest (setround(0, round down (setround(-, round up (setround( and round towards zero (setround(. To compute M(W, we first use setround(0 to compute an approximate inverse R of M(W. Next we use S=intval(R beta=abss(norm(s,inf/(-norm(s*m(w-i to get an upper bound β for M(W as M(W R RM(W I β. Similarly, we apply the function setround to (.5 to get an upper bounds Γ by taking all possible effects of rounding errors into account such that (4. u u Γ. We compare the error bound (.5 with the Krawczyk method, the LU decomposition method, the block component verification method and the function verifylss of INTLAB using examples from CUTEr [6], optimal surface fitting [3, 0], mixed finite element discretization of Stokes equations [] and image restoration [5, 5]. Example 4.: CUTEr matrices. We used two test problems, genhs8 and gouldqp3 from CUTEr collection [7], which were used in [6]. The genhs8 is a (n + m (n + m saddle point matrix, where A is an n n tridiagonal matrix with, 4, along its superdiagonal, main diagonal and subdiagonal, respectively, except A, = A n,n =. The rank of A is n. B is n m with values,, 3 along its main diagonal, first subdiagonal and second subdiagonal, respectively. The gouldqp3 is a (n + m (n + m saddle point matrix, where A is an n n matrix with rank(a = n. We set the exact solution u and right hand side vector b as u = (,, T, b = Hu.

6 6 T. KIMURA AND X. CHEN Fig. 4.. Error bounds (.5 for genhs8 with different W. (n,m=(500,498, A 8.0, B T B 36.0, (B T B Table 4. The error bounds for Example 4., genhs8. rank(a= n. (n, m (0,8 (500,498 (500,498 (3000,998 cond(h u u.e-6.e-6.e e-6 b Hu.46e-4.0e e-3.03e- (.5 W ( 5.67e e- 7.95e-.33e- [3.6e-5] [.859] [48.093] [ ] W (.46e-3.4e-.45e- 7.e- [7.e-5] [.983] [48.906] [0.07] Block W ( 6.e e e-09.8e-08 component [3.6e-5] [.953] [89.373] [ ] W ( 3.96e- 3.8e e-0.88e-09 [.e-4] [.078] [90.85] [05.536] LU 5.7e e e-6 Fail [3.6e-5] [4.03] [ ] Krawczyk.3e-5.3e e-5 Fail [0.03] [6.65] [66.894] verifylss 3.33e e e-6 Fail [0.06] [4.795] [09.87] Figure shows the error bounds for genhs8 with (n, m = (500, 498 as γ changes from min( A, / A to max( A, / A. In Tables -, we report numerical results with W ( and W (. Example 4.: Surface fitting problem. Let Ω R be a convex bounded domain, p i = (p i, p i Ω be the measurement points and q i be the corresponding real values, (i = k. We consider the following surface fitting problem [3, 0] (4. min k (f(p i q i + µ f H (Ω, i= over all functions f in the Sobolev space H (Ω. Here µ is a fixed parameter. In Tables -5, Fail means out of the memory; [ ] shows CPU time(sec..

7 VALIDATED SOLUTIONS OF SADDLE POINT LINEAR SYSTEMS 7 Table 4. The error bounds for Example 4., gouldqp3. rank(a= n. (n, m (699,349 (999,999 (999,499 cond(h u u.e-6.e e-6 b Hu 8.9e-4.40e e-3 (.5 W ( 5.63e- 9.54e-.58e- [.406] [55.34] [94.385] W ( 3.6e- 6.3e-.66e- [.500] [8.605] [33.90] Block W ( 4.3e e-09.97e-08 component [.468] [55.95] [95.433] W (.94e e e-09 [.546] [8.78] [34.996] LU 4.75e e-6 Fail [9.40] [ ] Krawczyk 3.37e e-5 Fail [7.760] [79.95] verifylss 3.33e e-6 Fail [5.968] [8.4] We apply a finite element approximation with uniform triangular meshes to (4., and obtain a convex optimization problem in R 4m+ [3] (4.3 min Nx + µe k q + µ(x T Gx + x T 3 Gx 3 s.t. Gx = B x + B x 3 e T k Nx = 0, where N R k m, B, B, G R m m. Here G is a symmetric positive semidefinite matrix. The problem (4.3 is equivalent to the following saddle point system N T N G N T e k x N T (q µe k µg B T x µg B T x 3 G B B y = 0 ( , e T k N y 0 where y and y are the Lagrange multipliers. In many applications, k < m, which results that the matrix N T N is singular, i.e. the (, block of the saddle point matrix in (4.4 is singular. In this example, we use real data of average temperature 006 in Aomori region from the Japan Meteorological Agency. We choose k = 47, n = 3m +, rank(a= m + 47, and set µ =.0e-5 in (4.4. Numerical results of error estimate with various m, n are given in Table 4.3. Example 4.3: The Stokes equation. We consider the saddle point system arising from the mixed finite element discretization of the stationary Stokes equation: (4.5 ν u + p = ϕ in Ω, div u = 0 in Ω, u = 0 on Ω,

8 8 T. KIMURA AND X. CHEN Table 4.3 The error bounds for Example 4., surface fitting problem. rank(a= m (n, m (766,55 (450,483 (350,783 (3070,03 cond(h 7.e+05.3e+06.e+06.56e+06 b Hu.4e-4.6e-4.43e-4.54e-4 (.5 W ( 4.46e-09.0e e e-08 [.749] [6.54] [66.3] [45.3] W ( 4.85e e e-08.7e-08 [.8] [6.59] [68.9] [50.388] Block W ( 4.4e e e+03.04e+04 component [.796] [6.30] [66.45] [45.95] W ( 4.5e+0.0e e+0.7e+04 [.859] [6.763] [68.49] [5.] LU 8.46e e-5 3.5e-4 Fail [9.4] [.70] [54.85] Krawczyk.43e-3.e e-3 Fail [7.43] [7.809] [98.3] verifylss.93e-4 4.7e e-4 Fail [5.047] [9.984] [4.56] where Ω = (0, (0,, Ω is the boundary of Ω, ν > 0 is the kinematic viscosity coefficient, ϕ = (ϕ, ϕ is a given force field, u : Ω R is a velocity field and p : Ω R is a pressure field. We apply a mixed finite element approximation with uniform triangular meshes, and obtain a saddle point linear system (. where the velocity is approximated by the standard piecewise quadratic basis functions, and the pressure is approximated by piecewise linear basis functions. In this example, A is nonsingular and A 0. We use Theorem in [9] to get a upper bound of A, i.e. A w s where w is an approximate solution of Aw = e n, s = A w e n, and e n = (,,, T R n. In this example, (B T B = O(h, where h is the mesh size. To avoid using (B T B for small h, we consider a preconditioned system. Let L be an m m nonsingular matrix such that L T L B T B. Let ( I P = ( L, H = P T A B HP = B T 0 B = BL, M(W = A + BW BT. Applying Theorem. to the preconditioned system we obtain HP u = b,, b = P T b,

9 VALIDATED SOLUTIONS OF SADDLE POINT LINEAR SYSTEMS 9 (4.6 u u P P (u u P P (u u P max ( M(W, M(W ( B T B P T (b Hu, 5 max(, L 5 max ( M(W, M(W L(B T B L T P T (b Hu. We call (4.6 a preconditioned error bound. From L (B T B = O(h, the preconditioned error bound is expected to be shaper than (.5 for the Stokes equation. Similarly, we can get a preconditioned block component verification method as Method in []. Numerical results of preconditioned error bounds for Example 4.3 with ν = are given in Table 4.4. Table 4.4 The preconditioned error bounds for Example 4.3, Stokes equation. rank(a= n. (n, m (88,44 (738,400 (95,96 (040,704 cond(h 4.e+05.e e e+06 b Hu.58e-5.56e e-5 7.3e-5 (4.7.0e- 9.4e-.4e-09 5.e-09 [0.078] [.00] [9.3] [49.383] Preconditioned 7.4e- 6.06e e e-08 block component [0.078] [.00] [9.95] [49.35] LU 7.04e-6.53e-5 Fail Fail [5.780] [ ] Krawczyk.00e e-4 Fail Fail [.56] [50.39] verifylss.e-6.e-6 Fail Fail [5.0] [4.4] Example 4.4: Image restoration. Suppose the discretized scenes have p = p p pixels. Let f R p, g R q be the underlying image and the observed image, respectively. Let H R q p be the corresponding blurring matrix of block Toeplitz with Toeplitz blocks (BTTB. Restoration of f is an ill-conditioned problem. We consider the linear least squares problem with Tikhonov s regularization [5, 5] (4.7 min f Hf g + α Df where α is a regularization parameter, D R (p p p p is a regularization matrix of a first order finite difference operator D = ( (p p I p p I p p (p p with =......

10 0 T. KIMURA AND X. CHEN Problem (4.7 can be rewritten as a quadratic programming (4.8 where x = ( f v min xt Ax + c T x s.t. B T x = 0 ( H, A = T H 0 R (3p p p (3p p p, 0 αi ( H T g. 0 B T = ( D I R (p p p (3p p p, c = The optimal condition for (4.8 is a saddle point problem ( ( ( A B x c B T =, 0 y 0 where y is the Lagrange multiplier vector for the constraints B T x = 0. In this problem, A is a positive semidefinite matrix and B has full-column rank. We generate an original image of the cameraman as shown in Figure 4. (a. The image is blurred by a Gaussian function h(i, j = e (i +j /8, truncated such that the function has a support of 7 7, and then pixels are contaminated by Gaussian noise with the standard deviation of The blurred and noisy image is shown in Figure 4. (b. We solve the saddle point problem to find a restored image, which is shown in Figure 4. (c. In the saddle point matrix, A has rank(a=p p p + [p /][p /]. Here [ ] denotes the nearest integer. Numerical results of error bounds for the restored image are given in Table 4.5. (a (b (c Fig. 4.. (a original image, (b observed image, (c restored image, PSNR=9.8 db. To end this section, we use a 3 3 block ill-conditioned saddle matrix to show that the error bound (.5 is tight. Example 4.5. We consider the following problem ɛi 0 0 H = 0 0 B, b = b b, 0 B T 0 b 3

11 VALIDATED SOLUTIONS OF SADDLE POINT LINEAR SYSTEMS Table 4.5 The error bounds for Example 4.4, image restoration. (n, m (60,760 (85,00 (465,64 (3400,44 cond(h 3.5e e e e+08 b Hu.3e-4.80e e e- (.5 W ( 3.0e e e e-06 [3.564] [50.798] [4.06] [353.68] W ( 3.0e e e e-06 [3.03] [48.687] [9.068] [30.70] Block W ( 5.4e e+03.3e+04.5e+04 component [3.798] [5.36] [5.3] [355.8] W (.e+04.74e+04.5e e+04 [3.03] [49.65] [0.6] [349.4] LU.05e-09.56e-09 Fail Fail [7.660] [48.395] Krawczyk K(U U K(U U Fail Fail verifylss 3.46e-4 3.9e-4 Fail Fail [9.67] [.698] where b, b, b 3 R m, I, B R m m, and B is nonsingular. It is easy to find the inverse and the solution H ɛ = I B T, u ɛ = b B T b 3. 0 B 0 B b Consider 0 < ɛ and B. The condition number of H satisfies H H ɛ B. When ɛ 0, the condition number will go to. Using (. or (.5 with W = B B T, we obtain M(W = ( ɛi 0 0 I and (4.9 u u ɛ( 5 b Hu for 0 < ɛ / (B T B. Furthermore, the equality holds in (4.9 when B = ɛ( 5 I and u = ( ɛ b, u, u Final remark. Using the algebraic analysis of a block diagonal preconditioner in [6], we proposed a fast verification method for saddle point linear systems where the (, block may be singular. The method was implemented by using INTLAB [4] and taking all possible effects of rounding errors into account. Numerical results show that the method is efficient.

12 T. KIMURA AND X. CHEN Acknowledgements. We are grateful to the two referees for their very helpful comments and suggestions. We wish to thank Prof. T. Yamamoto for carefully reading this paper and giving us helpful comments. REFERENCES [] M. Benzi, G. H. Golub and J. Liesen, Numerical solution of saddle point problems, Acta Numerica, 4(005, pp [] X. Chen and K. Hashimoto, Numerical validation of solutions of saddle point matrix equations, Numer. Linear Algebra Appl., 0(003, pp [3] X. Chen and T. Kimura, Finite element surface fitting for bridge management, Inter. J. Information Technology & Decision Making, 5(006, pp [4] ANSI/IEEE , Standard for Binary Floating-Point Arithmetic, 985. [5] H. Fu, M.K. Ng, M. Nikolova and J.L. Barlow, Efficient minimization methods of mixed l-l and l-l norms for image estoration, SIAM J. Sci. Comput., 7(006, pp [6] G. H. Golub, C. Greif and J. M. Varah, An algebraic analysis of a block diagonal preconditioner for saddle point system, SIAM J. Matrix Anal. Appl., 7(006, pp [7] N.I.M. Gould, D. Orban and Ph.L. Toint, CUTEr, a Constrained and Unconstrained Testing Environment, revisited, [8] S. Oishi and S. M. Rump, Fast verification of solutions of matrix equations, Numer. Math., 90(00, pp [9] T. Ogita, S. Oishi and Y. Ushiro, Fast verification of solutions for sparse monotone matrix equations, Computing, Supplement 5 (00, pp [0] S. Roberts, M. Helgland and I. Altas, Approximation of a thin plate spline smoother using continuous piecewise polynomial functions, SIAM J. Numer. Anal., 4(003, pp [] S.M.Rump, Kleine Fehlerschranken bei Matrixproblemen, Ph.D thesis, Universität Karlsruhe, 980. [] S.M. Rump, Verification methods for dense and sparse system of equations, in J. Herzberger, editor, Topics in Validated Computations-Studies in Computational Mathematics, Elsevier, Amsterdam, 994, pp [3] S.M. Rump, Self-validating methods, Linear Alg. Appl. 34(00, pp [4] S.M. Rump, Interval computations with INTLAB, Brazilian Electronic Journal on Mathematics of Computation,, 999, [5] A. Tikhonov and V. Arsenin, Solution of Ill-Posed Problems, Winston, Washington, DC, 977.

Verified Solutions of Sparse Linear Systems by LU factorization

Verified Solutions of Sparse Linear Systems by LU factorization Verified Solutions of Sparse Linear Systems by LU factorization TAKESHI OGITA CREST, Japan Science and Technology Agency JST), and Faculty of Science and Engineering, Waseda University, Robert J. Shillman

More information

Fast Verified Solutions of Sparse Linear Systems with H-matrices

Fast Verified Solutions of Sparse Linear Systems with H-matrices Fast Verified Solutions of Sparse Linear Systems with H-matrices A. Minamihata Graduate School of Fundamental Science and Engineering, Waseda University, Tokyo, Japan aminamihata@moegi.waseda.jp K. Sekine

More information

FAST VERIFICATION OF SOLUTIONS OF MATRIX EQUATIONS

FAST VERIFICATION OF SOLUTIONS OF MATRIX EQUATIONS published in Numer. Math., 90(4):755 773, 2002 FAST VERIFICATION OF SOLUTIONS OF MATRIX EQUATIONS SHIN ICHI OISHI AND SIEGFRIED M. RUMP Abstract. In this paper, we are concerned with a matrix equation

More information

Convergence of Rump s Method for Inverting Arbitrarily Ill-Conditioned Matrices

Convergence of Rump s Method for Inverting Arbitrarily Ill-Conditioned Matrices published in J. Comput. Appl. Math., 205(1):533 544, 2007 Convergence of Rump s Method for Inverting Arbitrarily Ill-Conditioned Matrices Shin ichi Oishi a,b Kunio Tanabe a Takeshi Ogita b,a Siegfried

More information

ON THE GENERALIZED DETERIORATED POSITIVE SEMI-DEFINITE AND SKEW-HERMITIAN SPLITTING PRECONDITIONER *

ON THE GENERALIZED DETERIORATED POSITIVE SEMI-DEFINITE AND SKEW-HERMITIAN SPLITTING PRECONDITIONER * Journal of Computational Mathematics Vol.xx, No.x, 2x, 6. http://www.global-sci.org/jcm doi:?? ON THE GENERALIZED DETERIORATED POSITIVE SEMI-DEFINITE AND SKEW-HERMITIAN SPLITTING PRECONDITIONER * Davod

More information

1. Introduction. Consider the large and sparse saddle point linear system

1. Introduction. Consider the large and sparse saddle point linear system SIAM J MATRIX ANAL APPL Vol 7, No 3, pp 779 79 c 006 Society for Industrial and Applied Mathematics AN ALGEBRAIC ANALYSIS OF A BLOCK DIAGONAL PRECONDITIONER FOR SADDLE POINT SYSTEMS GENE H GOLUB, CHEN

More information

NUMERICAL ENCLOSURE OF SOLUTIONS FOR TWO DIMENSIONAL DRIVEN CAVITY PROBLEMS

NUMERICAL ENCLOSURE OF SOLUTIONS FOR TWO DIMENSIONAL DRIVEN CAVITY PROBLEMS European Congress on Computational Methods in Applied Sciences and Engineering ECCOMAS 24 P. Neittaanmäki, T. Rossi, K. Majava, and O. Pironneau (eds.) O. Nevanlinna and R. Rannacher (assoc. eds.) Jyväskylä,

More information

For δa E, this motivates the definition of the Bauer-Skeel condition number ([2], [3], [14], [15])

For δa E, this motivates the definition of the Bauer-Skeel condition number ([2], [3], [14], [15]) LAA 278, pp.2-32, 998 STRUCTURED PERTURBATIONS AND SYMMETRIC MATRICES SIEGFRIED M. RUMP Abstract. For a given n by n matrix the ratio between the componentwise distance to the nearest singular matrix and

More information

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY

OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower

More information

Fast Iterative Solution of Saddle Point Problems

Fast Iterative Solution of Saddle Point Problems Michele Benzi Department of Mathematics and Computer Science Emory University Atlanta, GA Acknowledgments NSF (Computational Mathematics) Maxim Olshanskii (Mech-Math, Moscow State U.) Zhen Wang (PhD student,

More information

ON AUGMENTED LAGRANGIAN METHODS FOR SADDLE-POINT LINEAR SYSTEMS WITH SINGULAR OR SEMIDEFINITE (1,1) BLOCKS * 1. Introduction

ON AUGMENTED LAGRANGIAN METHODS FOR SADDLE-POINT LINEAR SYSTEMS WITH SINGULAR OR SEMIDEFINITE (1,1) BLOCKS * 1. Introduction Journal of Computational Mathematics Vol.xx, No.x, 200x, 1 9. http://www.global-sci.org/jcm doi:10.4208/jcm.1401-cr7 ON AUGMENED LAGRANGIAN MEHODS FOR SADDLE-POIN LINEAR SYSEMS WIH SINGULAR OR SEMIDEFINIE

More information

Verification methods for linear systems using ufp estimation with rounding-to-nearest

Verification methods for linear systems using ufp estimation with rounding-to-nearest NOLTA, IEICE Paper Verification methods for linear systems using ufp estimation with rounding-to-nearest Yusuke Morikura 1a), Katsuhisa Ozaki 2,4, and Shin ichi Oishi 3,4 1 Graduate School of Fundamental

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation Zheng-jian Bai Abstract In this paper, we first consider the inverse

More information

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b)

x x2 2 + x3 3 x4 3. Use the divided-difference method to find a polynomial of least degree that fits the values shown: (b) Numerical Methods - PROBLEMS. The Taylor series, about the origin, for log( + x) is x x2 2 + x3 3 x4 4 + Find an upper bound on the magnitude of the truncation error on the interval x.5 when log( + x)

More information

The semi-convergence of GSI method for singular saddle point problems

The semi-convergence of GSI method for singular saddle point problems Bull. Math. Soc. Sci. Math. Roumanie Tome 57(05 No., 04, 93 00 The semi-convergence of GSI method for singular saddle point problems by Shu-Xin Miao Abstract Recently, Miao Wang considered the GSI method

More information

Preconditioners for reduced saddle point systems arising in elliptic PDE-constrained optimization problems

Preconditioners for reduced saddle point systems arising in elliptic PDE-constrained optimization problems Zeng et al. Journal of Inequalities and Applications 205 205:355 DOI 0.86/s3660-05-0879-x RESEARCH Open Access Preconditioners for reduced saddle point systems arising in elliptic PDE-constrained optimization

More information

Tikhonov Regularization of Large Symmetric Problems

Tikhonov Regularization of Large Symmetric Problems NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS Numer. Linear Algebra Appl. 2000; 00:1 11 [Version: 2000/03/22 v1.0] Tihonov Regularization of Large Symmetric Problems D. Calvetti 1, L. Reichel 2 and A. Shuibi

More information

Tikhonov Regularization for Weighted Total Least Squares Problems

Tikhonov Regularization for Weighted Total Least Squares Problems Tikhonov Regularization for Weighted Total Least Squares Problems Yimin Wei Naimin Zhang Michael K. Ng Wei Xu Abstract In this paper, we study and analyze the regularized weighted total least squares (RWTLS)

More information

INCOMPLETE FACTORIZATION CONSTRAINT PRECONDITIONERS FOR SADDLE-POINT MATRICES

INCOMPLETE FACTORIZATION CONSTRAINT PRECONDITIONERS FOR SADDLE-POINT MATRICES INCOMPLEE FACORIZAION CONSRAIN PRECONDIIONERS FOR SADDLE-POIN MARICES H. S. DOLLAR AND A. J. WAHEN Abstract. We consider the application of the conjugate gradient method to the solution of large symmetric,

More information

Mathematics and Computer Science

Mathematics and Computer Science Technical Report TR-2007-002 Block preconditioning for saddle point systems with indefinite (1,1) block by Michele Benzi, Jia Liu Mathematics and Computer Science EMORY UNIVERSITY International Journal

More information

Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices

Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices Sparsity-Preserving Difference of Positive Semidefinite Matrix Representation of Indefinite Matrices Jaehyun Park June 1 2016 Abstract We consider the problem of writing an arbitrary symmetric matrix as

More information

Exact Determinant of Integer Matrices

Exact Determinant of Integer Matrices Takeshi Ogita Department of Mathematical Sciences, Tokyo Woman s Christian University Tokyo 167-8585, Japan, ogita@lab.twcu.ac.jp Abstract. This paper is concerned with the numerical computation of the

More information

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations Jin Yun Yuan Plamen Y. Yalamov Abstract A method is presented to make a given matrix strictly diagonally dominant

More information

Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration

Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration Krylov subspace iterative methods for nonsymmetric discrete ill-posed problems in image restoration D. Calvetti a, B. Lewis b and L. Reichel c a Department of Mathematics, Case Western Reserve University,

More information

On the Preconditioning of the Block Tridiagonal Linear System of Equations

On the Preconditioning of the Block Tridiagonal Linear System of Equations On the Preconditioning of the Block Tridiagonal Linear System of Equations Davod Khojasteh Salkuyeh Department of Mathematics, University of Mohaghegh Ardabili, PO Box 179, Ardabil, Iran E-mail: khojaste@umaacir

More information

Scientific Computing

Scientific Computing Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting

More information

Numerical behavior of inexact linear solvers

Numerical behavior of inexact linear solvers Numerical behavior of inexact linear solvers Miro Rozložník joint results with Zhong-zhi Bai and Pavel Jiránek Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic The fourth

More information

Convergence Properties of Preconditioned Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Semidefinite Matrices

Convergence Properties of Preconditioned Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Semidefinite Matrices Convergence Properties of Preconditioned Hermitian and Skew-Hermitian Splitting Methods for Non-Hermitian Positive Semidefinite Matrices Zhong-Zhi Bai 1 Department of Mathematics, Fudan University Shanghai

More information

A PRECONDITIONER FOR LINEAR SYSTEMS ARISING FROM INTERIOR POINT OPTIMIZATION METHODS

A PRECONDITIONER FOR LINEAR SYSTEMS ARISING FROM INTERIOR POINT OPTIMIZATION METHODS A PRECONDITIONER FOR LINEAR SYSTEMS ARISING FROM INTERIOR POINT OPTIMIZATION METHODS TIM REES AND CHEN GREIF Abstract. We explore a preconditioning technique applied to the problem of solving linear systems

More information

Performance Comparison of Relaxation Methods with Singular and Nonsingular Preconditioners for Singular Saddle Point Problems

Performance Comparison of Relaxation Methods with Singular and Nonsingular Preconditioners for Singular Saddle Point Problems Applied Mathematical Sciences, Vol. 10, 2016, no. 30, 1477-1488 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2016.6269 Performance Comparison of Relaxation Methods with Singular and Nonsingular

More information

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS

ON A GENERAL CLASS OF PRECONDITIONERS FOR NONSYMMETRIC GENERALIZED SADDLE POINT PROBLEMS U..B. Sci. Bull., Series A, Vol. 78, Iss. 4, 06 ISSN 3-707 ON A GENERAL CLASS OF RECONDIIONERS FOR NONSYMMERIC GENERALIZED SADDLE OIN ROBLE Fatemeh anjeh Ali BEIK his paper deals with applying a class

More information

arxiv: v1 [math.na] 26 Dec 2013

arxiv: v1 [math.na] 26 Dec 2013 General constraint preconditioning iteration method for singular saddle-point problems Ai-Li Yang a,, Guo-Feng Zhang a, Yu-Jiang Wu a,b a School of Mathematics and Statistics, Lanzhou University, Lanzhou

More information

SEMI-CONVERGENCE ANALYSIS OF THE INEXACT UZAWA METHOD FOR SINGULAR SADDLE POINT PROBLEMS

SEMI-CONVERGENCE ANALYSIS OF THE INEXACT UZAWA METHOD FOR SINGULAR SADDLE POINT PROBLEMS REVISTA DE LA UNIÓN MATEMÁTICA ARGENTINA Vol. 53, No. 1, 2012, 61 70 SEMI-CONVERGENCE ANALYSIS OF THE INEXACT UZAWA METHOD FOR SINGULAR SADDLE POINT PROBLEMS JIAN-LEI LI AND TING-ZHU HUANG Abstract. Recently,

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

Permutation transformations of tensors with an application

Permutation transformations of tensors with an application DOI 10.1186/s40064-016-3720-1 RESEARCH Open Access Permutation transformations of tensors with an application Yao Tang Li *, Zheng Bo Li, Qi Long Liu and Qiong Liu *Correspondence: liyaotang@ynu.edu.cn

More information

Preconditioners for the incompressible Navier Stokes equations

Preconditioners for the incompressible Navier Stokes equations Preconditioners for the incompressible Navier Stokes equations C. Vuik M. ur Rehman A. Segal Delft Institute of Applied Mathematics, TU Delft, The Netherlands SIAM Conference on Computational Science and

More information

Two-parameter generalized Hermitian and skew-hermitian splitting iteration method

Two-parameter generalized Hermitian and skew-hermitian splitting iteration method To appear in the International Journal of Computer Mathematics Vol. 00, No. 00, Month 0XX, 1 Two-parameter generalized Hermitian and skew-hermitian splitting iteration method N. Aghazadeh a, D. Khojasteh

More information

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6

CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently

More information

SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS

SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS SOS TENSOR DECOMPOSITION: THEORY AND APPLICATIONS HAIBIN CHEN, GUOYIN LI, AND LIQUN QI Abstract. In this paper, we examine structured tensors which have sum-of-squares (SOS) tensor decomposition, and study

More information

Block triangular preconditioner for static Maxwell equations*

Block triangular preconditioner for static Maxwell equations* Volume 3, N. 3, pp. 589 61, 11 Copyright 11 SBMAC ISSN 11-85 www.scielo.br/cam Block triangular preconditioner for static Maxwell equations* SHI-LIANG WU 1, TING-ZHU HUANG and LIANG LI 1 School of Mathematics

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT -09 Computational and Sensitivity Aspects of Eigenvalue-Based Methods for the Large-Scale Trust-Region Subproblem Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug

More information

LINEARIZED BREGMAN ITERATIONS FOR FRAME-BASED IMAGE DEBLURRING

LINEARIZED BREGMAN ITERATIONS FOR FRAME-BASED IMAGE DEBLURRING LINEARIZED BREGMAN ITERATIONS FOR FRAME-BASED IMAGE DEBLURRING JIAN-FENG CAI, STANLEY OSHER, AND ZUOWEI SHEN Abstract. Real images usually have sparse approximations under some tight frame systems derived

More information

Automatic Verified Numerical Computations for Linear Systems

Automatic Verified Numerical Computations for Linear Systems Automatic Verified Numerical Computations for Linear Systems Katsuhisa Ozaki (Shibaura Institute of Technology) joint work with Takeshi Ogita (Tokyo Woman s Christian University) Shin ichi Oishi (Waseda

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

Quadratic reformulation techniques for 0-1 quadratic programs

Quadratic reformulation techniques for 0-1 quadratic programs OSE SEMINAR 2014 Quadratic reformulation techniques for 0-1 quadratic programs Ray Pörn CENTER OF EXCELLENCE IN OPTIMIZATION AND SYSTEMS ENGINEERING ÅBO AKADEMI UNIVERSITY ÅBO NOVEMBER 14th 2014 2 Structure

More information

GAKUTO International Series

GAKUTO International Series GAKUTO International Series Mathematical Sciences and Applications, Vol.28(2008) Proceedings of Fourth JSIAM-SIMMAI Seminar on Industrial and Applied Mathematics, pp.139-148 A COMPUTATIONAL APPROACH TO

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

IN THIS paper, we focus on the most common data production

IN THIS paper, we focus on the most common data production IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL 2, NO 2, DECEMBER 22 479 Non-Lipschitz l p -Regularization and Box Constrained Model for Image Restoration Xiaojun Chen, Michael K Ng, and Chao Zhang Abstract

More information

1 Computing with constraints

1 Computing with constraints Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)

More information

Arnoldi-Tikhonov regularization methods

Arnoldi-Tikhonov regularization methods Arnoldi-Tikhonov regularization methods Bryan Lewis a, Lothar Reichel b,,1 a Rocketcalc LLC, 100 W. Crain Ave., Kent, OH 44240, USA. b Department of Mathematical Sciences, Kent State University, Kent,

More information

Adaptive and Efficient Algorithm for 2D Orientation Problem

Adaptive and Efficient Algorithm for 2D Orientation Problem Japan J. Indust. Appl. Math., 26 (2009), 215 231 Area 2 Adaptive and Efficient Algorithm for 2D Orientation Problem Katsuhisa Ozaki, Takeshi Ogita, Siegfried M. Rump and Shin ichi Oishi Research Institute

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

14.2 QR Factorization with Column Pivoting

14.2 QR Factorization with Column Pivoting page 531 Chapter 14 Special Topics Background Material Needed Vector and Matrix Norms (Section 25) Rounding Errors in Basic Floating Point Operations (Section 33 37) Forward Elimination and Back Substitution

More information

c 1999 Society for Industrial and Applied Mathematics

c 1999 Society for Industrial and Applied Mathematics SIAM J. MATRIX ANAL. APPL. Vol. 21, No. 1, pp. 185 194 c 1999 Society for Industrial and Applied Mathematics TIKHONOV REGULARIZATION AND TOTAL LEAST SQUARES GENE H. GOLUB, PER CHRISTIAN HANSEN, AND DIANNE

More information

Matrix decompositions

Matrix decompositions Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers

More information

A fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring

A fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring A fast nonstationary preconditioning strategy for ill-posed problems, with application to image deblurring Marco Donatelli Dept. of Science and High Tecnology U. Insubria (Italy) Joint work with M. Hanke

More information

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems

AMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given

More information

DUAL REGULARIZED TOTAL LEAST SQUARES SOLUTION FROM TWO-PARAMETER TRUST-REGION ALGORITHM. Geunseop Lee

DUAL REGULARIZED TOTAL LEAST SQUARES SOLUTION FROM TWO-PARAMETER TRUST-REGION ALGORITHM. Geunseop Lee J. Korean Math. Soc. 0 (0), No. 0, pp. 1 0 https://doi.org/10.4134/jkms.j160152 pissn: 0304-9914 / eissn: 2234-3008 DUAL REGULARIZED TOTAL LEAST SQUARES SOLUTION FROM TWO-PARAMETER TRUST-REGION ALGORITHM

More information

Indefinite Preconditioners for PDE-constrained optimization problems. V. Simoncini

Indefinite Preconditioners for PDE-constrained optimization problems. V. Simoncini Indefinite Preconditioners for PDE-constrained optimization problems V. Simoncini Dipartimento di Matematica, Università di Bologna, Italy valeria.simoncini@unibo.it Partly joint work with Debora Sesana,

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

Black Box Linear Algebra

Black Box Linear Algebra Black Box Linear Algebra An Introduction to Wiedemann s Approach William J. Turner Department of Mathematics & Computer Science Wabash College Symbolic Computation Sometimes called Computer Algebra Symbols

More information

9. Numerical linear algebra background

9. Numerical linear algebra background Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization

More information

A generalization of the Gauss-Seidel iteration method for solving absolute value equations

A generalization of the Gauss-Seidel iteration method for solving absolute value equations A generalization of the Gauss-Seidel iteration method for solving absolute value equations Vahid Edalatpour, Davod Hezari and Davod Khojasteh Salkuyeh Faculty of Mathematical Sciences, University of Guilan,

More information

Cosine transform preconditioners for high resolution image reconstruction

Cosine transform preconditioners for high resolution image reconstruction Linear Algebra and its Applications 36 (000) 89 04 www.elsevier.com/locate/laa Cosine transform preconditioners for high resolution image reconstruction Michael K. Ng a,,, Raymond H. Chan b,,tonyf.chan

More information

Lecture 2: Computing functions of dense matrices

Lecture 2: Computing functions of dense matrices Lecture 2: Computing functions of dense matrices Paola Boito and Federico Poloni Università di Pisa Pisa - Hokkaido - Roma2 Summer School Pisa, August 27 - September 8, 2018 Introduction In this lecture

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Calculate Sensitivity Function Using Parallel Algorithm

Calculate Sensitivity Function Using Parallel Algorithm Journal of Computer Science 4 (): 928-933, 28 ISSN 549-3636 28 Science Publications Calculate Sensitivity Function Using Parallel Algorithm Hamed Al Rjoub Irbid National University, Irbid, Jordan Abstract:

More information

9. Numerical linear algebra background

9. Numerical linear algebra background Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization

More information

The antitriangular factorisation of saddle point matrices

The antitriangular factorisation of saddle point matrices The antitriangular factorisation of saddle point matrices J. Pestana and A. J. Wathen August 29, 2013 Abstract Mastronardi and Van Dooren [this journal, 34 (2013) pp. 173 196] recently introduced the block

More information

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem

A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present

More information

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix

We first repeat some well known facts about condition numbers for normwise and componentwise perturbations. Consider the matrix BIT 39(1), pp. 143 151, 1999 ILL-CONDITIONEDNESS NEEDS NOT BE COMPONENTWISE NEAR TO ILL-POSEDNESS FOR LEAST SQUARES PROBLEMS SIEGFRIED M. RUMP Abstract. The condition number of a problem measures the sensitivity

More information

Z-eigenvalue methods for a global polynomial optimization problem

Z-eigenvalue methods for a global polynomial optimization problem Math. Program., Ser. A (2009) 118:301 316 DOI 10.1007/s10107-007-0193-6 FULL LENGTH PAPER Z-eigenvalue methods for a global polynomial optimization problem Liqun Qi Fei Wang Yiju Wang Received: 6 June

More information

Preconditioned Parallel Block Jacobi SVD Algorithm

Preconditioned Parallel Block Jacobi SVD Algorithm Parallel Numerics 5, 15-24 M. Vajteršic, R. Trobec, P. Zinterhof, A. Uhl (Eds.) Chapter 2: Matrix Algebra ISBN 961-633-67-8 Preconditioned Parallel Block Jacobi SVD Algorithm Gabriel Okša 1, Marián Vajteršic

More information

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES olume 10 2009, Issue 2, Article 41, 10 pp. ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES HANYU LI, HU YANG, AND HUA SHAO COLLEGE OF MATHEMATICS AND PHYSICS CHONGQING UNIERSITY

More information

A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems

A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems A Robust Preconditioner for the Hessian System in Elliptic Optimal Control Problems Etereldes Gonçalves 1, Tarek P. Mathew 1, Markus Sarkis 1,2, and Christian E. Schaerer 1 1 Instituto de Matemática Pura

More information

Optimisation in Higher Dimensions

Optimisation in Higher Dimensions CHAPTER 6 Optimisation in Higher Dimensions Beyond optimisation in 1D, we will study two directions. First, the equivalent in nth dimension, x R n such that f(x ) f(x) for all x R n. Second, constrained

More information

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725

Numerical Linear Algebra Primer. Ryan Tibshirani Convex Optimization /36-725 Numerical Linear Algebra Primer Ryan Tibshirani Convex Optimization 10-725/36-725 Last time: proximal gradient descent Consider the problem min g(x) + h(x) with g, h convex, g differentiable, and h simple

More information

Lanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact

Lanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact Lanczos tridigonalization and Golub - Kahan bidiagonalization: Ideas, connections and impact Zdeněk Strakoš Academy of Sciences and Charles University, Prague http://www.cs.cas.cz/ strakos Hong Kong, February

More information

Complexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization

Complexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization Mathematical Programming manuscript No. (will be inserted by the editor) Complexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization Wei Bian Xiaojun Chen Yinyu Ye July

More information

AMS Foundations Exam - Part A, January 2018

AMS Foundations Exam - Part A, January 2018 AMS Foundations Exam - Part A, January 2018 Name: ID Num. Part A: / 75 Part B: / 75 Total: / 150 This component of the exam (Part A) consists of two sections (Linear Algebra and Advanced Calculus) with

More information

Numerical Linear Algebra and. Image Restoration

Numerical Linear Algebra and. Image Restoration Numerical Linear Algebra and Image Restoration Maui High Performance Computing Center Wednesday, October 8, 2003 James G. Nagy Emory University Atlanta, GA Thanks to: AFOSR, Dave Tyler, Stuart Jefferies,

More information

SPARSE SIGNAL RESTORATION. 1. Introduction

SPARSE SIGNAL RESTORATION. 1. Introduction SPARSE SIGNAL RESTORATION IVAN W. SELESNICK 1. Introduction These notes describe an approach for the restoration of degraded signals using sparsity. This approach, which has become quite popular, is useful

More information

Journal of Computational and Applied Mathematics. Optimization of the parameterized Uzawa preconditioners for saddle point matrices

Journal of Computational and Applied Mathematics. Optimization of the parameterized Uzawa preconditioners for saddle point matrices Journal of Computational Applied Mathematics 6 (009) 136 154 Contents lists available at ScienceDirect Journal of Computational Applied Mathematics journal homepage: wwwelseviercom/locate/cam Optimization

More information

Jae Heon Yun and Yu Du Han

Jae Heon Yun and Yu Du Han Bull. Korean Math. Soc. 39 (2002), No. 3, pp. 495 509 MODIFIED INCOMPLETE CHOLESKY FACTORIZATION PRECONDITIONERS FOR A SYMMETRIC POSITIVE DEFINITE MATRIX Jae Heon Yun and Yu Du Han Abstract. We propose

More information

Finite difference method for elliptic problems: I

Finite difference method for elliptic problems: I Finite difference method for elliptic problems: I Praveen. C praveen@math.tifrbng.res.in Tata Institute of Fundamental Research Center for Applicable Mathematics Bangalore 560065 http://math.tifrbng.res.in/~praveen

More information

Non-Lipschitz l p -Regularization and Box Constrained Model for Image Restoration

Non-Lipschitz l p -Regularization and Box Constrained Model for Image Restoration Non-Lipschitz l p -Regularization and Box Constrained Model for Image Restoration Xiaojun Chen, Michael K. Ng and Chao Zhang Abstract Nonsmooth nonconvex regularization has remarkable advantages for the

More information

A Smoothing Newton Method for Solving Absolute Value Equations

A Smoothing Newton Method for Solving Absolute Value Equations A Smoothing Newton Method for Solving Absolute Value Equations Xiaoqin Jiang Department of public basic, Wuhan Yangtze Business University, Wuhan 430065, P.R. China 392875220@qq.com Abstract: In this paper,

More information

Preconditioned conjugate gradient algorithms with column scaling

Preconditioned conjugate gradient algorithms with column scaling Proceedings of the 47th IEEE Conference on Decision and Control Cancun, Mexico, Dec. 9-11, 28 Preconditioned conjugate gradient algorithms with column scaling R. Pytla Institute of Automatic Control and

More information

Multilevel low-rank approximation preconditioners Yousef Saad Department of Computer Science and Engineering University of Minnesota

Multilevel low-rank approximation preconditioners Yousef Saad Department of Computer Science and Engineering University of Minnesota Multilevel low-rank approximation preconditioners Yousef Saad Department of Computer Science and Engineering University of Minnesota SIAM CSE Boston - March 1, 2013 First: Joint work with Ruipeng Li Work

More information

Integer Least Squares: Sphere Decoding and the LLL Algorithm

Integer Least Squares: Sphere Decoding and the LLL Algorithm Integer Least Squares: Sphere Decoding and the LLL Algorithm Sanzheng Qiao Department of Computing and Software McMaster University 28 Main St. West Hamilton Ontario L8S 4L7 Canada. ABSTRACT This paper

More information

Iterative Methods for Ill-Posed Problems

Iterative Methods for Ill-Posed Problems Iterative Methods for Ill-Posed Problems Based on joint work with: Serena Morigi Fiorella Sgallari Andriy Shyshkov Salt Lake City, May, 2007 Outline: Inverse and ill-posed problems Tikhonov regularization

More information

Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition

Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Global Quadratic Minimization over Bivalent Constraints: Necessary and Sufficient Global Optimality Condition Guoyin Li Communicated by X.Q. Yang Abstract In this paper, we establish global optimality

More information

Matrix Multiplication Chapter IV Special Linear Systems

Matrix Multiplication Chapter IV Special Linear Systems Matrix Multiplication Chapter IV Special Linear Systems By Gokturk Poyrazoglu The State University of New York at Buffalo BEST Group Winter Lecture Series Outline 1. Diagonal Dominance and Symmetry a.

More information

A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems

A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems A Regularized Directional Derivative-Based Newton Method for Inverse Singular Value Problems Wei Ma Zheng-Jian Bai September 18, 2012 Abstract In this paper, we give a regularized directional derivative-based

More information

Inverse Ill Posed Problems in Image Processing

Inverse Ill Posed Problems in Image Processing Inverse Ill Posed Problems in Image Processing Image Deblurring I. Hnětynková 1,M.Plešinger 2,Z.Strakoš 3 hnetynko@karlin.mff.cuni.cz, martin.plesinger@tul.cz, strakos@cs.cas.cz 1,3 Faculty of Mathematics

More information

DELFT UNIVERSITY OF TECHNOLOGY

DELFT UNIVERSITY OF TECHNOLOGY DELFT UNIVERSITY OF TECHNOLOGY REPORT 10-12 Large-Scale Eigenvalue Problems in Trust-Region Calculations Marielba Rojas, Bjørn H. Fotland, and Trond Steihaug ISSN 1389-6520 Reports of the Department of

More information

On solving linear systems arising from Shishkin mesh discretizations

On solving linear systems arising from Shishkin mesh discretizations On solving linear systems arising from Shishkin mesh discretizations Petr Tichý Faculty of Mathematics and Physics, Charles University joint work with Carlos Echeverría, Jörg Liesen, and Daniel Szyld October

More information

ELA

ELA CONSTRUCTING COPOSITIVE MATRICES FROM INTERIOR MATRICES CHARLES R. JOHNSON AND ROBERT REAMS Abstract. Let A be an n by n symmetric matrix with real entries. Using the l -norm for vectors and letting S

More information