Journal of Computational and Applied Mathematics. New matrix iterative methods for constraint solutions of the matrix equation AXB = C
Journal of Computational and Applied Mathematics 235 (2010) 726–735

New matrix iterative methods for constraint solutions of the matrix equation AXB = C

Zhen-yun Peng
Guilin University of Electronic Technology, Guilin, PR China

Article history: Received 5 March 2007; received in revised form 9 August 2009.
Keywords: Iterative algorithm; Matrix equation; Matrix nearness problem; Minimum residual problem.

Abstract. In this paper, two new matrix iterative methods are presented to solve the matrix equation AXB = C, the minimum residual problem min_{X∈S} ‖AXB − C‖ and the matrix nearness problem min_{X∈S_E} ‖X − X*‖, where S is the set of constraint matrices (such as symmetric, symmetric R-symmetric and (R, S)-symmetric matrices) and S_E is the solution set of the above matrix equation or minimum residual problem. These matrix iterative methods have faster convergence rates and higher accuracy than the matrix iterative methods proposed in Deng et al. (2006) [13], Huang et al. (2008) [15], Peng (2005) [16] and Lei and Liao (2007) [17]. Paige's algorithms are used as the frame method for deriving these matrix iterative methods. Numerical examples are used to illustrate the efficiency of these new methods.
© 2010 Elsevier B.V. All rights reserved.

1. Introduction

Denote by R^{m×n} and SR^{n×n} the sets of m×n real matrices and n×n symmetric matrices, respectively. Denote by R(A), N(A) and ‖A‖ the column space, the null space and the Frobenius norm of the matrix A, respectively. We use A ⊗ B to stand for the Kronecker product of the matrices A and B. For X = [x_1, x_2, ..., x_n] ∈ R^{m×n}, we denote by vec(X) = [x_1^T, x_2^T, ..., x_n^T]^T the vector formed by stacking the columns of X.
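The vec and Kronecker-product notation introduced above is what turns each matrix equation below into an ordinary linear system. As a quick numerical sanity check (not part of the paper; NumPy and the sizes shown are used purely for illustration), the standard identity vec(AXB) = (B^T ⊗ A) vec(X) can be verified as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 6))   # A in R^{m x n}
X = rng.standard_normal((6, 6))   # the unknown matrix
B = rng.standard_normal((6, 7))   # B in R^{n x p}

# vec(X): stack the columns of X (column-major order)
vec = lambda T: T.flatten(order="F")

# The identity vec(A X B) = (B^T kron A) vec(X) underlies the whole derivation.
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```

This is the identity used repeatedly in Section 3 to rewrite the constrained matrix problems as unconstrained vector problems.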
We reconsider numerical methods to compute constraint solutions X of the following problems:

AXB = C,   (1.1)
min_{X∈S} ‖AXB − C‖,   (1.2)
min_{X∈S_E} ‖X − X*‖,   (1.3)

where A ∈ R^{m×n}, B ∈ R^{n×p}, C ∈ R^{m×p} and X* ∈ R^{n×n}; S is the set of constraint matrices, such as symmetric, skew symmetric, symmetric R-symmetric, symmetric R-skew symmetric [1,2], (R, S)-symmetric or (R, S)-skew symmetric [3] matrices; and S_E is the solution set of the matrix equation (1.1) or of the minimum residual problem (1.2).

Direct methods to solve the above problems with the unknown matrix X in various special structures have been used by several authors. For example, the symmetric solutions were considered in [4–8]. The bisymmetric, centro-symmetric and Re-nonnegative definite solutions were considered in [9–12], respectively. By using direct methods, the necessary and sufficient conditions for the existence of the general solution were derived, and expressions for the solutions were given as well. Direct methods, however, may be less efficient for large sparse coefficient matrices A and B, since they are limited by the storage space and computing speed of computers. Therefore, iterative methods to solve matrix equations have attracted much interest recently.

[Research supported by the National Natural Science Foundation of China and the Provincial Natural Science Foundation of Guangxi. E-mail address: yunzhenp@163.com.]

For example, Deng et al. [13] and Peng et al. [14] designed similar matrix-form iterative methods, based on the fundamental idea of the conjugate gradient method for standard systems of linear equations, to compute the symmetric solutions of problems (1.1) and (1.3). With the same idea, Huang et al. [15] designed a matrix-form iterative method to compute the skew symmetric solutions of problems (1.1) and (1.3). Peng [16] and Lei and Liao [17] designed matrix-form iterative methods to compute the symmetric solutions of problems (1.2) and (1.3). The matrix-form iterative methods proposed in [13–15] cannot be used to compute the minimum Frobenius norm solution of problem (1.2). The matrix-form iterative methods proposed in [16,17] can be used to compute the minimum Frobenius norm solutions of both problem (1.1) and problem (1.2); in general, however, the method proposed in [16] has a very slow convergence rate, and the method proposed in [17] has very low accuracy.

In this paper, two new matrix iterative methods are presented to solve the matrix equation (1.1), the minimum residual problem (1.2) and the matrix nearness problem (1.3). These new matrix iterative methods have, in some cases, faster convergence rates and higher accuracy than the iterative methods proposed in the references above. We use Paige's algorithms [18], which are based on the bidiagonalization procedure of Golub and Kahan [19], as the frame method for deriving the new matrix iterative methods. The basic idea is as follows: we first characterize the constraint X ∈ S; then, by the Kronecker product of matrices, we transform problems (1.1) and (1.2) into an unconstrained linear system and a linear least squares problem in vector form, which can be solved by Paige's algorithms; finally, we transform the vector-form iterative methods into matrix-form iterative methods.
This paper is organized as follows. In Section 2, we briefly review Paige's algorithms for solving linear systems and least squares problems. Based on Paige's algorithms, we propose two new matrix iterative algorithms to solve problems (1.1)–(1.3) in Section 3. Finally, several numerical examples are given to illustrate the efficiency of the new algorithms.

2. Paige's algorithms

Paige's algorithms are used to compute the minimum norm solution x* of the following problems:

Linear systems: Mx = f.   (2.1)
Linear least squares: min ‖Mx − f‖,   (2.2)

where ‖·‖ denotes the l_2-norm, that is, ‖u‖² = ⟨u, u⟩ = u^T u. Paige's algorithms are based on the bidiagonalization procedure of Golub and Kahan [19], which has the following two forms.

Bidiag 1 (starting vector f; reduction to lower bidiagonal form):

β_1 u_1 = f,  α_1 v_1 = M^T u_1,
β_{i+1} u_{i+1} = M v_i − α_i u_i,
α_{i+1} v_{i+1} = M^T u_{i+1} − β_{i+1} v_i,  i = 1, 2, ....   (2.3)

The scalars α_i ≥ 0 and β_i ≥ 0 are chosen so that ‖u_i‖ = ‖v_i‖ = 1. With the definitions

U_k = [u_1, u_2, ..., u_k],  V_k = [v_1, v_2, ..., v_k],

and the (k+1)×k lower bidiagonal matrix

B_k =
[ α_1
  β_2  α_2
       β_3  ⋱
            ⋱  α_k
               β_{k+1} ],

the recurrence relations (2.3) may be rewritten as

U_{k+1}(β_1 e_1) = f,  M V_k = U_{k+1} B_k,  M^T U_{k+1} = V_k B_k^T + α_{k+1} v_{k+1} e_{k+1}^T,

where e_1 and e_{k+1} are, respectively, the first and last columns of the identity matrix I, of size implied by the context. If exact arithmetic were used, then U_{k+1}^T U_{k+1} = I and V_k^T V_k = I.

Bidiag 2 (starting vector M^T f; reduction to upper bidiagonal form):

θ_1 v_1 = M^T f,  ρ_1 p_1 = M v_1,
θ_{i+1} v_{i+1} = M^T p_i − ρ_i v_i,
ρ_{i+1} p_{i+1} = M v_{i+1} − θ_{i+1} p_i,  i = 1, 2, ....   (2.4)

The scalars ρ_i ≥ 0 and θ_i ≥ 0 are chosen so that ‖p_i‖ = ‖v_i‖ = 1. In this case, if

P_k = [p_1, p_2, ..., p_k],  V_k = [v_1, v_2, ..., v_k],

and R_k is the k×k upper bidiagonal matrix

R_k =
[ ρ_1  θ_2
       ρ_2  θ_3
            ⋱   ⋱
                ρ_{k−1}  θ_k
                         ρ_k ],

the recurrence relations (2.4) may be rewritten as

V_k(θ_1 e_1) = M^T f,  M V_k = P_k R_k,  M^T P_k = V_k R_k^T + θ_{k+1} v_{k+1} e_k^T,

and also P_k^T P_k = V_k^T V_k = I if exact arithmetic were used.

Applying Bidiag 1 and Bidiag 2, Paige constructed two algorithms, named Paige algorithm 1 and Paige algorithm 2 respectively, to compute the unique minimum l_2-norm solution x* of the linear system (2.1) and of the linear least squares problem (2.2), as follows.

Paige algorithm 1
(1) τ_0 = −1; ξ_0 = −1; ω_0 = 0; z_0 = 0; w_0 = 0; β_1 u_1 = f; α_1 v_1 = M^T u_1;
(2) For i = 1, 2, ... until {x_i} converges, do
 (a) ξ_i = −ξ_{i−1} β_i / α_i; z_i = z_{i−1} + ξ_i v_i;
 (b) ω_i = (τ_{i−1} − β_i ω_{i−1}) / α_i; w_i = w_{i−1} + ω_i v_i;
 (c) β_{i+1} u_{i+1} = M v_i − α_i u_i;
 (d) τ_i = −τ_{i−1} α_i / β_{i+1};
 (e) α_{i+1} v_{i+1} = M^T u_{i+1} − β_{i+1} v_i;
 (f) γ_i = β_{i+1} ξ_i / (β_{i+1} ω_i − τ_i);
 (g) x_i = z_i − γ_i w_i.

Paige algorithm 2
(1) θ_1 v_1 = M^T f; ρ_1 p_1 = M v_1; w_1 = v_1 / ρ_1; ξ_1 = θ_1 / ρ_1; x_1 = ξ_1 w_1;
(2) For i = 1, 2, ... until {x_i} converges, do
 (a) θ_{i+1} v_{i+1} = M^T p_i − ρ_i v_i;
 (b) ρ_{i+1} p_{i+1} = M v_{i+1} − θ_{i+1} p_i;
 (c) w_{i+1} = (v_{i+1} − θ_{i+1} w_i) / ρ_{i+1};
 (d) ξ_{i+1} = −ξ_i θ_{i+1} / ρ_{i+1};
 (e) x_{i+1} = x_i + ξ_{i+1} w_{i+1}.

The scalars α_i ≥ 0, β_i ≥ 0 in Paige algorithm 1 and the scalars ρ_i ≥ 0, θ_i ≥ 0 in Paige algorithm 2 are chosen so that ‖u_i‖ = ‖v_i‖ = ‖p_i‖ = 1. Paige also pointed out that the stopping criteria for Paige algorithms 1 and 2 can be taken as ‖f − M x_i‖ ≤ ε, |ξ_i| ≤ ε or ‖x_i − x_{i−1}‖ ≤ ε, where ε > 0 is a small tolerance, to compute the unique minimum l_2-norm solution of the linear system (2.1), and as ‖M^T (f − M x_i)‖ ≤ ε or ‖x_i − x_{i−1}‖ ≤ ε to compute the unique minimum l_2-norm solution of the linear least squares problem (2.2).

3. New matrix iterative methods

Based on Paige algorithms 1 and 2, we propose two new matrix iterative algorithms to solve (1.1)–(1.3) in this section. We first consider the linear matrix equation (1.1) and the minimum residual problem (1.2)
with the unknown matrix X in the following three cases (if X is constrained in some other way, the analogous results can be obtained, and are thus omitted here).

Case 1. X is a symmetric matrix.
Case 2. X is a symmetric R-symmetric matrix, that is, X ∈ {Y ∈ R^{n×n} | Y = Y^T, RYR = Y, R^T = R = R^{−1} ∈ R^{n×n}}.
Case 3. X is an (R, S)-symmetric matrix, that is, X ∈ {Y ∈ R^{m×n} | RYS = Y, R^T = R = R^{−1} ∈ R^{m×m}, S^T = S = S^{−1} ∈ R^{n×n}}.

In Case 1, note that X is a symmetric solution of the matrix equation AXB = C if and only if X is a symmetric solution of the system of matrix equations

{ AXB = C,
  B^T X A^T = C^T. }   (3.1)

The system of matrix equations (3.1) can be transformed into the system of linear equations (2.1) with coefficient matrix M and vector f given by

M = [ B^T ⊗ A
      A ⊗ B^T ],   f = [ vec(C)
                         vec(C^T) ].
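This reduction can be checked numerically. The following sketch (an illustration in NumPy, not the paper's code; the sizes are arbitrary choices of this note) builds the stacked system for Case 1 with a dense least-squares solver in place of the iterative methods, and confirms that the minimum-norm solution of the stacked system is automatically symmetric and solves AXB = C when the equation is consistent:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 4, 5, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
Xs = rng.standard_normal((n, n)); Xs = Xs + Xs.T   # a symmetric matrix
C = A @ Xs @ B                                     # consistent right-hand side

vec = lambda T: T.flatten(order="F")

# Stacked system (3.1) in vectorized form:  M vec(X) = f
M = np.vstack([np.kron(B.T, A), np.kron(A, B.T)])
f = np.concatenate([vec(C), vec(C.T)])

x, *_ = np.linalg.lstsq(M, f, rcond=None)   # minimum-norm solution
X = x.reshape(n, n, order="F")

assert np.allclose(X, X.T)          # the minimum-norm solution is symmetric
assert np.allclose(A @ X @ B, C)    # and it solves A X B = C
```

The symmetry of the minimum-norm solution follows because the solution set of the pair (3.1) is closed under transposition, and the minimum-norm element is unique.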

Therefore, β_1 u_1 = f, α_1 v_1 = M^T u_1, β_{i+1} u_{i+1} = M v_i − α_i u_i (i = 1, 2, ...) and α_{i+1} v_{i+1} = M^T u_{i+1} − β_{i+1} v_i (i = 1, 2, ...) can be written as

β_1 u_1 = [ vec(C); vec(C^T) ],   (3.2)
α_1 v_1 = (B ⊗ A^T, A^T ⊗ B) u_1,   (3.3)
β_{i+1} u_{i+1} = [ B^T ⊗ A; A ⊗ B^T ] v_i − α_i u_i,   (3.4)
α_{i+1} v_{i+1} = (B ⊗ A^T, A^T ⊗ B) u_{i+1} − β_{i+1} v_i.   (3.5)

From (3.2)–(3.5), we have

u_i = [ vec(U_i); vec(U_i^T) ],  v_i = vec(V_i),  U_i ∈ R^{m×p}, V_i ∈ SR^{n×n}.

And so the vector-form relations β_1 u_1 = f, α_1 v_1 = M^T u_1, β_{i+1} u_{i+1} = M v_i − α_i u_i (i = 1, 2, ...) and α_{i+1} v_{i+1} = M^T u_{i+1} − β_{i+1} v_i (i = 1, 2, ...) in Paige algorithm 1 can be rewritten in the matrix form

β_1 U_1 = C,  β_1 = √2 ‖C‖,
α_1 V_1 = A^T U_1 B^T + B U_1^T A,  α_1 = ‖A^T U_1 B^T + B U_1^T A‖,
β_{i+1} U_{i+1} = A V_i B − α_i U_i,  β_{i+1} = √2 ‖A V_i B − α_i U_i‖,
α_{i+1} V_{i+1} = A^T U_{i+1} B^T + B U_{i+1}^T A − β_{i+1} V_i,  α_{i+1} = ‖A^T U_{i+1} B^T + B U_{i+1}^T A − β_{i+1} V_i‖.

Analogously, θ_1 v_1 = M^T f, ρ_1 p_1 = M v_1, θ_{i+1} v_{i+1} = M^T p_i − ρ_i v_i (i = 1, 2, ...) and ρ_{i+1} p_{i+1} = M v_{i+1} − θ_{i+1} p_i (i = 1, 2, ...) can be written as

θ_1 v_1 = (B ⊗ A^T, A^T ⊗ B) [ vec(C); vec(C^T) ],   (3.6)
ρ_1 p_1 = [ B^T ⊗ A; A ⊗ B^T ] v_1,   (3.7)
θ_{i+1} v_{i+1} = (B ⊗ A^T, A^T ⊗ B) p_i − ρ_i v_i,   (3.8)
ρ_{i+1} p_{i+1} = [ B^T ⊗ A; A ⊗ B^T ] v_{i+1} − θ_{i+1} p_i.   (3.9)

From (3.6)–(3.9), we have

v_i = vec(V_i),  p_i = [ vec(P_i); vec(P_i^T) ],  V_i ∈ SR^{n×n}, P_i ∈ R^{m×p}.

And so the vector-form relations θ_1 v_1 = M^T f, ρ_1 p_1 = M v_1, θ_{i+1} v_{i+1} = M^T p_i − ρ_i v_i (i = 1, 2, ...) and ρ_{i+1} p_{i+1} = M v_{i+1} − θ_{i+1} p_i (i = 1, 2, ...) in Paige algorithm 2 can be rewritten in the matrix form

θ_1 V_1 = A^T C B^T + B C^T A,  θ_1 = ‖A^T C B^T + B C^T A‖,
ρ_1 P_1 = A V_1 B,  ρ_1 = √2 ‖A V_1 B‖,
θ_{i+1} V_{i+1} = A^T P_i B^T + B P_i^T A − ρ_i V_i,  θ_{i+1} = ‖A^T P_i B^T + B P_i^T A − ρ_i V_i‖,
ρ_{i+1} P_{i+1} = A V_{i+1} B − θ_{i+1} P_i,  ρ_{i+1} = √2 ‖A V_{i+1} B − θ_{i+1} P_i‖.

In Case 2, note that X is a symmetric R-symmetric solution of the matrix equation AXB = C if and only if X is a symmetric R-symmetric solution of the system of matrix equations

{ AXB = C,
  B^T X A^T = C^T,
  ARXRB = C,
  B^T R X R A^T = C^T. }   (3.10)
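The equivalence behind (3.10) can be sanity-checked numerically. The sketch below (NumPy, illustrative sizes; the symmetric involution R is built here as a Householder reflection, which is one convenient choice, not the paper's) projects a random matrix onto the symmetric R-symmetric matrices and confirms that any such X satisfies all four equations of (3.10) with a common right-hand side:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, p = 4, 5, 3
q = rng.standard_normal((n, 1)); q /= np.linalg.norm(q)
R = np.eye(n) - 2 * q @ q.T          # symmetric involution: R^T = R = R^{-1}
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

# Project a random matrix onto the symmetric R-symmetric matrices.
Y = rng.standard_normal((n, n))
X = (Y + Y.T + R @ (Y + Y.T) @ R) / 4
assert np.allclose(X, X.T) and np.allclose(R @ X @ R, X)

# Such an X satisfies all four equations of (3.10) with the same C:
C = A @ X @ B
assert np.allclose(B.T @ X @ A.T, C.T)
assert np.allclose(A @ R @ X @ R @ B, C)
assert np.allclose(B.T @ R @ X @ R @ A.T, C.T)
```

The same projection (Y + Y^T + R(Y + Y^T)R)/4 reappears later in the substitutions for the matrix nearness problem.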

The system of matrix equations (3.10) can be transformed into the system of linear equations (2.1) with coefficient matrix M and vector f given by

M = [ B^T ⊗ A
      A ⊗ B^T
      B^T R ⊗ AR
      AR ⊗ B^T R ],   f = [ vec(C)
                            vec(C^T)
                            vec(C)
                            vec(C^T) ].

Therefore, β_1 u_1 = f, α_1 v_1 = M^T u_1, β_{i+1} u_{i+1} = M v_i − α_i u_i (i = 1, 2, ...) and α_{i+1} v_{i+1} = M^T u_{i+1} − β_{i+1} v_i (i = 1, 2, ...) can be written as

β_1 u_1 = [ vec(C); vec(C^T); vec(C); vec(C^T) ],   (3.11)
α_1 v_1 = (B ⊗ A^T, A^T ⊗ B, RB ⊗ RA^T, RA^T ⊗ RB) u_1,   (3.12)
β_{i+1} u_{i+1} = [ B^T ⊗ A; A ⊗ B^T; B^T R ⊗ AR; AR ⊗ B^T R ] v_i − α_i u_i,   (3.13)
α_{i+1} v_{i+1} = (B ⊗ A^T, A^T ⊗ B, RB ⊗ RA^T, RA^T ⊗ RB) u_{i+1} − β_{i+1} v_i.   (3.14)

From (3.11)–(3.14), we have

u_i = [ vec(U_i); vec(U_i^T); vec(U_i); vec(U_i^T) ],  v_i = vec(V_i),

where U_i ∈ R^{m×p} and V_i is a symmetric R-symmetric matrix. And so the vector-form relations β_1 u_1 = f, α_1 v_1 = M^T u_1, β_{i+1} u_{i+1} = M v_i − α_i u_i (i = 1, 2, ...) and α_{i+1} v_{i+1} = M^T u_{i+1} − β_{i+1} v_i (i = 1, 2, ...) in Paige algorithm 1 can be rewritten in the matrix form

β_1 U_1 = C,  β_1 = 2 ‖C‖,
α_1 V_1 = A^T U_1 B^T + B U_1^T A + R(A^T U_1 B^T + B U_1^T A)R,  α_1 = ‖A^T U_1 B^T + B U_1^T A + R(A^T U_1 B^T + B U_1^T A)R‖,
β_{i+1} U_{i+1} = A V_i B − α_i U_i,  β_{i+1} = 2 ‖A V_i B − α_i U_i‖,
α_{i+1} V_{i+1} = A^T U_{i+1} B^T + B U_{i+1}^T A + R(A^T U_{i+1} B^T + B U_{i+1}^T A)R − β_{i+1} V_i,  α_{i+1} = ‖A^T U_{i+1} B^T + B U_{i+1}^T A + R(A^T U_{i+1} B^T + B U_{i+1}^T A)R − β_{i+1} V_i‖.

Analogously, θ_1 v_1 = M^T f, ρ_1 p_1 = M v_1, θ_{i+1} v_{i+1} = M^T p_i − ρ_i v_i (i = 1, 2, ...) and ρ_{i+1} p_{i+1} = M v_{i+1} − θ_{i+1} p_i (i = 1, 2, ...) can be written as

θ_1 v_1 = (B ⊗ A^T, A^T ⊗ B, RB ⊗ RA^T, RA^T ⊗ RB) [ vec(C); vec(C^T); vec(C); vec(C^T) ],   (3.15)
ρ_1 p_1 = [ B^T ⊗ A; A ⊗ B^T; B^T R ⊗ AR; AR ⊗ B^T R ] v_1,   (3.16)
θ_{i+1} v_{i+1} = (B ⊗ A^T, A^T ⊗ B, RB ⊗ RA^T, RA^T ⊗ RB) p_i − ρ_i v_i,   (3.17)
ρ_{i+1} p_{i+1} = [ B^T ⊗ A; A ⊗ B^T; B^T R ⊗ AR; AR ⊗ B^T R ] v_{i+1} − θ_{i+1} p_i.   (3.18)

From (3.15)–(3.18), we have

v_i = vec(V_i),  p_i = [ vec(P_i); vec(P_i^T); vec(P_i); vec(P_i^T) ],

where P_i ∈ R^{m×p} and V_i is a symmetric R-symmetric matrix. And so the vector-form relations θ_1 v_1 = M^T f, ρ_1 p_1 = M v_1, θ_{i+1} v_{i+1} = M^T p_i − ρ_i v_i (i = 1, 2, ...) and ρ_{i+1} p_{i+1} = M v_{i+1} − θ_{i+1} p_i (i = 1, 2, ...) in Paige algorithm 2 can be rewritten in the matrix form

θ_1 V_1 = A^T C B^T + B C^T A + R(A^T C B^T + B C^T A)R,  θ_1 = ‖A^T C B^T + B C^T A + R(A^T C B^T + B C^T A)R‖,
ρ_1 P_1 = A V_1 B,  ρ_1 = 2 ‖A V_1 B‖,
θ_{i+1} V_{i+1} = A^T P_i B^T + B P_i^T A + R(A^T P_i B^T + B P_i^T A)R − ρ_i V_i,  θ_{i+1} = ‖A^T P_i B^T + B P_i^T A + R(A^T P_i B^T + B P_i^T A)R − ρ_i V_i‖,
ρ_{i+1} P_{i+1} = A V_{i+1} B − θ_{i+1} P_i,  ρ_{i+1} = 2 ‖A V_{i+1} B − θ_{i+1} P_i‖.

In Case 3, note that X is an (R, S)-symmetric solution of the matrix equation AXB = C if and only if X is an (R, S)-symmetric solution of the system of matrix equations

{ AXB = C,
  ARXSB = C. }   (3.19)

The system of matrix equations (3.19) can be transformed into the system of linear equations (2.1) with coefficient matrix M and vector f given by

M = [ B^T ⊗ A
      B^T S ⊗ AR ],   f = [ vec(C)
                            vec(C) ].

Therefore, β_1 u_1 = f, α_1 v_1 = M^T u_1, β_{i+1} u_{i+1} = M v_i − α_i u_i (i = 1, 2, ...) and α_{i+1} v_{i+1} = M^T u_{i+1} − β_{i+1} v_i (i = 1, 2, ...) can be written as

β_1 u_1 = [ vec(C); vec(C) ],   (3.20)
α_1 v_1 = (B ⊗ A^T, SB ⊗ RA^T) u_1,   (3.21)
β_{i+1} u_{i+1} = [ B^T ⊗ A; B^T S ⊗ AR ] v_i − α_i u_i,   (3.22)
α_{i+1} v_{i+1} = (B ⊗ A^T, SB ⊗ RA^T) u_{i+1} − β_{i+1} v_i.   (3.23)

From (3.20)–(3.23), we have

u_i = [ vec(U_i); vec(U_i) ],  v_i = vec(V_i),

where U_i ∈ R^{m×p} and V_i is an (R, S)-symmetric matrix. And so the vector-form relations β_1 u_1 = f, α_1 v_1 = M^T u_1, β_{i+1} u_{i+1} = M v_i − α_i u_i (i = 1, 2, ...) and α_{i+1} v_{i+1} = M^T u_{i+1} − β_{i+1} v_i (i = 1, 2, ...) in Paige algorithm 1 can be rewritten in the matrix form

β_1 U_1 = C,  β_1 = √2 ‖C‖,
α_1 V_1 = A^T U_1 B^T + R A^T U_1 B^T S,  α_1 = ‖A^T U_1 B^T + R A^T U_1 B^T S‖,
β_{i+1} U_{i+1} = A V_i B − α_i U_i,  β_{i+1} = √2 ‖A V_i B − α_i U_i‖,
α_{i+1} V_{i+1} = A^T U_{i+1} B^T + R A^T U_{i+1} B^T S − β_{i+1} V_i,  α_{i+1} = ‖A^T U_{i+1} B^T + R A^T U_{i+1} B^T S − β_{i+1} V_i‖.
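Two facts make Case 3 work: the update A^T U B^T + R A^T U B^T S is (R, S)-symmetric whenever R and S are symmetric involutions, and A(RVS)B = AVB for every (R, S)-symmetric V, so the two equations in (3.19) collapse onto one product. A quick NumPy check (illustrative only; here X is taken square for simplicity, and the involutions are built as Householder reflections, one convenient choice rather than the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p = 4, 5, 3

def sym_involution(k, rng):
    # A Householder reflection is symmetric and is its own inverse.
    q = rng.standard_normal((k, 1)); q /= np.linalg.norm(q)
    return np.eye(k) - 2 * q @ q.T

R = sym_involution(n, rng)   # X is taken as n x n in this sketch
S = sym_involution(n, rng)
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
U = rng.standard_normal((m, p))

# The Bidiag 1 update V ~ A^T U B^T + R A^T U B^T S is (R, S)-symmetric:
V = A.T @ U @ B.T + R @ (A.T @ U @ B.T) @ S
assert np.allclose(R @ V @ S, V)

# For any (R, S)-symmetric V, both equations of (3.19) give the same product:
assert np.allclose(A @ R @ V @ S @ B, A @ V @ B)
```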

Analogously, the formulas θ_1 v_1 = M^T f, ρ_1 p_1 = M v_1, θ_{i+1} v_{i+1} = M^T p_i − ρ_i v_i (i = 1, 2, ...) and ρ_{i+1} p_{i+1} = M v_{i+1} − θ_{i+1} p_i (i = 1, 2, ...) can be written as

θ_1 v_1 = (B ⊗ A^T, SB ⊗ RA^T) [ vec(C); vec(C) ],   (3.24)
ρ_1 p_1 = [ B^T ⊗ A; B^T S ⊗ AR ] v_1,   (3.25)
θ_{i+1} v_{i+1} = (B ⊗ A^T, SB ⊗ RA^T) p_i − ρ_i v_i,   (3.26)
ρ_{i+1} p_{i+1} = [ B^T ⊗ A; B^T S ⊗ AR ] v_{i+1} − θ_{i+1} p_i.   (3.27)

From (3.24)–(3.27), we can obtain

v_i = vec(V_i),  p_i = [ vec(P_i); vec(P_i) ],

where P_i ∈ R^{m×p} and V_i is an (R, S)-symmetric matrix. And so the vector-form relations θ_1 v_1 = M^T f, ρ_1 p_1 = M v_1, θ_{i+1} v_{i+1} = M^T p_i − ρ_i v_i (i = 1, 2, ...) and ρ_{i+1} p_{i+1} = M v_{i+1} − θ_{i+1} p_i (i = 1, 2, ...) in Paige algorithm 2 can be rewritten in the matrix form

θ_1 V_1 = A^T C B^T + R A^T C B^T S,  θ_1 = ‖A^T C B^T + R A^T C B^T S‖,
ρ_1 P_1 = A V_1 B,  ρ_1 = √2 ‖A V_1 B‖,
θ_{i+1} V_{i+1} = A^T P_i B^T + R A^T P_i B^T S − ρ_i V_i,  θ_{i+1} = ‖A^T P_i B^T + R A^T P_i B^T S − ρ_i V_i‖,
ρ_{i+1} P_{i+1} = A V_{i+1} B − θ_{i+1} P_i,  ρ_{i+1} = √2 ‖A V_{i+1} B − θ_{i+1} P_i‖.

Analogous results can be obtained for the minimum residual problem (1.2). According to the above discussion, we can design two matrix-form iterative methods to compute the unique minimum Frobenius norm solution X* of the linear matrix equation (1.1) and of the minimum residual problem (1.2). Taking the case in which the unknown matrix X is constrained to be symmetric as an example, the matrix-form iterative methods may be listed as the following Paige1_M and Paige2_M.

Paige1_M
(1) τ_0 = −1; ξ_0 = −1; ω_0 = 0; Z_0 = 0; W_0 = 0; β_1 U_1 = C; β_1 = √2 ‖C‖; α_1 V_1 = A^T U_1 B^T + B U_1^T A; α_1 = ‖A^T U_1 B^T + B U_1^T A‖;
(2) For i = 1, 2, ... until {X_i} converges, do
 (a) ξ_i = −ξ_{i−1} β_i / α_i; Z_i = Z_{i−1} + ξ_i V_i;
 (b) ω_i = (τ_{i−1} − β_i ω_{i−1}) / α_i; W_i = W_{i−1} + ω_i V_i;
 (c) β_{i+1} U_{i+1} = A V_i B − α_i U_i; β_{i+1} = √2 ‖A V_i B − α_i U_i‖;
 (d) τ_i = −τ_{i−1} α_i / β_{i+1};
 (e) α_{i+1} V_{i+1} = A^T U_{i+1} B^T + B U_{i+1}^T A − β_{i+1} V_i; α_{i+1} = ‖A^T U_{i+1} B^T + B U_{i+1}^T A − β_{i+1} V_i‖;
 (f) γ_i = β_{i+1} ξ_i / (β_{i+1} ω_i − τ_i);
 (g) X_i = Z_i − γ_i W_i.
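Paige1_M transcribes directly into code. The following sketch is in NumPy rather than the paper's MATLAB; the tolerances, iteration cap and test sizes are choices of this note, and the sign conventions are the ones reconstructed above. At exact termination (β_{i+1} = 0) the iterate reduces to Z_i, which solves the system exactly:

```python
import numpy as np

def paige1_m(A, B, C, tol=1e-10, maxit=5000):
    """Sketch of Paige1_M (symmetric case): minimum Frobenius norm
    symmetric solution of the consistent equation A X B = C."""
    n = A.shape[1]
    fro = lambda T: np.linalg.norm(T, "fro")
    tau = xi = -1.0
    omega = 0.0
    Z = np.zeros((n, n)); W = np.zeros((n, n)); X = Z
    beta = np.sqrt(2.0) * fro(C)
    U = C / beta
    T = A.T @ U @ B.T + B @ U.T @ A
    alpha = fro(T); V = T / alpha
    for _ in range(maxit):
        xi = -xi * beta / alpha                    # (a)
        Z = Z + xi * V
        omega = (tau - beta * omega) / alpha       # (b)
        W = W + omega * V
        T = A @ V @ B - alpha * U                  # (c)
        beta = np.sqrt(2.0) * fro(T)
        if beta < 1e-14:                           # subspace exhausted: x = z
            return Z
        U = T / beta
        tau = -tau * alpha / beta                  # (d)
        T = A.T @ U @ B.T + B @ U.T @ A - beta * V # (e)
        alpha = fro(T)
        if alpha < 1e-14:
            return X
        V = T / alpha
        gamma = beta * xi / (beta * omega - tau)   # (f)
        X = Z - gamma * W                          # (g)
        if fro(C - A @ X @ B) <= tol:              # stopping rule from the paper
            return X
    return X

# usage: a small consistent problem with a symmetric solution
rng = np.random.default_rng(5)
m, n, p = 4, 5, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
Xs = rng.standard_normal((n, n)); Xs = Xs + Xs.T
C = A @ Xs @ B

X = paige1_m(A, B, C, tol=1e-8 * np.linalg.norm(C, "fro"))
assert np.allclose(X, X.T)
assert np.linalg.norm(A @ X @ B - C, "fro") <= 1e-6 * np.linalg.norm(C, "fro")
```

Since every V_i is symmetric by construction, the iterates Z_i, W_i and X_i stay symmetric automatically; no projection step is needed.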
Paige2_M
(1) θ_1 V_1 = A^T C B^T + B C^T A; θ_1 = ‖A^T C B^T + B C^T A‖; ρ_1 P_1 = A V_1 B; ρ_1 = √2 ‖A V_1 B‖; W_1 = V_1 / ρ_1; ξ_1 = θ_1 / ρ_1; X_1 = ξ_1 W_1;
(2) For i = 1, 2, ... until {X_i} converges, do
 (a) θ_{i+1} V_{i+1} = A^T P_i B^T + B P_i^T A − ρ_i V_i; θ_{i+1} = ‖A^T P_i B^T + B P_i^T A − ρ_i V_i‖;
 (b) ρ_{i+1} P_{i+1} = A V_{i+1} B − θ_{i+1} P_i; ρ_{i+1} = √2 ‖A V_{i+1} B − θ_{i+1} P_i‖;

 (c) W_{i+1} = (V_{i+1} − θ_{i+1} W_i) / ρ_{i+1};
 (d) ξ_{i+1} = −ξ_i θ_{i+1} / ρ_{i+1};
 (e) X_{i+1} = X_i + ξ_{i+1} W_{i+1}.

The stopping criteria for the algorithms Paige1_M and Paige2_M can be taken as ‖C − A X_i B‖ ≤ ε, |ξ_i| ≤ ε or ‖X_i − X_{i−1}‖ ≤ ε, where ε > 0 is a small tolerance, to compute the unique minimum Frobenius norm solution of the linear matrix equation (1.1), and as ‖A^T C B^T + B C^T A − A^T A X_i B B^T − B B^T X_i A^T A‖ ≤ ε or ‖X_i − X_{i−1}‖ ≤ ε to compute the unique minimum Frobenius norm solution of the minimum residual problem (1.2). Noting that the X_i obtained by Paige1_M and Paige2_M are symmetric matrices, we know that X_i is a symmetric solution of the matrix equation (1.1) when X_i is a solution of the system of matrix equations (3.1).

Now we consider the matrix nearness problem (1.3). We only discuss the case in which X ∈ S_E is a symmetric matrix. Note that, for an arbitrary matrix X* ∈ R^{n×n},

min_{X∈SR^{n×n}} ‖X − X*‖² = ‖(X* − X*^T)/2‖² + min_{X∈SR^{n×n}} ‖X − (X* + X*^T)/2‖².

Hence, finding the unique symmetric solution of the matrix nearness problem (1.3) is equivalent to first finding the minimum Frobenius norm symmetric solution X̃ of the matrix equation (1.1) or of the minimum residual problem (1.2) with C − A((X* + X*^T)/2)B in place of C. Once the minimum Frobenius norm symmetric solution X̃ has been obtained by Paige1_M or Paige2_M, the unique symmetric solution X̂ of the matrix nearness problem (1.3) is obtained; in this case, the solution X̂ can be expressed as X̂ = X̃ + (X* + X*^T)/2.

Analogously, if the unknown matrix X ∈ S_E of the matrix nearness problem (1.3) is a skew symmetric, symmetric R-symmetric, symmetric R-skew symmetric, (R, S)-symmetric or (R, S)-skew symmetric matrix, the matrix C in the linear matrix equation (1.1) or the minimum residual problem (1.2) is replaced, respectively, by

C − A((X* − X*^T)/2)B,
C − A((X* + X*^T + R(X* + X*^T)R)/4)B,
C − A((X* + X*^T − R(X* + X*^T)R)/4)B,
C − A((X* + R X* S)/2)B, or
C − A((X* − R X* S)/2)B.
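The nearness reduction can also be checked numerically. The sketch below (NumPy, illustrative sizes; the minimum-norm symmetric solution is obtained by a dense least-squares solve in place of Paige1_M/Paige2_M) computes X̂ = X̃ + (X* + X*^T)/2 and verifies that it is a symmetric solution of AXB = C at least as close to X* as another symmetric solution:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, p = 4, 5, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
Xs = rng.standard_normal((n, n)); Xs = Xs + Xs.T
C = A @ Xs @ B                       # consistent, with a symmetric solution
Xbar = rng.standard_normal((n, n))   # the given matrix X* of problem (1.3)

vec = lambda T: T.flatten(order="F")
H = (Xbar + Xbar.T) / 2              # symmetric part of X*

# Minimum-norm symmetric solution of  A Xtilde B = C - A H B
M = np.vstack([np.kron(B.T, A), np.kron(A, B.T)])
Ct = C - A @ H @ B
f = np.concatenate([vec(Ct), vec(Ct.T)])
x, *_ = np.linalg.lstsq(M, f, rcond=None)
Xtilde = x.reshape(n, n, order="F")

Xhat = Xtilde + H                    # solution of the nearness problem (1.3)
assert np.allclose(Xhat, Xhat.T)
assert np.allclose(A @ Xhat @ B, C)
# Xhat is at least as close to X* as the symmetric solution used to build C:
assert np.linalg.norm(Xhat - Xbar) <= np.linalg.norm(Xs - Xbar) + 1e-8
```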
The process that follows is the same as above, and is thus omitted here.

4. Numerical examples

In this section, we compare Paige1_M and Paige2_M numerically with the three methods proposed in [13,16,17], denoted, respectively, by Deng_M, Peng_M and Liao_M. All the tests were performed in MATLAB 7.1, and the initial iteration matrices in the methods Deng_M, Peng_M and Liao_M are chosen as zero matrices of suitable size. All of the following examples are used to illustrate the performance of the five methods in computing the minimum Frobenius norm symmetric solution X* of the matrix equation (1.1) and of the minimum residual problem (1.2). Let M = [B^T ⊗ A; A ⊗ B^T]. We use κ(M) to stand for the spectral condition number of M, that is, κ(M) = (σ_1 δ_1)/(σ_r δ_r), where σ_1 and σ_r are, respectively, the maximum and minimum nonzero singular values of A, and δ_1 and δ_r are, respectively, the maximum and minimum nonzero singular values of B.

Example 4.1. Choose arbitrary random matrices A = rand(5, 6) and B = rand(6, 7) (in MATLAB notation; the printed entries of A and B are not recoverable from this copy), with κ(M) ≈ 58. Let C = A*ones(6, 6)*B; then the matrix equation AXB = C is consistent, and hence has a unique minimum Frobenius norm solution. Let C = ones(5, 7); then the matrix equation AXB = C is inconsistent, and hence has a unique minimum Frobenius norm least squares solution. Fig. 1 illustrates the performance of the methods Deng_M, Peng_M, Liao_M, Paige1_M and Paige2_M in the case C = A*ones(6, 6)*B. Fig. 2 illustrates the performance of the methods Peng_M, Liao_M, Paige1_M and Paige2_M in the case C = ones(5, 7).

Example 4.2. Suppose that the matrices A, B and C are given as in Example 5.3 of [17]: A is a real n×n (n = 2l + 6) block-diagonal matrix whose first l blocks are 2×2 blocks of the form [a_i b_i; b_i a_i] and whose last two blocks are 3×3 blocks (their entries are not recoverable from this copy), and B is

a real n×n block-diagonal matrix with the 3×3 diagonal block

[ e d 0
  d e d
  0 d e ].

We consider the following two cases.

Case I: n = 96, a_i = i, b_i = a_i (1 ≤ i ≤ l), d = 2, e = 0, κ(M) = 305, C = A*ones(n, n)*B.
Case II: n = 96, a_i = 2, b_i = a_i (1 ≤ i ≤ l), d = 2, e = 2, κ(M) = 144, C = ones(n, n).

Fig. 1. Convergence curves of the function h(X_k) = log_10 ‖C − A X_k B‖.
Fig. 2. Convergence curves of the function h(X_k) = log_10 ‖A^T C B^T + B C^T A − A^T A X_k B B^T − B B^T X_k A^T A‖.
Fig. 3. Convergence curves of the function h(X_k) = log_10 ‖C − A X_k B‖.

Fig. 4. Convergence curves of the function h(X_k) = log_10 ‖A^T C B^T + B C^T A − A^T A X_k B B^T − B B^T X_k A^T A‖.

Obviously, the matrix equation AXB = C has a symmetric solution in Case I and has no symmetric solution in Case II. Fig. 3 illustrates the performance of the methods Deng_M, Peng_M, Liao_M, Paige1_M and Paige2_M in Case I. Fig. 4 illustrates the performance of the methods Peng_M, Liao_M, Paige1_M and Paige2_M in Case II.

The above two examples, and many other examples we have tested in MATLAB, confirm the following convergence behaviour of the five iterations. Deng_M is quite efficient for solving the linear matrix equation (1.1), but it is not suitable for solving the minimum residual problem (1.2). Liao_M is efficient when A and B are sparse matrices with small spectral condition numbers; when A and B are dense matrices with high spectral condition numbers, Liao_M has very low accuracy. Peng_M, in general, has higher accuracy but a very slow convergence rate. Paige1_M has a faster convergence rate and higher accuracy than the other methods for solving the linear matrix equation (1.1). Paige2_M has a faster convergence rate and higher accuracy than the other methods for solving the minimum residual problem (1.2).

References

[1] William F. Trench, Hermitian, Hermitian R-symmetric, and Hermitian R-skew symmetric Procrustes problems, Linear Algebra Appl. 387 (2004).
[2] Z.Y. Peng, X.Y. Hu, The reflexive and anti-reflexive solutions of the matrix equation AX = B, Linear Algebra Appl. 375 (2003).
[3] William F. Trench, Minimization problems for (R, S)-symmetric and (R, S)-skew symmetric matrices, Linear Algebra Appl. 389 (2004).
[4] H. Dai, On the symmetric solutions of linear matrix equations, Linear Algebra Appl. 131 (1990).
[5] H. Dai, On the symmetric solutions of linear matrix equations, Linear Algebra Appl. 39.
[6] K.E. Chu, Symmetric solutions of linear matrix equations by matrix decompositions, Linear Algebra Appl. 119 (1989).
[7] A.P. Liao, Y. Lei, Optimal approximate solution of the matrix equation AXB = C over symmetric matrices, J. Comput. Math. 25 (2007).
[8] A.P. Liao, Z.Z. Bai, Least-squares solution of AXB = D over symmetric positive semidefinite matrices X, J. Comput. Math. 21 (2003).
[9] A.P. Liao, Z.Z. Bai, Least-squares solutions of the matrix equation A^T XA = D in the bisymmetric matrix set, Math. Numer. Sinica 24.
[10] A.P. Liao, Z.Z. Bai, The constrained solutions of two matrix equations, Acta Math. Sin. (Engl. Ser.) 18 (2002).
[11] Z.Y. Peng, The centro-symmetric solutions of linear matrix equation AXB = C and its optimal approximation, J. Eng. Math. 6.
[12] Q.W. Wang, C.L. Yang, The re-nonnegative definite solutions to the matrix equation AXB = C, Comment. Math. Univ. Carolin. 39 (1998).
[13] Y.B. Deng, Z.Z. Bai, Y.H. Gao, Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations, Numer. Linear Algebra Appl. 13 (2006).
[14] X.Y. Peng, X.Y. Hu, L. Zhang, An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB = C, Appl. Math. Comput. 160 (2005).
[15] G.X. Huang, F. Yin, K. Guo, An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB = C, J. Comput. Appl. Math. 212 (2008).
[16] Z.Y. Peng, An iterative method for the least squares symmetric solution of the linear matrix equation AXB = C, Appl. Math. Comput. 170 (2005).
[17] Y. Lei, A.P. Liao, A minimal residual algorithm for the inconsistent matrix equation AXB = C over symmetric matrices, Appl. Math. Comput. 188 (2007).
[18] C.C. Paige, Bidiagonalization of matrices and solution of linear equations, SIAM J. Numer. Anal. 11 (1974).
[19] G.H. Golub, W. Kahan, Calculating the singular values and pseudo-inverse of a matrix, SIAM J. Numer. Anal. 2 (1965).

An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB =C

An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB =C Journal of Computational and Applied Mathematics 1 008) 31 44 www.elsevier.com/locate/cam An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation

More information

The reflexive and anti-reflexive solutions of the matrix equation A H XB =C

The reflexive and anti-reflexive solutions of the matrix equation A H XB =C Journal of Computational and Applied Mathematics 200 (2007) 749 760 www.elsevier.com/locate/cam The reflexive and anti-reflexive solutions of the matrix equation A H XB =C Xiang-yang Peng a,b,, Xi-yan

More information

The Lanczos and conjugate gradient algorithms

The Lanczos and conjugate gradient algorithms The Lanczos and conjugate gradient algorithms Gérard MEURANT October, 2008 1 The Lanczos algorithm 2 The Lanczos algorithm in finite precision 3 The nonsymmetric Lanczos algorithm 4 The Golub Kahan bidiagonalization

More information

On some properties of the Lyapunov equation for damped systems

On some properties of the Lyapunov equation for damped systems On some properties of the Lyapuno equation for damped systems Ninosla Truhar, Uniersity of Osijek, Department of Mathematics, 31 Osijek, Croatia 1 ntruhar@mathos.hr Krešimir Veselić Lehrgebiet Mathematische

More information

Total least squares. Gérard MEURANT. October, 2008

Total least squares. Gérard MEURANT. October, 2008 Total least squares Gérard MEURANT October, 2008 1 Introduction to total least squares 2 Approximation of the TLS secular equation 3 Numerical experiments Introduction to total least squares In least squares

More information

An Iterative Method for the Least-Squares Minimum-Norm Symmetric Solution

An Iterative Method for the Least-Squares Minimum-Norm Symmetric Solution Copyright 2011 Tech Science Press CMES, vol.77, no.3, pp.173-182, 2011 An Iterative Method for the Least-Squares Minimum-Norm Symmetric Solution Minghui Wang 1, Musheng Wei 2 and Shanrui Hu 1 Abstract:

More information

The Hermitian R-symmetric Solutions of the Matrix Equation AXA = B

The Hermitian R-symmetric Solutions of the Matrix Equation AXA = B International Journal of Algebra, Vol. 6, 0, no. 9, 903-9 The Hermitian R-symmetric Solutions of the Matrix Equation AXA = B Qingfeng Xiao Department of Basic Dongguan olytechnic Dongguan 53808, China

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

The Conjugate Gradient Method for Solving Linear Systems of Equations

The Conjugate Gradient Method for Solving Linear Systems of Equations The Conjugate Gradient Method for Solving Linear Systems of Equations Mike Rambo Mentor: Hans de Moor May 2016 Department of Mathematics, Saint Mary s College of California Contents 1 Introduction 2 2

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning

AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 23: GMRES and Other Krylov Subspace Methods; Preconditioning Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 18 Outline

More information

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES

AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES AN INVERSE EIGENVALUE PROBLEM AND AN ASSOCIATED APPROXIMATION PROBLEM FOR GENERALIZED K-CENTROHERMITIAN MATRICES ZHONGYUN LIU AND HEIKE FAßBENDER Abstract: A partially described inverse eigenvalue problem

More information

THE solution of the absolute value equation (AVE) of

THE solution of the absolute value equation (AVE) of The nonlinear HSS-like iterative method for absolute value equations Mu-Zheng Zhu Member, IAENG, and Ya-E Qi arxiv:1403.7013v4 [math.na] 2 Jan 2018 Abstract Salkuyeh proposed the Picard-HSS iteration method

More information

THE INVERSE PROBLEM OF CENTROSYMMETRIC MATRICES WITH A SUBMATRIX CONSTRAINT 1) 1. Introduction

THE INVERSE PROBLEM OF CENTROSYMMETRIC MATRICES WITH A SUBMATRIX CONSTRAINT 1) 1. Introduction Journal of Computational Mathematics, Vol22, No4, 2004, 535 544 THE INVERSE PROBLEM OF CENTROSYMMETRIC MATRICES WITH A SUBMATRIX CONSTRAINT 1 Zhen-yun Peng Department of Mathematics, Hunan University of

More information

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS

ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS ON THE GLOBAL KRYLOV SUBSPACE METHODS FOR SOLVING GENERAL COUPLED MATRIX EQUATIONS Fatemeh Panjeh Ali Beik and Davod Khojasteh Salkuyeh, Department of Mathematics, Vali-e-Asr University of Rafsanjan, Rafsanjan,

More information

The reflexive re-nonnegative definite solution to a quaternion matrix equation

The reflexive re-nonnegative definite solution to a quaternion matrix equation Electronic Journal of Linear Algebra Volume 17 Volume 17 28 Article 8 28 The reflexive re-nonnegative definite solution to a quaternion matrix equation Qing-Wen Wang wqw858@yahoo.com.cn Fei Zhang Follow

More information

Block Bidiagonal Decomposition and Least Squares Problems

Block Bidiagonal Decomposition and Least Squares Problems Block Bidiagonal Decomposition and Least Squares Problems Åke Björck Department of Mathematics Linköping University Perspectives in Numerical Analysis, Helsinki, May 27 29, 2008 Outline Bidiagonal Decomposition

More information

The Skew-Symmetric Ortho-Symmetric Solutions of the Matrix Equations A XA = D

The Skew-Symmetric Ortho-Symmetric Solutions of the Matrix Equations A XA = D International Journal of Algebra, Vol. 5, 2011, no. 30, 1489-1504 The Skew-Symmetric Ortho-Symmetric Solutions of the Matrix Equations A XA = D D. Krishnaswamy Department of Mathematics Annamalai University

More information

A MODIFIED HSS ITERATION METHOD FOR SOLVING THE COMPLEX LINEAR MATRIX EQUATION AXB = C *

A MODIFIED HSS ITERATION METHOD FOR SOLVING THE COMPLEX LINEAR MATRIX EQUATION AXB = C * Journal of Computational Mathematics Vol.34, No.4, 2016, 437 450. http://www.global-sci.org/jcm doi:10.4208/jcm.1601-m2015-0416 A MODIFIED HSS ITERATION METHOD FOR SOLVING THE COMPLEX LINEAR MATRIX EQUATION

More information

Research Article Constrained Solutions of a System of Matrix Equations
Journal of Applied Mathematics, Volume 2012, Article ID 471573, 19 pages, doi:10.1155/2012/471573. Qing-Wen Wang and Juan Yu.

Inverse Eigenvalue Problems and Their Associated Approximation Problems for Matrices with J-(Skew) Centrosymmetry
Zhong-Yun Liu, You-Cai Duan, Yun-Feng Lai, Yu-Lin Zhang. School of Math., Changsha University

The Solvability Conditions for the Inverse Eigenvalue Problem of Hermitian and Generalized Skew-Hamiltonian Matrices and Its Approximation
Zheng-jian Bai. Abstract: In this paper, we first consider the inverse

CHAPTER 6: VECTORS
6.1 Lines in Space: 6.1.1 Angle between Two Lines; 6.1.2 Intersection of Two Lines; 6.1.3 Shortest Distance from a Point to a Line. 6.2 Planes in Space: 6.2.1 Intersection of Two Planes; 6.2.2 Angle

The skew-symmetric orthogonal solutions of the matrix equation AX = B
Linear Algebra and its Applications 402 (2005) 303-318, www.elsevier.com/locate/laa. Chunjun Meng, Xiyan Hu, Lei Zhang, College of Mathematics

The semi-convergence of GSI method for singular saddle point problems
Bull. Math. Soc. Sci. Math. Roumanie, Tome 57(105), No. 1, 2014, 93-100. Shu-Xin Miao. Abstract: Recently, Miao and Wang considered the GSI method

A SMALLEST SINGULAR VALUE METHOD FOR SOLVING INVERSE EIGENVALUE PROBLEMS
S.F. Xu (Department of Mathematics, Peking University, Beijing). Journal of Computational Mathematics, Vol. 14, No. 1, 1996, 23-31.

Lanczos tridiagonalization and Golub-Kahan bidiagonalization: Ideas, connections and impact
Zdeněk Strakoš, Academy of Sciences and Charles University, Prague, http://www.cs.cas.cz/ strakos. Hong Kong, February

On matrix equations X ± A* X^{-2} A = I
Linear Algebra and its Applications 326 (2001) 27-44, www.elsevier.com/locate/laa. I.G. Ivanov, V.I. Hasanov, B.V. Minchev, Faculty of Mathematics and Informatics, Shoumen University,

Research Article On the Hermitian R-Conjugate Solution of a System of Matrix Equations
Applied Mathematics, Volume 2012, Article ID 398085, 14 pages, doi:10.1155/2012/398085. Chang-Zhou Dong and Qing-Wen Wang,

Chap 3. Linear Algebra
Outline: 1. Introduction; 2. Basis, Representation, and Orthonormalization; 3. Linear Algebraic Equations; 4. Similarity Transformation; 5. Diagonal Form and Jordan Form; 6. Functions

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a), Spring 2012
Instructions: The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems.

Econometrics II - EXAM, Outline Solutions
All questions have 25pts. Answer each question in separate sheets. 1. Consider the two linear simultaneous equations with two exogenous variables

POSITIVE DEFINITE SOLUTION OF THE MATRIX EQUATION X = Q + A^H (I ⊗ X - C)^{-δ} A
GUOZHU YAO, ANPING LIAO, AND XUEFENG DUAN. Abstract: We consider the nonlinear matrix equation X = Q + A^H (I ⊗ X - C)^{-δ} A (0 < δ ≤ 1), where

Differential Geometry of Surfaces
Jordan Smith and Carlo Séquin, CS Division, UC Berkeley. Introduction: These are notes on differential geometry of surfaces based on reading Greiner et al. (n.d.). Differential

Re-nnd solutions of the matrix equation AXB = C
Dragana S. Cvetković-Ilić. Abstract: In this article we consider Re-nnd solutions of the equation AXB = C with respect to X, where A, B, C are given matrices.

Linear Algebra, Massoud Malek (CSUEB)
Inner Product and Normed Space. In all that follows, the n × n identity matrix is denoted by I_n, the n × n zero matrix by Z_n, and the zero vector by θ_n. An inner product

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES
Volume 10 (2009), Issue 2, Article 41, 10 pp. HANYU LI, HU YANG, AND HUA SHAO, COLLEGE OF MATHEMATICS AND PHYSICS, CHONGQING UNIVERSITY

The Drazin inverses of products and differences of orthogonal projections
J. Math. Anal. Appl. 335 (2007) 64-71, www.elsevier.com/locate/jmaa. Chun Yuan Deng, School of Mathematics Science, South China Normal University,

The iterative methods for solving nonlinear matrix equation X + A* X^{-1} A + B* X^{-1} B = Q
Vaezzadeh et al., Advances in Difference Equations 2013, 2013:229. Open Access. Sarah Vaezzadeh, Seyyed

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH
V. FABER, J. LIESEN, AND P. TICHÝ. Abstract: Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

Introduction. Chapter One
The aim of this book is to describe and explain the beautiful mathematical relationships between matrices, moments, orthogonal polynomials, quadrature rules and the Lanczos and

A Geometric Review of Linear Algebra
The following is a compact review of the primary concepts of linear algebra. The order of presentation is unconventional, with emphasis on geometric intuition rather than

Research Article Eigenvector-Free Solutions to the Matrix Equation AXB^H = E with Two Special Constraints
Applied Mathematics, Volume 2013, Article ID 869705, 7 pages, http://dx.doi.org/10.1155/2013/869705. Yuyang Qiu

Numerical behavior of inexact linear solvers
Miro Rozložník, joint results with Zhong-zhi Bai and Pavel Jiránek. Institute of Computer Science, Czech Academy of Sciences, Prague, Czech Republic. The fourth

the method of steepest descent
MATH 3511, Spring 2018, http://www.phys.uconn.edu/ rozman/courses/m3511_18s/. Last modified: February 6, 2018. Abstract: The Steepest Descent is an iterative method for solving
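The excerpt above only states what steepest descent is; as a minimal Python sketch of the method for a symmetric positive definite system Ax = b (the test matrix, tolerance, and iteration cap below are my own illustrative choices, not from the cited notes):

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=500):
    """Minimize (1/2) x^T A x - b^T x for SPD A: step along the residual
    r = b - A x with the exact line-search step alpha = (r^T r)/(r^T A r)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        r = b - A @ x
        rr = r @ r
        if np.sqrt(rr) < tol:      # residual small enough: stop
            break
        alpha = rr / (r @ (A @ r))
        x = x + alpha * r
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD test matrix
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))  # converges since A is SPD
```

The convergence rate degrades as the condition number of A grows, which is what motivates the conjugate gradient method appearing elsewhere in this listing.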

MA 266 Review Topics - Exam # 2 (updated)
Spring. First Order Differential Equations: Separable, 1st Order Linear, Homogeneous, Exact. Second Order Linear Homogeneous Equations with Constant Coefficients. The differential

Contribution of Woźniakowski, Strakoš, ... The conjugate gradient method in finite precision computations
Wrocław University of Technology, Institute of Mathematics and Computer Science. Warsaw, October 7, 2006

Multiplicative Perturbation Bounds of the Group Inverse and Oblique Projection
Filomat 30 (2016). Published by Faculty of Sciences and Mathematics, University of Niš, Serbia. Available at: http://www.pmf.ni.ac.rs/filomat

ON WEIGHTED PARTIAL ORDERINGS ON THE SET OF RECTANGULAR COMPLEX MATRICES
HANYU LI, HU YANG, College of Mathematics and Physics, Chongqing University, Chongqing, 400030, P.R. China. E-mail: lihy.hy@gmail.com,

Unit 11: Vectors in the Plane
The term vector is used to indicate a quantity (such as force or velocity) that has both length and direction. For instance, suppose a particle moves

EECS 275 Matrix Computation
Ming-Hsuan Yang, Electrical Engineering and Computer Science, University of California at Merced, Merced, CA 95344, http://faculty.ucmerced.edu/mhyang. Lecture 20: Overview

SVD, PCA & Preprocessing
Chapter 1: SVD, PCA & Preprocessing. Part 2: Pre-processing and selecting the rank (Skillicorn, chapter 3.1). Why pre-process? Consider a matrix of weather data: monthly temperatures in degrees

AN ITERATIVE METHOD FOR THE GENERALIZED CENTRO-SYMMETRIC SOLUTION OF A LINEAR MATRIX EQUATION AXB + CYD = E
Acta Universitatis Apulensis, ISSN 1582-5329, pp. 335-346. Ying-chun LI and Zhi-hong LIU

Consider the following example of a linear system
LINEAR SYSTEMS. Consider the following example of a linear system: x1 + 2x2 + 3x3 = 5, x1 + x3 = 3, 3x1 + x2 + 3x3 = 3. Its unique solution is x1 = , x2 = 0, x3 = 2. In general we want to solve n equations
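Small square systems like the one sketched above are solved directly in practice; a minimal NumPy illustration with made-up coefficients (a 2×2 system chosen for this sketch, not the system from the excerpt):

```python
import numpy as np

# A small illustrative 2x2 system Ax = b (coefficients chosen arbitrarily).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)   # direct solve via LU factorization with pivoting
residual = A @ x - b        # numerically zero for a nonsingular system
```

For large sparse systems, direct factorization becomes expensive, which is where the iterative methods surveyed throughout this listing come in.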

Math 5630: Conjugate Gradient Method
Hung M. Phan, UMass Lowell, March 29, 2019. Throughout, A ∈ R^{n×n} is symmetric and positive definite, and b ∈ R^n. 1. Steepest Descent Method: We present the steepest descent
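The conjugate gradient method named in the excerpt can be sketched in a few lines of Python; this is an illustrative textbook-style implementation under the excerpt's assumptions (A symmetric positive definite), not code from the cited notes:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """CG for SPD A: each step minimizes the A-norm of the error over the
    growing Krylov subspace span{r0, A r0, A^2 r0, ...}."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # A-conjugate update of direction
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)   # exact in at most n = 2 steps here
```

In exact arithmetic CG terminates in at most n iterations; in floating point it is used as a true iterative method, stopped once the residual norm falls below a tolerance.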

Blow up of Solutions for a System of Nonlinear Higher-order Kirchhoff-type Equations
Mathematics and Statistics, 2014, http://www.hrpub.org. Erhan Pişkin, Dicle University, Department

Review of Matrices and Vectors
Definition of Vector: A collection of complex or real numbers, generally put in a column.

Improved Newton's method with exact line searches to solve quadratic matrix equation
Journal of Computational and Applied Mathematics 222 (2008) 645-654, www.elsevier.com/locate/cam. Jian-hui Long, Xi-yan

The Nullspace free eigenvalue problem and the inexact Shift and invert Lanczos method
V. Simoncini, Dipartimento di Matematica, Università di Bologna, and CIRSA, Ravenna, Italy. valeria@dm.unibo.it

Iterative Solution of a Matrix Riccati Equation Arising in Stochastic Control
Chun-Hua Guo. Dedicated to Peter Lancaster on the occasion of his 70th birthday. We consider iterative methods for finding the

A Geometric Review of Linear Algebra
The following is a compact review of the primary concepts of linear algebra. I assume the reader is familiar with basic (i.e., high school) algebra and trigonometry.

4.8 Arnoldi Iteration, Krylov Subspaces and GMRES
We start with the problem of using a similarity transformation to convert an n × n matrix A to upper Hessenberg form H, i.e., A = QHQ^T, with an appropriate
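The Arnoldi iteration referenced above builds the Hessenberg reduction incrementally; a minimal sketch (modified Gram-Schmidt variant, random test matrix of my own choosing, no breakdown handling):

```python
import numpy as np

def arnoldi(A, v0, k):
    """k steps of the Arnoldi iteration.

    Builds an orthonormal basis V (n x (k+1)) of the Krylov subspace
    span{v0, A v0, ..., A^k v0} and a (k+1) x k upper-Hessenberg matrix H
    satisfying A V[:, :k] = V H, assuming no breakdown occurs."""
    n = len(v0)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]    # assumes H[j+1, j] != 0
    return V, H

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
V, H = arnoldi(A, rng.standard_normal(6), 4)
```

GMRES uses exactly this relation A V_k = V_{k+1} H to reduce the residual minimization over the Krylov subspace to a small (k+1) × k least squares problem in H.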

Performance Comparison of Relaxation Methods with Singular and Nonsingular Preconditioners for Singular Saddle Point Problems
Applied Mathematical Sciences, Vol. 10, 2016, no. 30, 1477-1488. HIKARI Ltd, www.m-hikari.com, http://dx.doi.org/10.12988/ams.2016.6269

Vectors
The word vector comes from the Latin word vectus, which means carried. It is best to think of a vector as the displacement from an initial point P to a terminal point Q. Such a vector is expressed as

Math 504, Homework 5: Computation of eigenvalues and singular values
Recall that your solutions to these questions will not be collected or evaluated. 1. Find the eigenvalues and the associated eigenspaces

Numerical Methods in Matrix Computations
Åke Björck, Springer. Contents: 1 Direct Methods for Linear Systems; 1.1 Elements of Matrix Theory; 1.1.1 Matrix Algebra; 1.1.2 Vector Spaces; 1.1.3 Submatrices

General Lorentz Boost Transformations, Acting on Some Important Physical Quantities
We are interested in transforming measurements made in a reference frame O into measurements of the same quantities as

Iterative Methods for Solving A x = b
A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)
Lecture 19: Computing the SVD; Sparse Linear Systems. Xiangmin Jiao, Stony Brook University

The Full-rank Linear Least Squares Problem
Jim Lambers, Lecture 3 Notes. Given an m × n matrix A, with m ≥ n, and an m-vector b, we consider the overdetermined system of equations Ax
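For the full-rank overdetermined problem described above, the standard numerically stable approach is the thin QR factorization; a minimal sketch with illustrative data of my own (four points on a near-line), not taken from the cited notes:

```python
import numpy as np

# Overdetermined Ax ~ b with m = 4 > n = 2 and full column rank:
# fit a line y = c0 + c1 * t to four data points.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([1.0, 2.0, 2.0, 4.0])

# Thin QR factorization A = QR; the least squares solution solves R x = Q^T b.
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)

# Cross-check against the library least squares solver.
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
```

Unlike forming the normal equations A^T A x = A^T b, the QR route avoids squaring the condition number of A.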

On Some Distance-Based Indices of Trees With a Given Matching Number
Applied Mathematical Sciences, Vol. 8, 2014, no. 22, 6093-6102. HIKARI Ltd, www.m-hikari.com. Shu

REORTHOGONALIZATION FOR GOLUB KAHAN LANCZOS BIDIAGONAL REDUCTION: PART II SINGULAR VECTORS
JESSE L. BARLOW, Department of Computer Science and Engineering, The Pennsylvania State University, University

Moments, Model Reduction and Nonlinearity in Solving Linear Algebraic Problems
Zdeněk Strakoš, Charles University, Prague, http://www.karlin.mff.cuni.cz/ strakos. 16th ILAS Meeting, Pisa, June 2010.

Diagonal and Monomial Solutions of the Matrix Equation AXB = C
Iranian Journal of Mathematical Sciences and Informatics, Vol. 9, No. 1 (2014), pp. 31-42. Massoud Aman, Department of Mathematics, Faculty of

DELFT UNIVERSITY OF TECHNOLOGY
REPORT 16-02: The Induced Dimension Reduction method applied to convection-diffusion-reaction problems. R. Astudillo and M. B. van Gijzen. ISSN 1389-6520, Reports of the Delft

Algorithms and Data Structures 2014, Exercises and Solutions, Week 14
Linear programming. Consider the following linear program. Plot the feasible region and identify the optimal solution.

Lecture Note 7: Iterative methods for solving linear systems
Xiaoqun Zhang, Shanghai Jiao Tong University. Last updated: December 24, 2014. 1.1 Review on linear algebra: norms of vectors and matrices

Positivity of Partitioned Hermitian Matrices with Unitarily Invariant Norms
arXiv:1407.0331v3 [math.RA], 22 Aug 2014. Chi-Kwong Li and Fuzhen Zhang, Department of Mathematics, College of William

STRONG RANK REVEALING CHOLESKY FACTORIZATION
M. GU AND L. MIRANIAN. Electronic Transactions on Numerical Analysis, Volume 17, 2004. ISSN 1068-9613. etna.mcs.kent.edu

On the solvability of an equation involving the Smarandache function and Euler function
Scientia Magna (2008), 29-33. Weiguo Duan and Yanrong Xue, Department of Mathematics, Northwest University,

The Symmetric and Antipersymmetric Solutions of the Matrix Equation A_1 X_1 B_1 + A_2 X_2 B_2 + ... + A_l X_l B_l = C and Its Optimal Approximation
Ying Zhang, Member IAENG. Abstract: A matrix A = (a_ij) ∈ R^{n×n} is said to be symmetric

A Method for Constructing Diagonally Dominant Preconditioners based on Jacobi Rotations
Jin Yun Yuan and Plamen Y. Yalamov. Abstract: A method is presented to make a given matrix strictly diagonally dominant

Lecture 2: Computing functions of dense matrices
Paola Boito and Federico Poloni, Università di Pisa. Pisa - Hokkaido - Roma2 Summer School, Pisa, August 27 - September 8, 2018. Introduction: In this lecture

ON THE GENERALIZED DETERIORATED POSITIVE SEMI-DEFINITE AND SKEW-HERMITIAN SPLITTING PRECONDITIONER
Journal of Computational Mathematics, http://www.global-sci.org/jcm. Davod

The symmetric minimal rank solution of the matrix equation AX=B and the optimal approximation
Electronic Journal of Linear Algebra, Volume 18 (2009), Article 23. Qing-feng Xiao, qfxiao@hnu.cn

The perturbed Riemann problem for the chromatography system of Langmuir isotherm with one inert component
Available online at www.tjnsa.com. J. Nonlinear Sci. Appl. 9 (2016), 5382-5397. Research Article. Pengpeng

ECE 275A Homework #3 Solutions
1. Proof of (a). Obviously Ax = 0 implies ⟨y, Ax⟩ = 0 for all y. To show sufficiency, note that if ⟨y, Ax⟩ = 0 for all y, then it must certainly be true for the particular value of y =

Holomorphy of the 9th Symmetric Power L-Functions for Re(s) > 1
Henry H. Kim and Freydoon Shahidi. IMRN International Mathematics Research Notices, Volume 2006, Article ID 59326, Pages 1-7. We prove the holomorphy

Sample ECE275A Midterm Exam Questions
The questions given below are actual problems taken from exams given in the past few years. Solutions to these problems will NOT be provided. These problems and

MS&E 318 (CME 338) Large-Scale Numerical Optimization
Stanford University, Management Science & Engineering (and ICME). Course description. Instructor: Michael Saunders, Spring 2018. Notes: Review. The course teaches

On the convergence properties of the modified Polak Ribiére Polyak method with the standard Armijo line search
ANZIAM J. 55 (E), pp. E79-E89, 2014. Lijun Li and Weijun Zhou. (Received 21 May 2013; revised

On the perturbation of an L2-orthogonal projection
Xuefeng Xu, arXiv [math.NA], 1 Sep 2018. Abstract: The L2-orthogonal projection is an important mathematical tool in scientific computing

A note on the unique solution of linear complementarity problem
COMPUTATIONAL SCIENCE SHORT COMMUNICATION. Cui-Xia Li and Shi-Liang Wu. Received: 13 June 2016; Accepted: 14 November 2016; First Published:

9.1 Preconditioned Krylov Subspace Methods
Chapter 9: PRECONDITIONING. 9.1 Preconditioned Krylov Subspace Methods; 9.2 Preconditioned Conjugate Gradient; 9.3 Preconditioned Generalized Minimal Residual; 9.4 Relaxation Method Preconditioners; 9.5 Incomplete

Summary of Iterative Methods for Non-symmetric Linear Equations That Are Related to the Conjugate Gradient (CG) Method
Leslie Foster, 11-5-2012. We will discuss the FOM (full orthogonalization method), CG,

Computational Linear Algebra
PD Dr. rer. nat. habil. Ralf-Peter Mundani, Computation in Engineering / BGU, Scientific Computing in Computer Science / INF. Winter Term 2018/19, Part 4: Iterative Methods

An inexact parallel splitting augmented Lagrangian method for large system of linear equations
Zheng Peng and DongHua Wu, Mathematics Department of Nanjing University, Nanjing, P.R. China

NOTES ON THE REGULAR E-OPTIMAL SPRING BALANCE WEIGHING DESIGNS WITH CORRELATED ERRORS
REVSTAT Statistical Journal, Volume 13, Number 2, June 2015. Authors: Bronisław Ceranka, Department of Mathematical

UNIFYING LEAST SQUARES, TOTAL LEAST SQUARES AND DATA LEAST SQUARES
Christopher C. Paige, School of Computer Science, McGill University, Montreal, Quebec, Canada, H3A 2A7, paige@cs.mcgill.ca. Zdeněk Strakoš

THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES
Electronic Journal of Linear Algebra, Volume 22, pp. 480-489, May 2011. XUZHOU CHEN AND JUN JI. Abstract: In this paper, we study the Moore-Penrose inverse

LIOUVILLE THEOREM AND GRADIENT ESTIMATES FOR NONLINEAR ELLIPTIC EQUATIONS ON RIEMANNIAN MANIFOLDS
Electronic Journal of Differential Equations, Vol. 2017 (2017), No. 58, pp. 1-11. ISSN 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu