Fast algorithms with preprocessing for matrix-vector multiplication problems

I. Gohberg and V. Olshevsky
School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Ramat Aviv, Israel

Journal of Complexity (1994)

Abstract. In this paper the problem of the complexity of multiplying a matrix by a vector is studied for Toeplitz, Hankel, Vandermonde and Cauchy matrices, and for matrices connected with them (i.e. for their transposes, inverses and transposed inverses). The proposed algorithms have complexities of at most O(n log^2 n) ops and in a number of cases improve the known estimates. In these algorithms a separate preprocessing phase singles out all the operations on the given matrix itself, which are aimed at reducing the complexity of the second phase of the computation, the one directly connected with multiplication by an arbitrary vector. Incidentally, effective algorithms for computing the Vandermonde determinant and the determinant of a Cauchy matrix are given.
0. Introduction

Let the matrix A ∈ C^{n×n} be given by all its n^2 entries. The problem is to compute the product Ab of the matrix A by an input vector b ∈ C^n. Using the standard rule of matrix-times-vector multiplication, the above problem can be solved in 2n^2 − n ops (i.e. floating-point operations of addition, subtraction, multiplication and division). In the following three simple examples the special structure of a matrix enables faster computation of the product by a vector.

Examples. 1) Band matrices. Let A = (a_{ij})_{i,j=0}^{n−1} be a band matrix with band width 2ν − 1 (i.e., by definition, a_{ij} = 0 whenever |i − j| > ν − 1). Obviously, in this case computing the product Ab costs at most (4ν − 3)n ops.

2) Small-rank matrices. Let a matrix A ∈ C^{n×n} with rank ρ be given in the form of an outer sum of ρ terms:

    A = Σ_{m=1}^{ρ} h_m g_m^T   (h_m, g_m ∈ C^n).   (0.1)

The representation (0.1) shows that A can be multiplied by an arbitrary vector in (4ρ − 1)n − ρ ops.

3) Semiseparable matrices. Such a matrix A = (a_{ij})_{i,j=0}^{n−1} is defined by the equalities

    a_{ij} = Σ_{m=1}^{ρ} f_{m,i} g_{m,j}  for i < j,   a_{ij} = 0  for i ≥ j,

where f_m = (f_{m,i}), g_m = (g_{m,i}) (m = 1, ..., ρ) are given vectors from C^n. In this case

    A = Σ_{m=1}^{ρ} diag(f_m) U diag(g_m),   (0.2)

where U stands for the strictly upper triangular matrix of ones and diag(f) denotes the diagonal matrix whose entries on the main diagonal equal the coordinates of the vector f ∈ C^n. The product of a semiseparable matrix by an arbitrary vector can be computed using (0.2) in (4ρ − 1)n − ρ ops. Obviously, the analogous estimate holds for the transposes of matrices of the form (0.2).

Let a matrix A ∈ C^{n×n} be given. The task is to compute the products Ab_0, Ab_1, ... of the matrix A by input vectors b_0, b_1, ... ∈ C^n in the smallest possible time. Such a situation arises naturally in a number of computational problems, for example when we have to carry out iterations with a given matrix. The problem of computing the product of a matrix
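As a concrete illustration (our sketch in Python, not code from the paper), a matrix given as an outer sum of ρ rank-one terms can be applied to a vector in O(ρn) ops:

```python
def lowrank_matvec(hs, gs, b):
    """Multiply A = sum_m h_m g_m^T by b in O(rho * n) ops.

    hs, gs: lists of rho length-n vectors; b: length-n vector.
    """
    result = [0.0] * len(b)
    for h, g in zip(hs, gs):
        s = sum(gj * bj for gj, bj in zip(g, b))   # inner product g_m^T b
        for i, hi in enumerate(h):
            result[i] += s * hi                    # accumulate s * h_m
    return result
```

For ρ much smaller than n this clearly beats the 2n^2 − n ops of the dense rule.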
by a matrix can also be solved in the above framework; in the latter case the columns of the second matrix are interpreted as input vectors. In accordance with the accepted scheme, we single out all the computations which do not depend upon the input vectors b_0, b_1, .... Accordingly, the proposed algorithms will be divided into two separate phases:

I. Preprocessing for the matrix A ∈ C^{n×n};
II. Application to the vector.

The first phase thus contains the preparation of the given matrix, which enables the second phase to be accomplished more effectively. In the present paper we embody this scheme and propose a number of fast algorithms for matrices with a certain structure, namely for the transposed Vandermonde matrix, for the transposed inverse of a Vandermonde matrix, for Cauchy matrices and for matrices connected with them (i.e. for their transposes, inverses and transposed inverses).

1. Background

1.1. Basic algorithms. In this paper a limited number of well-known algorithms is used intensively. These algorithms are listed below, accompanied by estimates of their complexities. Particular implementations of these basic algorithms and their complexity analysis can be found in various sources (see, for example, Aho, Hopcroft and Ullman (1974)).

(BA1) Evaluation algorithm. An algorithm for evaluating a polynomial of degree n − 1 at n points. The complexity of this algorithm will be denoted by ε(n). It is well known that ε(n) = O(n log^2 n).

(BA2) Interpolation algorithm. An algorithm for interpolating a polynomial of degree n − 1 from its values at n points. The complexity of the interpolation algorithm is denoted by ι(n) and, as is well known, ι(n) = O(n log^2 n).

(BA3) Fast Fourier Transform algorithm. The discrete Fourier transform of a vector r = (r_i) ∈ C^n is by definition the vector (Σ_{k=0}^{n−1} r_k ω^{ik})_{i=0}^{n−1} ∈ C^n, where ω
is the primitive n-th root of unity. The discrete Fourier transform can be computed by well-known methods, collectively named the Fast Fourier Transform (FFT). It is well known that for the complexity φ(n) of computing one FFT of order n the following estimate holds: φ(n) = O(n log n).
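A minimal radix-2 FFT sketch (our illustration of the transform just defined; the paper's ω-convention may differ by a sign in the exponent):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT of x (len(x) a power of two), O(n log n).

    Returns the vector (sum_k x_k w^(i*k)) for i = 0..n-1, with
    w = exp(-2*pi*1j/n).
    """
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t            # butterfly: top half
        out[k + n // 2] = even[k] - t   # butterfly: bottom half
    return out
```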
1.2. Basic matrices. In this subsection some matrices which are known to admit a lower than O(n^2) ops complexity of multiplication by a vector are considered. Note that the methods for fast computation of the products of these basic matrices by vectors are based on the algorithms (BA1)–(BA3).

(BM1) Vandermonde matrix. By definition, the Vandermonde matrix V(t) is the matrix of the form

    V(t) = ( t_i^j )_{i,j=0}^{n−1} =
    [ 1  t_0      t_0^2      ...  t_0^{n−1}     ]
    [ 1  t_1      t_1^2      ...  t_1^{n−1}     ]
    [ ...                                       ]
    [ 1  t_{n−1}  t_{n−1}^2  ...  t_{n−1}^{n−1} ],   with t = (t_i)_{i=0}^{n−1} ∈ C^n.

Obviously, the problem of computing the product V(t)b of the matrix V(t) by a vector b = (b_i) is equivalent to the problem of evaluating the polynomial p(λ) = Σ_{i=0}^{n−1} b_i λ^i at the n points t_0, t_1, ..., t_{n−1}. Thus, the product V(t)b can be computed using the algorithm (BA1) in ε(n) ops.

(BM2) Inverse of a Vandermonde matrix. Consider the problem of applying to a vector the matrix A ∈ C^{n×n} defined as the inverse of a given Vandermonde matrix V(t) (t ∈ C^n). It is easy to see that the product Ab can be computed using the interpolation algorithm (BA2). Thus, for the inverse of a Vandermonde matrix, application to a vector costs ι(n) ops.

(BM3) Fourier matrix and its inverse. Let ω = e^{2πi/n} be the primitive n-th root of unity. Consider the Fourier matrix F = (1/√n) V(w) with w = (ω^i)_{i=0}^{n−1}, and its inverse F^{-1} = F^*, where the superscript * means conjugate transpose. The products Fb and F^*b can be computed using the algorithm (BA3) in φ(n) ops.

(BM4) Factor circulant. By Circ_φ(r) will be denoted the φ-circulant with first column r = (r_i), i.e. the matrix of the form

    Circ_φ(r) =
    [ r_0      φr_{n−1}  ...  φr_1 ]
    [ r_1      r_0       ...  φr_2 ]
    [ ...                          ]
    [ r_{n−1}  r_{n−2}   ...  r_0  ].

The matrix Circ_1(r) is referred to simply as a circulant. It is known (see Cline, Plemmons and Worm (1974)) that the matrix Circ_φ(r) admits the following decomposition:

    Circ_φ(r) = D_ψ^{-1} F^* Λ_φ(r) F D_ψ,   (1.1)

where F is the Fourier matrix, Λ_φ(r) = diag(F D_ψ r) and D_ψ = diag((ψ^i)_{i=0}^{n−1}) with ψ ∈ C satisfying the condition ψ^n = φ. From (1.1) it follows that if the coordinates of
the vector (ψ^i)_{i=0}^{n−1} ∈ C^n are given, then the product of the matrix Circ_φ(r) by an arbitrary vector can be computed in 2φ(n) + O(n) ops. In this case the preprocessing phase consists of the computation of the central factor Λ_φ(r) on the right-hand side of (1.1) and costs φ(n) + O(n) ops.

(BM5) Toeplitz matrices. The product of a Toeplitz matrix A = (a_{i−j})_{i,j=0}^{n−1} by a vector can be computed by embedding the matrix A in a circulant of double size. Correspondingly, the amount of operations is 2φ(2n) + O(n) = 4φ(n) + O(n) ops after a preprocessing of φ(2n) + O(n) = 2φ(n) + O(n) ops.

(BM6) Inverses of Toeplitz matrices. In computations with the inverses of Toeplitz matrices the Gohberg-Semencul formula (see Gohberg and Semencul (1972) or Gohberg and Feldman (1974)) is useful. This formula represents the inverse of a Toeplitz matrix as a sum of products of triangular Toeplitz matrices. Namely, if for the Toeplitz matrix T the equations

    T x = e_0,   T y = e_{n−1}   (1.2)

with e_0 = (1, 0, ..., 0)^T, e_{n−1} = (0, ..., 0, 1)^T have solutions x = (x_i), y = (y_i) and x_0 ≠ 0, then T is invertible and

    T^{-1} = (1/x_0) ( L(x_0, x_1, ..., x_{n−1}) U(y_{n−1}, y_{n−2}, ..., y_0)
                       − L(0, y_0, ..., y_{n−2}) U(0, x_{n−1}, ..., x_1) ),   (1.3)

where L(u) denotes the lower triangular Toeplitz matrix with first column u and U(v) the upper triangular Toeplitz matrix with first row v.
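The circulant embedding used above for Toeplitz matrices can be sketched as follows (our illustration, not code from the paper; the circular convolution is written out directly for clarity, and would be carried out with FFTs to reach the stated O(n log n) cost):

```python
def toeplitz_matvec(c, r, b):
    """Compute T b for the Toeplitz matrix with first column c and
    first row r (c[0] == r[0]), by embedding T in a circulant of order 2n.

    The 2n-circulant whose first column is `col` contains T as its leading
    n x n block; multiplying it by (b, 0) and keeping the first n entries
    gives T b.
    """
    n = len(b)
    col = list(c) + [0] + list(reversed(r[1:]))   # first column of the circulant
    x = list(b) + [0] * n                         # zero-padded input vector
    m = 2 * n
    # circular convolution of col and x; with an FFT this costs O(n log n)
    y = [sum(col[(i - j) % m] * x[j] for j in range(m)) for i in range(m)]
    return y[:n]
```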
On the basis of formula (1.3) a number of fast algorithms for the inversion of Toeplitz matrices were elaborated (see Brent, Gustavson and Yun (1980), de Hoog (1987), Chun and Kailath (1991), where methods for solving equations of the form (1.2) are suggested). Denote by τ(n) the complexity of solving one equation of the form (1.2). According to Brent, Gustavson and Yun (1980), de Hoog (1987), Chun and Kailath (1991),

    τ(n) = O(n log^2 n).

Thus, if the matrix A ∈ C^{n×n} is the inverse of a given Toeplitz matrix, then from the above arguments it follows that after 2τ(n) + 8φ(n) ops of preprocessing the product of A by an arbitrary vector can be computed by (1.3) in 16φ(n) + O(n) ops. Below we show how this complexity can be reduced.

If for a Toeplitz matrix T ∈ C^{n×n} the equations (1.2) have solutions x = (x_i), y = (y_i) and x_0 ≠ 0, then

    T^{-1} = (1/(x_0 (φ − ψ))) ( Circ_ψ(x) Circ_φ(Z_φ y) − Circ_ψ(Z_ψ y) Circ_φ(x) ),   (1.4)

where the numbers φ and ψ (≠ φ) are arbitrary and

    Z_φ = [ 0 0 ... 0 φ ; 1 0 ... 0 0 ; ... ; 0 0 ... 1 0 ]   (φ ≠ 0)

is the φ-cyclic lower shift matrix. Formula (1.4) was obtained in Ammar and Gader (1990) for positive definite Toeplitz matrices and φ = 1, ψ = −1, and was extended to the general case in Gohberg and Olshevsky (1992). Note that in the latter paper one can also find other factor-circulant decompositions for the inverses of Toeplitz matrices which are useful in fast computations with Toeplitz matrices. Furthermore, representing each factor circulant on the right-hand side of (1.4) in the form (1.1), we finally get

    T^{-1} = (1/(x_0 (φ − ψ))) D_β^{-1} F^* ( Λ_ψ(x) F D_β D_α^{-1} F^* Λ_φ(Z_φ y)
              − Λ_ψ(Z_ψ y) F D_β D_α^{-1} F^* Λ_φ(x) ) F D_α,   (1.5)

where F is the Fourier matrix and the diagonal matrices D_α, D_β (with α^n = φ, β^n = ψ) and Λ_φ(r) (r ∈ C^n) are defined as in (BM4). From (1.5) it follows that after

    2τ(n) + 4φ(n) + O(n)   (1.6)

ops of preprocessing, the product of the inverse of a Toeplitz matrix by an arbitrary vector can be computed in

    6φ(n) + O(n)   (1.7)
ops. The last estimate is obtained here without the additional requirement that the matrix be Hermitian; in the Hermitian case this result was obtained in the earlier paper Ammar and Gader (1990). Note that (1.6) and (1.7) imply that the product of the inverse of a Toeplitz matrix by an arbitrary matrix can be computed in 6φ(n)·n + O(n^2) ops.

(BM7) Hankel matrices. Let us recall that an arbitrary Hankel matrix H = (a_{i+j})_{i,j=0}^{n−1} can be transformed into a Toeplitz matrix by multiplication from the right by the reverse identity matrix

    J = [ 0 ... 0 1 ; ... ; 1 0 ... 0 ].

Therefore the complexity of multiplication by a vector for the class of Hankel matrices coincides with the corresponding complexity for the class of Toeplitz matrices.

2. Transposes of Vandermonde and of inverse Vandermonde matrices

2.1. Relations between Vandermonde matrices and Bezoutians. Let n be the maximal of the degrees of two polynomials f(λ) and g(λ); then the bilinear form

    B_{f,g}(λ, μ) = (f(λ) g(μ) − f(μ) g(λ)) / (λ − μ) = Σ_{i,j=0}^{n−1} b_{ij} λ^i μ^j

is called the Bezoutian of f(λ) and g(λ). The matrix B_{f,g} = (b_{ij})_{i,j=0}^{n−1}, whose entries are determined by the coefficients of the bilinear form B_{f,g}(λ, μ), will be referred to as the Bezout matrix corresponding to the polynomials f(λ) and g(λ). Obviously,

    B_{f,g}(λ, μ) = B_{f,g}(μ, λ).   (2.1)

The equality (2.1) yields the following useful property of the Bezout matrix:

    V(s) B_{f,g} V(t)^T = ( B_{f,g}(s_i, t_j) )_{i,j=0}^{n−1},   (2.2)
where s = (s_i), t = (t_i) ∈ C^n. From (2.2) and the obvious equality

    B_{f,g}(λ, λ) = f'(λ) g(λ) − f(λ) g'(λ)

it follows that if t_0, t_1, ..., t_{n−1} are n simple roots of the polynomial f(λ), then

    V(t) B_{f,g} V(t)^T = diag( (f'(t_i) g(t_i))_{i=0}^{n−1} ).   (2.3)

The latter formula appeared in Lander (1974) and can also be found in Heinig and Rost (1984). On the basis of (2.3), in the next subsection a fast algorithm for the multiplication of a transposed Vandermonde matrix by a vector is presented.

2.2. Transpose of a Vandermonde matrix and of its inverse. For a vector t = (t_i) (t_i ≠ t_j whenever i ≠ j) set

    f(λ) = Π_{i=0}^{n−1} (λ − t_i) = λ^n + Σ_{i=0}^{n−1} r_i λ^i   and   g(λ) = φ − λ^n,   (2.4)

where φ is such that φ ≠ t_i^n (i = 0, ..., n−1). Under these conditions the matrix on the right-hand side of (2.3) is invertible. Hence all the matrices on the left-hand side of (2.3) are also invertible, and moreover

    V(t)^T = B_{f,g}^{-1} V(t)^{-1} diag( (f'(t_i) g(t_i))_{i=0}^{n−1} ).   (2.5)

Furthermore, B_{f,g} is the Bezout matrix of two polynomials, one of which has the special form g(λ) = φ − λ^n. This fact allows one to obtain for B_{f,g} and for its inverse a representation involving a factor circulant. To prove this, let us compute the Bezoutian B_{f,g}(λ, μ) of the polynomials f(λ) and g(λ) defined by (2.4):

    B_{f,g}(λ, μ) = (f(λ)(φ − μ^n) − f(μ)(φ − λ^n)) / (λ − μ)
                  = φ (f(λ) − f(μ)) / (λ − μ) + (λ^n f(μ) − μ^n f(λ)) / (λ − μ)
                  = φ Σ_{i=1}^{n} r_i (λ^{i−1} + λ^{i−2} μ + ... + μ^{i−1})
                    + Σ_{i=0}^{n−1} r_i (λ^{n−1} μ^i + λ^{n−2} μ^{i+1} + ... + λ^i μ^{n−1}),

with r_n = 1. From the last identity it is easily seen that the corresponding Bezout matrix B_{f,g} has the special form

    B_{f,g} =
    [ φr_1     φr_2  ...  φr_{n−1}  r_0 + φ ]
    [ φr_2     ...   φr_{n−1}  r_0 + φ  r_1 ]
    [ ...                                   ]
    [ r_0 + φ  r_1   ...            r_{n−1} ]
    = Circ_φ(r + φe_0) J,   (2.6)

where J is, as above, the reverse identity matrix. Substituting (2.6) in (2.5), we have

    V(t)^T = J Circ_φ(r + φe_0)^{-1} V(t)^{-1} diag( (f'(t_i) g(t_i))_{i=0}^{n−1} ).   (2.7)
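The factorization of the Bezout matrix of f and g(λ) = φ − λ^n as a factor circulant times the reverse identity, derived above, can be checked numerically. The sketch below (our illustration) compares the bilinear form B_{f,g}(x, y) with the quadratic form of the matrix Circ_φ(r + φe_0) J at sample points:

```python
def bezoutian_value(f, g, x, y):
    """B_{f,g}(x,y) = (f(x) g(y) - f(y) g(x)) / (x - y); f, g are
    coefficient lists, lowest degree first."""
    ev = lambda p, z: sum(cf * z ** k for k, cf in enumerate(p))
    return (ev(f, x) * ev(g, y) - ev(f, y) * ev(g, x)) / (x - y)

def phi_circulant(col, phi):
    """phi-circulant with first column col: entry (i,j) equals col[i-j]
    for i >= j, and phi * col[n+i-j] for i < j."""
    n = len(col)
    return [[col[i - j] if i >= j else phi * col[n + i - j]
             for j in range(n)] for i in range(n)]

def bezout_via_circulant(t, phi, x, y):
    """Evaluate the bilinear form of Circ_phi(r + phi*e_0) J at (x, y),
    where f(z) = prod (z - t_i) = z^n + sum r_i z^i."""
    f = [1.0]
    for ti in t:                       # multiply f by (z - ti)
        g = [0.0] * (len(f) + 1)
        for k, cf in enumerate(f):
            g[k] -= ti * cf
            g[k + 1] += cf
        f = g
    n = len(t)
    r = f[:n]                          # coefficients below z^n
    m = phi_circulant([r[0] + phi] + r[1:], phi)
    mj = [row[::-1] for row in m]      # right-multiply by the reverse identity J
    xv = [x ** i for i in range(n)]
    yv = [y ** j for j in range(n)]
    return sum(xv[i] * mj[i][j] * yv[j] for i in range(n) for j in range(n))
```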
On the basis of the last formula the following algorithm is proposed.

Algorithm 1. Computing the product of a transposed Vandermonde matrix by an arbitrary vector.

I. Preprocessing. Complexity: ι(n) + ε(n) + φ(n) + n log n + O(n) ops.
Input: vector t = (t_i) ∈ C^n (t_i ≠ t_j whenever i ≠ j).
Output: the parameters of the representation of V(t)^T in the form

    V(t)^T = J D_ψ^{-1} F^* Λ_φ(r + φe_0)^{-1} F D_ψ V(t)^{-1} D_{f,g}.   (2.8)

1. Compute the coefficients of the polynomial f(λ) = Π_{i=0}^{n−1}(λ − t_i) = λ^n + Σ_{i=0}^{n−1} r_i λ^i in ι(n) ops.
2. Compute in O(n) ops the coefficients of the polynomial f'(λ).
3. Evaluate in ε(n) ops the values f'(t_i) (i = 0, ..., n−1).
4. Set ψ = 1 + max_{0≤i≤n−1} |t_i|. Compute in O(n) ops the numbers ψ^i (i = 0, ..., n). Set φ = ψ^n and D_ψ = diag((ψ^i)_{i=0}^{n−1}).
5. Evaluate in at most n log n + n ops the values g(t_i) = φ − t_i^n (i = 0, ..., n−1), using, for example, the powering algorithms from Knuth (1969).
6. Compute in n ops the entries of the matrix D_{f,g} = diag((f'(t_i) g(t_i))_{i=0}^{n−1}).
7. Compute in O(n) ops the entries of the matrix D_ψ^{-1}.
8. Compute the coordinates of the vector F D_ψ y with y = r + φe_0, where r = (r_i), in φ(n) + O(n) ops using (BM3).
9. Compute in O(n) ops the inverse of the diagonal matrix Λ_φ(r + φe_0) = diag(F D_ψ y).

II. Application to the vector. Complexity: ι(n) + 2φ(n) + O(n) ops.
Input: vector b ∈ C^n. Output: vector V(t)^T b ∈ C^n.
1. Compute the vector V(t)^T b by (2.8), using (BM2) and (BM3).

The product of a transposed Vandermonde matrix by an arbitrary matrix can be computed using Algorithm 1 in ι(n)·n + 2φ(n)·n + O(n^2) ops.
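One step of the preprocessing evaluates g(t_i) = φ − t_i^n; each power t_i^n costs O(log n) multiplications by binary powering, the technique behind the Knuth reference above. A sketch (ours):

```python
def power(x, n):
    """x**n by repeated squaring: O(log n) multiplications."""
    result = 1
    while n:
        if n & 1:          # current lowest bit of the exponent is set
            result *= x
        x *= x             # square the base for the next bit
        n >>= 1
    return result
```

All n values t_i^n together thus cost about n log n multiplications, matching the n log n term in the preprocessing complexity.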
Note that representations of the matrix V(t)^T in the form of a product of the matrix V(t)^{-1} by some matrices with a lower complexity of multiplication by vectors can be found in Heinig and Rost (1984) and Canny, Kaltofen and Yagati (1989). In Canny, Kaltofen and Yagati (1989), for example, the matrix V(t)^T is represented in the form

    V(t)^T = H V(t)^{-1},   (2.9)

where the matrix H = V(t)^T V(t) turns out to be a Hankel matrix. For computing the entries of H it is proposed in Canny, Kaltofen and Yagati (1989) to interpolate the n-th degree polynomial from its zeros at n points, and then to solve an equation of the type Tx = b with a Toeplitz matrix T from C^{n×n}. Let τ(n) be, as above, the complexity of solving one Toeplitz equation of the form (1.2). Thus, computing the entries of the matrix H in (2.9) by the scheme of Canny, Kaltofen and Yagati (1989), and afterwards the preparation of the Hankel matrix H according to (BM7), costs ι(n) + τ(n) + 2φ(n) + O(n) ops. This complexity exceeds the complexity of the preprocessing in Algorithm 1. Furthermore, the appearance of the factor circulant in (2.7) instead of the Hankel matrix in (2.9) results in an additional economy of 2φ(n) ops at the stage of application to the vector.

Formula (2.7) also enables a fast algorithm to be proposed for computing the product of the matrix V(t)^{-T} by a vector. Moreover, the preprocessing phase consists of computing the parameters of a representation of V(t)^{-T} in a form analogous to the representation (2.8) of V(t)^T, and costs ι(n) + ε(n) + φ(n) + n log n + O(n) ops. Afterwards, the application to a vector costs ε(n) + 2φ(n) + O(n) ops. The product of the matrix V(t)^{-T} by an arbitrary matrix can be computed in ε(n)·n + 2φ(n)·n + O(n^2) ops.

Formulas (2.5) and (2.6) yield the following representation for the inverse of a Vandermonde matrix:

    V(t)^{-1} = Circ_φ(r + φe_0) J V(t)^T diag( (1/(f'(t_i)(φ − t_i^n)))_{i=0}^{n−1} ).

Here, as above, J stands for the reverse identity matrix, f(λ) = Π_{i=0}^{n−1}(λ − t_i) = λ^n + Σ_{i=0}^{n−1} r_i λ^i and r = (r_i). The last formula can be used for
computing the entries of the inverse of a Vandermonde matrix in O(n^2) ops.

3. Cauchy matrices and matrices connected with them

By definition, the Cauchy matrix C(s, t) has the form

    C(s, t) = ( 1/(s_i − t_j) )_{i,j=0}^{n−1},
where s = (s_i) and t = (t_i) are given vectors from C^n such that the t_i, s_i (i = 0, ..., n−1) are 2n different complex numbers. As in the previous section, set

    f(λ) = Π_{i=0}^{n−1} (λ − t_i) = λ^n + Σ_{i=0}^{n−1} r_i λ^i,   g(λ) = φ − λ^n,

where φ is such that φ ≠ t_i^n (i = 0, ..., n−1). Formula (2.2) yields

    V(s) B_{f,g} V(t)^T = ( f(s_i) g(t_j) / (s_i − t_j) )_{i,j=0}^{n−1}
                        = diag( (f(s_i))_{i=0}^{n−1} ) C(s, t) diag( (g(t_i))_{i=0}^{n−1} ).

From here it follows that

    C(s, t) = diag( (1/f(s_i))_{i=0}^{n−1} ) V(s) B_{f,g} V(t)^T diag( (1/g(t_i))_{i=0}^{n−1} ).

This equality and (2.5) yield the following representation of the Cauchy matrix:

    C(s, t) = diag( (1/f(s_i))_{i=0}^{n−1} ) V(s) V(t)^{-1} diag( (f'(t_i))_{i=0}^{n−1} ).   (3.1)

Formula (3.1) enables us to propose the following algorithm.

Algorithm 2. Computing the product of a Cauchy matrix by a vector.

I. Preprocessing. Complexity: ι(n) + 2ε(n) + O(n) ops.
Input: vectors s = (s_i) and t = (t_i) (the s_i, t_j are 2n distinct numbers).
Output: the parameters of the representation of C(s, t) in the form (3.1).
1. Compute the coefficients of the polynomial f(λ) = Π_{i=0}^{n−1}(λ − t_i) in ι(n) ops.
2. Evaluate in ε(n) + O(n) ops the values f(s_i) (i = 0, ..., n−1).
3. Compute in O(n) ops the coefficients of the polynomial f'(λ).
4. Evaluate in ε(n) ops the values f'(t_i) (i = 0, ..., n−1).

II. Application to the vector. Complexity: ι(n) + ε(n) + O(n) ops.
Input: vector b ∈ C^n. Output: vector C(s, t)b ∈ C^n.
1. Compute C(s, t)b by (3.1), using (BM1) and (BM2).
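A sketch (ours, not the paper's code) of this pipeline for the Cauchy matrix, following the representation diag(1/f(s_i)) V(s) V(t)^{-1} diag(f'(t_i)) derived above: scale b by the f'(t_j), interpolate at the t_j, evaluate at the s_i, and rescale. The interpolation and evaluation are done naively in O(n^2) ops here; the basic algorithms (BA1), (BA2) bring the cost down to the stated bounds.

```python
def mul_linear(p, c):
    """Multiply the polynomial p (coefficients, lowest first) by (x - c)."""
    q = [0.0] * (len(p) + 1)
    for k, a in enumerate(p):
        q[k] -= c * a
        q[k + 1] += a
    return q

def horner(p, x):
    """Evaluate p at x by Horner's rule."""
    v = 0.0
    for cf in reversed(p):
        v = v * x + cf
    return v

def cauchy_matvec(s, t, b):
    """C(s,t) b via diag(1/f(s_i)) V(s) V(t)^{-1} diag(f'(t_i)) b."""
    n = len(t)
    f = [1.0]
    for tj in t:
        f = mul_linear(f, tj)                    # f(x) = prod (x - t_j)
    fp = [(k + 1) * f[k + 1] for k in range(n)]  # coefficients of f'
    c = [horner(fp, t[j]) * b[j] for j in range(n)]   # diag(f'(t_j)) b
    h = [0.0] * n                                # V(t)^{-1} c: h(t_j) = c_j
    for j in range(n):
        numer = [1.0]
        for k in range(n):
            if k != j:
                numer = mul_linear(numer, t[k])  # prod_{k != j} (x - t_k)
        w = c[j] / horner(fp, t[j])              # Lagrange weight c_j / f'(t_j)
        for k in range(n):
            h[k] += w * numer[k]
    # V(s) h by evaluation, then the scaling diag(1/f(s_i))
    return [horner(h, si) / horner(f, si) for si in s]
```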
In the earlier paper Gerasoulis (1988) another algorithm for computing the product of a Cauchy matrix by a vector was proposed; moreover, that algorithm is not restricted to Cauchy matrices and is valid for more general classes of matrices. The algorithm of Gerasoulis applied to computing the product C(s, t)b requires ι(n) + 2ε(n) + O(n) ops, of which ι(n) + ε(n) + O(n) can be incorporated into a preprocessing stage. Note that the product of C(s, t) by an arbitrary matrix can be computed using Algorithm 2 in ι(n)·n + ε(n)·n + O(n^2) ops.

Formula (3.1) also enables fast algorithms of multiplication by vectors to be elaborated for the matrices C(s, t)^T, C(s, t)^{-1} and C(s, t)^{-T}. For example, formula (3.1) implies the following representation for the inverse of a Cauchy matrix:

    C(s, t)^{-1} = diag( (1/f'(t_i))_{i=0}^{n−1} ) V(t) V(s)^{-1} diag( (f(s_i))_{i=0}^{n−1} ).

This formula yields that the product of the inverse of a Cauchy matrix by a vector can be computed in ι(n) + ε(n) + O(n) ops after the same preprocessing as in Algorithm 2. In fact, the latter proposition can be formulated as follows: a system of linear equations with an invertible Cauchy matrix can be solved in O(n log^2 n) ops.

4. Determinants of Vandermonde and Cauchy matrices

In the recent paper Pan (1990) an outline of algorithms for computing, up to sign, the determinants of Vandermonde and Cauchy matrices with real entries is suggested. For computing the Vandermonde determinant it is proposed therein first to compute, using the algorithm from Canny, Kaltofen and Yagati (1989), the entries of the Hankel matrix H in (2.9). This step costs ι(n) + τ(n) + O(n) ops. Then one computes det H = (det V(t))^2. This seems to be quite expensive. For the determinant of a Cauchy matrix, in Pan (1990) a complicated procedure reducing this problem to the analogous problem for some close-to-Toeplitz matrix is proposed.

It turns out that well-known formulas lead to simple and significantly more effective algorithms for computing the determinants of Vandermonde and Cauchy matrices with complex entries. Indeed, as is
well known,

    det V(t) = Π_{0≤i<j≤n−1} (t_j − t_i).   (4.1)

The square of the expression on the right-hand side of (4.1) is, by definition, the discriminant D(f) of the polynomial f(λ) = Π_{i=0}^{n−1}(λ − t_i). It is well known (see, for example, van der Waerden (1971)) and can easily be seen that D(f) = (−1)^{n(n−1)/2} Π_{i=0}^{n−1} f'(t_i). Thus,

    (det V(t))^2 = (−1)^{n(n−1)/2} Π_{i=0}^{n−1} f'(t_i).
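Both formulas above are easy to test numerically. The sketch below (our illustration) also exposes the sign rule used further down: for real distinct nodes, the sign of the product Π_{i<j}(t_j − t_i) is +1 exactly when the number of inversions of the sequence t is even, and inversions can be counted in O(n log n) by merge sort.

```python
def vandermonde_det(t):
    """det V(t) = prod over 0 <= i < j <= n-1 of (t_j - t_i)."""
    d = 1.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            d *= t[j] - t[i]
    return d

def det_squared(t):
    """(det V(t))^2 = (-1)^{n(n-1)/2} prod_i f'(t_i), where
    f(x) = prod_j (x - t_j), so f'(t_i) = prod_{k != i} (t_i - t_k)."""
    n = len(t)
    p = 1.0
    for i in range(n):
        for k in range(n):
            if k != i:
                p *= t[i] - t[k]
    return (-1) ** (n * (n - 1) // 2) * p

def count_inversions(a):
    """Pairs i < j with a[i] > a[j], counted by merge sort in O(n log n)."""
    if len(a) <= 1:
        return 0, list(a)
    inv_l, left = count_inversions(a[: len(a) // 2])
    inv_r, right = count_inversions(a[len(a) // 2:])
    merged, inv, i, j = [], inv_l + inv_r, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            inv += len(left) - i     # every remaining left entry exceeds right[j]
            merged.append(right[j]); j += 1
    return inv, merged + left[i:] + right[j:]

def sign_vandermonde(t):
    """Sign of det V(t) for real distinct t_i."""
    inv, _ = count_inversions(list(t))
    return 1 if inv % 2 == 0 else -1
```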
This formula enables the following algorithm to be proposed.

Algorithm 3. Computing the Vandermonde determinant.
Complexity: ι(n) + ε(n) + O(n) ops plus one extraction of a square root.
Input: vector t = (t_i) ∈ C^n (t_i ≠ t_j whenever i ≠ j).
Output: the value det V(t), up to sign.
1. Compute in ι(n) ops the coefficients of the polynomial f(λ) = Π_{i=0}^{n−1}(λ − t_i).
2. Compute in O(n) ops the coefficients of the polynomial f'(λ).
3. Evaluate the values f'(t_i) (i = 0, ..., n−1) in ε(n) ops.
4. Compute (det V(t))^2 = (−1)^{n(n−1)/2} Π_{i=0}^{n−1} f'(t_i) in O(n) ops.
5. Recover det V(t), up to sign, in one operation of extraction of a square root.

Furthermore, if all the coordinates of the vector t are real numbers (this situation was considered in Pan (1990)), then the explicit expression on the right-hand side of (4.1) allows one to come to a conclusion about the sign of det V(t) in O(n log n) time. Indeed, in this case the sign of det V(t) is determined by the number of pairs (t_i, t_j) satisfying the condition t_i > t_j while i < j. Thus, the sign of det V(t) is determined by the number of inversions in the permutation k = (k_0, k_1, ..., k_{n−1}), where k_i is the position of the element t_i in the sequence derived from {t_i} by rearranging all the elements in increasing order. To compute this number of inversions, we have to rearrange the two-component sequence {(t_i, i)} in increasing order of the first components and then extract the second components into a separate sequence m = (m_0, m_1, ..., m_{n−1}). It is easy to see that m is the inverse of the permutation k and consequently has the same number of inversions. Therefore, to determine the sign of det V(t) it remains to compute the number In(m) of inversions in the permutation m. The algorithm based on these arguments is proposed below. Note that here we deviate from the rule accepted in the present paper and estimate the complexity in terms of time, not in ops. The reason for this is that we use here algorithms for sorting and for computing the number of inversions in a
permutation; these two algorithms do not involve floating-point operations, involving instead the cheaper operations of comparison and exchange.

Algorithm 4. Determination of the sign of the Vandermonde determinant.
Complexity: O(n log n) time.
Input: vector t = (t_i) ∈ R^n (t_i ≠ t_j whenever i ≠ j).
Output: the sign of det V(t).
1. Sort in O(n log n) time the sequence of pairs {(t_i, i)} in increasing order of the first components, using, for example, a sorting algorithm from Knuth (1973).
2. For the permutation m formed from the second components of the sorted sequence, compute in O(n log n) time the number In(m) of its inversions, using, for example, the inversion-counting algorithm from Knuth (1973).
3. Set sign det V(t) = 1 if the number In(m) is even, and sign det V(t) = −1 in the opposite case.

Furthermore, for the determinant of a Cauchy matrix the following expression is well known:

    det C(s, t) = Π_{0≤i<j≤n−1} (s_j − s_i) · Π_{0≤i<j≤n−1} (t_i − t_j) / Π_{0≤i,j≤n−1} (s_i − t_j)

(see, for example, Polya and Szego (1972), where it is attributed to Cauchy). Using (4.1), the latter formula can be rewritten in the form

    det C(s, t) = (−1)^{n(n−1)/2} det V(s) det V(t) / Π_{i=0}^{n−1} f(s_i),   (4.2)

where, as above, f(λ) = Π_{i=0}^{n−1}(λ − t_i). Using this formula and Algorithm 3, one can compute the value det C(s, t), up to sign, in 2ι(n) + 3ε(n) + O(n) ops plus one operation of extraction of a square root. Furthermore, in the case when the s_i and t_i are real, formula (4.2) and Algorithm 4 allow the sign of the value det C(s, t) to be determined in O(n log n) time.

References

1. Aho, A.V., Hopcroft, J.E. and Ullman, J.D. (1974), "The design and analysis of computer algorithms", Addison-Wesley, Reading, Mass.
2. Ammar, G. and Gader, P. (1990), New decompositions of the inverse of a Toeplitz matrix, in "Signal Processing, Scattering and Operator Theory, and Numerical Methods, Proc. Int. Symp. MTNS-89, vol. III" (M.A. Kaashoek, J.H. van Schuppen and A.C.M. Ran, Eds.), Birkhäuser, Boston.
3. Brent, R., Gustavson, F. and Yun, D. (1980), Fast solution of Toeplitz systems of equations and computation of Padé approximants, Journal of Algorithms 1, 259–295.
4. Canny, J.F., Kaltofen, E. and Yagati, L. (1989), Solving systems of non-linear polynomial equations faster, in "Proc. ACM-SIGSAM 1989 Internat. Symp. Symbolic Algebraic Comput.", ACM, New York.
5. Chun, J. and Kailath, T. (1991), Divide-and-conquer solutions of least-squares problems for matrices with displacement structure, SIAM Journal on Matrix Analysis and Applications 12 (No. 1), 128–145.
6. Cline, R.E., Plemmons, R.J. and Worm, G. (1974), Generalized inverses of certain Toeplitz matrices, Linear Algebra and its Applications 8, 25–33.
7. Gerasoulis, A. (1988), A fast algorithm for the multiplication of generalized Hilbert matrices with vectors, Mathematics of Computation 50 (No. 181), 179–188.
8. Gohberg, I. and Feldman, I. (1974), "Convolution equations and projection methods for their solution", Translations of Mathematical Monographs, vol. 41, Amer. Math. Soc., Providence.
9. Gohberg, I. and Olshevsky, V. (1992), Circulants, displacements and decompositions of matrices, Integral Equations and Operator Theory 15.
10. Gohberg, I. and Semencul, A. (1972), On the inversion of finite Toeplitz matrices and their continuous analogues (in Russian), Matem. Issled., Kishinev, 7 (No. 2), 201–223.
11. Heinig, G. and Rost, K. (1984), "Algebraic methods for Toeplitz-like matrices and operators", Operator Theory: Advances and Applications, vol. 13, Birkhäuser Verlag, Basel.
12. de Hoog, F. (1987), A new algorithm for solving Toeplitz systems of equations, Linear Algebra and its Applications 88/89, 123–138.
13. Knuth, D. (1969), "The art of computer programming", vol. 2, "Seminumerical algorithms", Addison-Wesley, Reading, Mass.
14. Knuth, D. (1973), "The art of computer programming", vol. 3, "Sorting and searching", Addison-Wesley, Reading, Mass.
15. Lander, F.I. (1974), The Bezoutian and the inversion of Hankel and Toeplitz matrices (in Russian), Matem. Issled., Kishinev, 9 (No. 2), 69–87.
16. Pan, V. (1990), On computations with dense structured matrices, Mathematics of Computation 55 (No. 191), 179–190.
17. Polya, G. and Szego, G. (1972), "Problems and theorems in analysis", Springer-Verlag, Berlin, New York.
18. van der Waerden, B.L. (1971), "Algebra I", Springer-Verlag, Berlin, New York.
More informationThe following can also be obtained from this WWW address: the papers [8, 9], more examples, comments on the implementation and a short description of
An algorithm for computing the Weierstrass normal form Mark van Hoeij Department of mathematics University of Nijmegen 6525 ED Nijmegen The Netherlands e-mail: hoeij@sci.kun.nl April 9, 1995 Abstract This
More informationBlock companion matrices, discrete-time block diagonal stability and polynomial matrices
Block companion matrices, discrete-time block diagonal stability and polynomial matrices Harald K. Wimmer Mathematisches Institut Universität Würzburg D-97074 Würzburg Germany October 25, 2008 Abstract
More information290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f
Numer. Math. 67: 289{301 (1994) Numerische Mathematik c Springer-Verlag 1994 Electronic Edition Least supported bases and local linear independence J.M. Carnicer, J.M. Pe~na? Departamento de Matematica
More informationFraction-free Row Reduction of Matrices of Skew Polynomials
Fraction-free Row Reduction of Matrices of Skew Polynomials Bernhard Beckermann Laboratoire d Analyse Numérique et d Optimisation Université des Sciences et Technologies de Lille France bbecker@ano.univ-lille1.fr
More informationVICTORIA POWERS AND THORSTEN W ORMANN Suppose there exists a real, symmetric, psd matrix B such that f = x B x T and rank t. Since B is real symmetric
AN ALGORITHM FOR SUMS OF SQUARES OF REAL POLYNOMIALS Victoria Powers and Thorsten Wormann Introduction We present an algorithm to determine if a real polynomial is a sum of squares (of polynomials), and
More informationReview of some mathematical tools
MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical
More informationOn the Exponent of the All Pairs Shortest Path Problem
On the Exponent of the All Pairs Shortest Path Problem Noga Alon Department of Mathematics Sackler Faculty of Exact Sciences Tel Aviv University Zvi Galil Department of Computer Science Sackler Faculty
More informationToeplitz-circulant Preconditioners for Toeplitz Systems and Their Applications to Queueing Networks with Batch Arrivals Raymond H. Chan Wai-Ki Ching y
Toeplitz-circulant Preconditioners for Toeplitz Systems and Their Applications to Queueing Networks with Batch Arrivals Raymond H. Chan Wai-Ki Ching y November 4, 994 Abstract The preconditioned conjugate
More informationMath Camp Notes: Linear Algebra I
Math Camp Notes: Linear Algebra I Basic Matrix Operations and Properties Consider two n m matrices: a a m A = a n a nm Then the basic matrix operations are as follows: a + b a m + b m A + B = a n + b n
More informationSpurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics
UNIVERSITY OF CAMBRIDGE Numerical Analysis Reports Spurious Chaotic Solutions of Dierential Equations Sigitas Keras DAMTP 994/NA6 September 994 Department of Applied Mathematics and Theoretical Physics
More informationJORDAN NORMAL FORM. Contents Introduction 1 Jordan Normal Form 1 Conclusion 5 References 5
JORDAN NORMAL FORM KATAYUN KAMDIN Abstract. This paper outlines a proof of the Jordan Normal Form Theorem. First we show that a complex, finite dimensional vector space can be decomposed into a direct
More informationComputations in Modules over Commutative Domains
Computations in Modules over Commutative Domains 2007, September, 15 Dodgson's Algorithm Method of Forward and Backward Direction The One-pass Method The Recursive Method àñòü I System of Linear Equations
More informationDeterminants of Partition Matrices
journal of number theory 56, 283297 (1996) article no. 0018 Determinants of Partition Matrices Georg Martin Reinhart Wellesley College Communicated by A. Hildebrand Received February 14, 1994; revised
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 12 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted for noncommercial,
More informationBindel, Fall 2016 Matrix Computations (CS 6210) Notes for
1 Logistics Notes for 2016-08-26 1. Our enrollment is at 50, and there are still a few students who want to get in. We only have 50 seats in the room, and I cannot increase the cap further. So if you are
More information2 JAKOBSEN, JENSEN, JNDRUP AND ZHANG Let U q be a quantum group dened by a Cartan matrix of type A r together with the usual quantized Serre relations
QUADRATIC ALGEBRAS OF TYPE AIII; I HANS PLESNER JAKOBSEN, ANDERS JENSEN, SREN JNDRUP, AND HECHUN ZHANG 1;2 Department of Mathematics, Universitetsparken 5 DK{2100 Copenhagen, Denmark E-mail: jakobsen,
More informationQueens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.
Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 8 Lecture 8 8.1 Matrices July 22, 2018 We shall study
More informationI = i 0,
Special Types of Matrices Certain matrices, such as the identity matrix 0 0 0 0 0 0 I = 0 0 0, 0 0 0 have a special shape, which endows the matrix with helpful properties The identity matrix is an example
More informationVII Selected Topics. 28 Matrix Operations
VII Selected Topics Matrix Operations Linear Programming Number Theoretic Algorithms Polynomials and the FFT Approximation Algorithms 28 Matrix Operations We focus on how to multiply matrices and solve
More informationAdditive Latin Transversals
Additive Latin Transversals Noga Alon Abstract We prove that for every odd prime p, every k p and every two subsets A = {a 1,..., a k } and B = {b 1,..., b k } of cardinality k each of Z p, there is a
More informationParallel Numerical Algorithms
Parallel Numerical Algorithms Chapter 13 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign CS 554 / CSE 512 Michael T. Heath Parallel Numerical Algorithms
More informationFast Transforms: Banded Matrices with Banded Inverses
Fast Transforms: Banded Matrices with Banded Inverses 1. Introduction Gilbert Strang, MIT An invertible transform y = Ax expresses the vector x in a new basis. The inverse transform x = A 1 y reconstructs
More informationPolynomial functions over nite commutative rings
Polynomial functions over nite commutative rings Balázs Bulyovszky a, Gábor Horváth a, a Institute of Mathematics, University of Debrecen, Pf. 400, Debrecen, 4002, Hungary Abstract We prove a necessary
More informationOle Christensen 3. October 20, Abstract. We point out some connections between the existing theories for
Frames and pseudo-inverses. Ole Christensen 3 October 20, 1994 Abstract We point out some connections between the existing theories for frames and pseudo-inverses. In particular, using the pseudo-inverse
More informationA Traub like algorithm for Hessenberg-quasiseparable-Vandermonde matrices of arbitrary order
A Traub like algorithm for Hessenberg-quasiseparable-Vandermonde matrices of arbitrary order TBella, YEidelman, IGohberg, VOlshevsky, ETyrtyshnikov and PZhlobich Abstract Although Gaussian elimination
More informationANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3
ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any
More informationISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION
ISOLATED SEMIDEFINITE SOLUTIONS OF THE CONTINUOUS-TIME ALGEBRAIC RICCATI EQUATION Harald K. Wimmer 1 The set of all negative-semidefinite solutions of the CARE A X + XA + XBB X C C = 0 is homeomorphic
More informationVector Space Basics. 1 Abstract Vector Spaces. 1. (commutativity of vector addition) u + v = v + u. 2. (associativity of vector addition)
Vector Space Basics (Remark: these notes are highly formal and may be a useful reference to some students however I am also posting Ray Heitmann's notes to Canvas for students interested in a direct computational
More informationBarnett s Theorems About the Greatest Common Divisor of Several Univariate Polynomials Through Bezout-like Matrices
J Symbolic Computation (2002) 34, 59 81 doi:101006/jsco20020542 Available online at http://wwwidealibrarycom on Barnett s Theorems About the Greatest Common Divisor of Several Univariate Polynomials Through
More informationThe DFT as Convolution or Filtering
Connexions module: m16328 1 The DFT as Convolution or Filtering C. Sidney Burrus This work is produced by The Connexions Project and licensed under the Creative Commons Attribution License A major application
More informationFoundations of Matrix Analysis
1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the
More informationThe restarted QR-algorithm for eigenvalue computation of structured matrices
Journal of Computational and Applied Mathematics 149 (2002) 415 422 www.elsevier.com/locate/cam The restarted QR-algorithm for eigenvalue computation of structured matrices Daniela Calvetti a; 1, Sun-Mi
More informationTHREE LECTURES ON QUASIDETERMINANTS
THREE LECTURES ON QUASIDETERMINANTS Robert Lee Wilson Department of Mathematics Rutgers University The determinant of a matrix with entries in a commutative ring is a main organizing tool in commutative
More informationMAT 1332: CALCULUS FOR LIFE SCIENCES. Contents. 1. Review: Linear Algebra II Vectors and matrices Definition. 1.2.
MAT 1332: CALCULUS FOR LIFE SCIENCES JING LI Contents 1 Review: Linear Algebra II Vectors and matrices 1 11 Definition 1 12 Operations 1 2 Linear Algebra III Inverses and Determinants 1 21 Inverse Matrices
More informationMatrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1...
Chapter Matrices We review the basic matrix operations What is a Matrix? An array of numbers a a n A = a m a mn with m rows and n columns is a m n matrix Element a ij in located in position (i, j The elements
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationMatrices. Chapter Definitions and Notations
Chapter 3 Matrices 3. Definitions and Notations Matrices are yet another mathematical object. Learning about matrices means learning what they are, how they are represented, the types of operations which
More information(1.) For any subset P S we denote by L(P ) the abelian group of integral relations between elements of P, i.e. L(P ) := ker Z P! span Z P S S : For ea
Torsion of dierentials on toric varieties Klaus Altmann Institut fur reine Mathematik, Humboldt-Universitat zu Berlin Ziegelstr. 13a, D-10099 Berlin, Germany. E-mail: altmann@mathematik.hu-berlin.de Abstract
More informationTopic 1: Matrix diagonalization
Topic : Matrix diagonalization Review of Matrices and Determinants Definition A matrix is a rectangular array of real numbers a a a m a A = a a m a n a n a nm The matrix is said to be of order n m if it
More informationm k,k orthogonal on the unit circle are connected via (1.1) with a certain (almost 1 ) unitary Hessenberg matrix
A QUASISEPARABLE APPROACH TO FIVE DIAGONAL CMV AND COMPANION MATRICES T. BELLA V. OLSHEVSKY AND P. ZHLOBICH Abstract. Recent work in the characterization of structured matrices in terms of the systems
More informationMATH 583A REVIEW SESSION #1
MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),
More informationPolynomial Interpolation
Polynomial Interpolation (Com S 477/577 Notes) Yan-Bin Jia Sep 1, 017 1 Interpolation Problem In practice, often we can measure a physical process or quantity (e.g., temperature) at a number of points
More informationACI-matrices all of whose completions have the same rank
ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices
More informationLinear Algebra, 4th day, Thursday 7/1/04 REU Info:
Linear Algebra, 4th day, Thursday 7/1/04 REU 004. Info http//people.cs.uchicago.edu/laci/reu04. Instructor Laszlo Babai Scribe Nick Gurski 1 Linear maps We shall study the notion of maps between vector
More informationZero controllability in discrete-time structured systems
1 Zero controllability in discrete-time structured systems Jacob van der Woude arxiv:173.8394v1 [math.oc] 24 Mar 217 Abstract In this paper we consider complex dynamical networks modeled by means of state
More informationAlgorithms for structured matrix-vector product of optimal bilinear complexity
Algorithms for structured matrix-vector product of optimal bilinear complexity arxiv:160306658v1 mathna] Mar 016 Ke Ye Computational Applied Mathematics Initiative Department of Statistics University of
More informationSTAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9
STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 9 1. qr and complete orthogonal factorization poor man s svd can solve many problems on the svd list using either of these factorizations but they
More informationOctober 7, :8 WSPC/WS-IJWMIP paper. Polynomial functions are renable
International Journal of Wavelets, Multiresolution and Information Processing c World Scientic Publishing Company Polynomial functions are renable Henning Thielemann Institut für Informatik Martin-Luther-Universität
More informationTaylor series based nite dierence approximations of higher-degree derivatives
Journal of Computational and Applied Mathematics 54 (3) 5 4 www.elsevier.com/locate/cam Taylor series based nite dierence approximations of higher-degree derivatives Ishtiaq Rasool Khan a;b;, Ryoji Ohba
More informationClassifications of recurrence relations via subclasses of (H, m) quasiseparable matrices
Classifications of recurrence relations via subclasses of (H, m) quasiseparable matrices T Bella, V Olshevsky and P Zhlobich Abstract The results on characterization of orthogonal polynomials and Szegö
More informationOn group inverse of singular Toeplitz matrices
Linear Algebra and its Applications 399 (2005) 109 123 wwwelseviercom/locate/laa On group inverse of singular Toeplitz matrices Yimin Wei a,, Huaian Diao b,1 a Department of Mathematics, Fudan Universit,
More informationChapter 1. Comparison-Sorting and Selecting in. Totally Monotone Matrices. totally monotone matrices can be found in [4], [5], [9],
Chapter 1 Comparison-Sorting and Selecting in Totally Monotone Matrices Noga Alon Yossi Azar y Abstract An mn matrix A is called totally monotone if for all i 1 < i 2 and j 1 < j 2, A[i 1; j 1] > A[i 1;
More informationApplied Linear Algebra in Geoscience Using MATLAB
Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in
More informationIndex. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)
page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation
More informationSymmetric functions and the Vandermonde matrix
J ournal of Computational and Applied Mathematics 172 (2004) 49-64 Symmetric functions and the Vandermonde matrix Halil Oruç, Hakan K. Akmaz Department of Mathematics, Dokuz Eylül University Fen Edebiyat
More informationMODELLING OF FLEXIBLE MECHANICAL SYSTEMS THROUGH APPROXIMATED EIGENFUNCTIONS L. Menini A. Tornambe L. Zaccarian Dip. Informatica, Sistemi e Produzione
MODELLING OF FLEXIBLE MECHANICAL SYSTEMS THROUGH APPROXIMATED EIGENFUNCTIONS L. Menini A. Tornambe L. Zaccarian Dip. Informatica, Sistemi e Produzione, Univ. di Roma Tor Vergata, via di Tor Vergata 11,
More informationNumerical Methods in Matrix Computations
Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices
More informationThe Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment
he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State
More informationANONSINGULAR tridiagonal linear system of the form
Generalized Diagonal Pivoting Methods for Tridiagonal Systems without Interchanges Jennifer B. Erway, Roummel F. Marcia, and Joseph A. Tyson Abstract It has been shown that a nonsingular symmetric tridiagonal
More informationPolynomial evaluation and interpolation on special sets of points
Polynomial evaluation and interpolation on special sets of points Alin Bostan and Éric Schost Laboratoire STIX, École polytechnique, 91128 Palaiseau, France Abstract We give complexity estimates for the
More informationOrthogonal Symmetric Toeplitz Matrices
Orthogonal Symmetric Toeplitz Matrices Albrecht Böttcher In Memory of Georgii Litvinchuk (1931-2006 Abstract We show that the number of orthogonal and symmetric Toeplitz matrices of a given order is finite
More informationRON M. ROTH * GADIEL SEROUSSI **
ENCODING AND DECODING OF BCH CODES USING LIGHT AND SHORT CODEWORDS RON M. ROTH * AND GADIEL SEROUSSI ** ABSTRACT It is shown that every q-ary primitive BCH code of designed distance δ and sufficiently
More informationTROPICAL CRYPTOGRAPHY II: EXTENSIONS BY HOMOMORPHISMS
TROPICAL CRYPTOGRAPHY II: EXTENSIONS BY HOMOMORPHISMS DIMA GRIGORIEV AND VLADIMIR SHPILRAIN Abstract We use extensions of tropical algebras as platforms for very efficient public key exchange protocols
More informationPROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS
PROOF OF TWO MATRIX THEOREMS VIA TRIANGULAR FACTORIZATIONS ROY MATHIAS Abstract. We present elementary proofs of the Cauchy-Binet Theorem on determinants and of the fact that the eigenvalues of a matrix
More informationEVALUATION OF A FAMILY OF BINOMIAL DETERMINANTS
EVALUATION OF A FAMILY OF BINOMIAL DETERMINANTS CHARLES HELOU AND JAMES A SELLERS Abstract Motivated by a recent work about finite sequences where the n-th term is bounded by n, we evaluate some classes
More informationNORTHERN ILLINOIS UNIVERSITY
ABSTRACT Name: Santosh Kumar Mohanty Department: Mathematical Sciences Title: Ecient Algorithms for Eigenspace Decompositions of Toeplitz Matrices Major: Mathematical Sciences Degree: Doctor of Philosophy
More informationAMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems
AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given
More informationarxiv: v1 [math.ra] 23 Feb 2018
JORDAN DERIVATIONS ON SEMIRINGS OF TRIANGULAR MATRICES arxiv:180208704v1 [mathra] 23 Feb 2018 Abstract Dimitrinka Vladeva University of forestry, bulklohridski 10, Sofia 1000, Bulgaria E-mail: d vladeva@abvbg
More informationCirculant Matrices. Ashley Lorenz
irculant Matrices Ashley Lorenz Abstract irculant matrices are a special type of Toeplitz matrix and have unique properties. irculant matrices are applicable to many areas of math and science, such as
More informationA DECOMPOSITION THEOREM FOR FRAMES AND THE FEICHTINGER CONJECTURE
PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 00, Number 0, Pages 000 000 S 0002-9939(XX)0000-0 A DECOMPOSITION THEOREM FOR FRAMES AND THE FEICHTINGER CONJECTURE PETER G. CASAZZA, GITTA KUTYNIOK,
More informationNovel Approach to Analysis of Nonlinear Recursions. 1 Department of Physics, Bar-Ilan University, Ramat-Gan, ISRAEL
Novel Approach to Analysis of Nonlinear Recursions G.Berkolaiko 1 2, S. Rabinovich 1,S.Havlin 1 1 Department of Physics, Bar-Ilan University, 529 Ramat-Gan, ISRAEL 2 Department of Mathematics, Voronezh
More informationSuperfast solution of Toeplitz systems based on syzygy reduction
Superfast solution of Toeplitz systems based on syzygy reduction Houssam Khalil, Bernard Mourrain, Michelle Schatzman To cite this version: Houssam Khalil, Bernard Mourrain, Michelle Schatzman. Superfast
More informationMATH 304 Linear Algebra Lecture 10: Linear independence. Wronskian.
MATH 304 Linear Algebra Lecture 10: Linear independence. Wronskian. Spanning set Let S be a subset of a vector space V. Definition. The span of the set S is the smallest subspace W V that contains S. If
More information