Diagonalization of Tensors with Circulant Structure
Mansoor Rezghi and Lars Eldén, Linköping University Post Print

N.B.: When citing this work, cite the original article.

Original publication: Mansoor Rezghi and Lars Eldén, Diagonalization of Tensors with Circulant Structure, Linear Algebra and its Applications. Copyright: Elsevier Science B.V., Amsterdam. Postprint available at: Linköping University Electronic Press.
Diagonalization of Tensors with Circulant Structure

Mansoor Rezghi    Lars Eldén

Abstract

The concepts of tensors with diagonal and circulant structure are defined and a framework is developed for the analysis of such tensors. It is shown that a tensor of arbitrary order, which is circulant with respect to two particular modes, can be diagonalized in those modes by discrete Fourier transforms. This property can be used in the efficient solution of linear systems involving contractive products of tensors with circulant structure. Tensors with circulant structure occur in models for image blurring with periodic boundary conditions. It is shown that the new framework can be applied to such problems.

1 Introduction

Circulant matrices occur in many applications. For instance, they are used as models of the blurring process in digital image processing [4]. They also occur as preconditioners in the solution of linear systems with Toeplitz structure, see e.g. [6, 7]. Circulant matrices are particularly useful since they are diagonalized by the Fourier matrix [8], which means that one can solve a linear system of equations with a circulant matrix of dimension n in O(n log n) operations.

In this paper we generalize the concepts of diagonal and circulant matrices to tensors of arbitrary order. We show that a tensor can be circulant in different subsets of modes and that it can be transformed to diagonal form in the corresponding modes. Thus a tensor that is circulant with respect to the modes ("dimensions") {l, k} is transformed to {l, k}-diagonal form by multiplication by Fourier matrices in the corresponding modes. This diagonalization can be used in fast contractive products of tensors and also for solving tensor equations, e.g. arising in image deblurring (restoration). In order to further motivate the development of the theory for tensors with circulant structure, we briefly discuss the application to image blurring.
Matrices with circulant structure occur in connection with spatially invariant blurring models, where periodic boundary conditions are assumed, see e.g. [6, Chapter 4], and as preconditioners [] for problems with Toeplitz structure. There the images are treated as vectors, and the blurring model gives rise to a block circulant matrix with circulant blocks (BCCB), which can be diagonalized by a two-dimensional discrete Fourier transform [8].

Department of Mathematics, Tarbiat Modares University, P.O. Box 4-7, Tehran, Iran (Rezghi@modares.ac.ir). This work was done during a visit at Linköping University, Sweden.

Department of Mathematics, Linköping University, SE-581 83 Linköping, Sweden (Laeld@mai.liu.se).
Next assume that we model a three-dimensional blurring process with periodic boundary conditions, or use a circulant-type preconditioner for a three-dimensional problem with Toeplitz structure. It is straightforward to show that a generalization of the 2-D approach will lead to a matrix with doubly nested circulant structure. In addition, data and unknowns, which in the application represent volumes, are treated as vectors. We will show, see Section 6, that alternatively, and more naturally, the problem can be modeled by using tensor notation and techniques, and it will be shown that the blurring process is modeled by an order-6 tensor with circulant structure that operates on a volume, giving a volume.

It appears that some of the results of this paper are known, partially or implicitly, in the numerical image processing community. For instance, the MATLAB codes in [6, Chapter 4] can be thought of as tensor implementations of operations with BCCB matrices. Thus we do not claim that the results of this paper are new in essence, or that they lead to more efficient algorithms. However, we believe that the consistent treatment in terms of a tensor framework is novel, and that the advantage of the tensor framework is that it is straightforward to generalize it to tensors of arbitrary order. In fact, in this paper we define the concepts and prove the results for the general case.

The paper is organized as follows. In Section 2 we define some tensor concepts that will be needed. Tensors with diagonal structure are defined in Section 3. We introduce tensors with circulant structure in Section 4. In Section 5 we demonstrate that tensors with circulant structure are diagonalized by discrete Fourier transforms. The application to image blurring models is briefly described in Section 6. To our knowledge, tensors with diagonal structure were first introduced in [].
The concept of totally diagonal tensors introduced in Section 3 is used in the low-rank approximation of a tensor by the Candecomp/Parafac model, see e.g. []. A fast algorithm for computing the multilinear SVD of special Toeplitz and Hankel tensors is discussed in []. In [] a tensor framework is introduced for analyzing preconditioners for linear equations with Toeplitz structure.

2 Tensor Concepts

2.1 Notation and preliminaries

Tensors will be denoted by calligraphic letters, e.g. A, B, matrices by capital roman letters and vectors by small roman letters. In order not to burden the presentation with too much detail, we will sometimes not explicitly mention the dimensions of matrices and tensors, and assume that they are such that the operations are well-defined. We will try to make our presentation easy to read by illustrating the concepts in terms of small examples and figures, mostly for order-3 tensors. For convenience we also introduce some concepts in terms of order-3 tensors. In such cases the generalization to order-N tensors is obvious.

Let A denote a tensor in R^(I1 x I2 x I3). The different dimensions of the tensor are referred to as modes. We will use both standard subscripts and MATLAB-like notation: a particular tensor element will be denoted in two equivalent ways: A(i, j, k) = a_ijk. We will refer to subtensors in the following way. A subtensor obtained by fixing one of the indices is called a slice, e.g., A(i, :, :).
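In numpy-style notation (0-based indices; a small sketch, not part of the paper) slices and fibres are obtained by ordinary indexing:

```python
import numpy as np

# Slices and fibres of an order-3 tensor, in 0-based numpy indexing.
A = np.arange(24).reshape(2, 3, 4)   # a small order-3 tensor

slice_0 = A[0, :, :]   # a slice: one index fixed (numpy squeezes the mode)
fibre = A[0, :, 2]     # a fibre: all indices but one fixed

assert slice_0.shape == (3, 4)
assert fibre.shape == (3,)
```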
Such a slice is usually considered as an order-2 tensor. However, in an assignment we assume that the singleton mode is squeezed. For example, A(1, :, :) is in R^(1 x I2 x I3), but when we define B = A(1, :, :), we let B be in R^(I2 x I3), i.e., we identify R^(1 x I2 x I3) with R^(I2 x I3) in the assignment.* A fibre is a subtensor where all indices but one are fixed, e.g.

A(i, :, k).

An N-dimensional multi-index i-bar is defined

i-bar = (i_1, ..., i_N). (2.1)

The notation

i-bar_k = (i_1, ..., i_{k-1}, i_{k+1}, ..., i_N) (2.2)

is used for a multi-index where the k-th mode is missing. We define the order-N Kronecker delta as

delta_{i_1...i_N} = 1 if i_1 = ... = i_N, and 0 otherwise.

The elementwise product of tensors X and Y in R^(I x K x L) is defined

R^(I x K x L) contains Z = X .* Y,  z_ikl = x_ikl y_ikl.

In the same way, elementwise division is defined as

R^(I x K x L) contains Z = X ./ Y,  z_ikl = x_ikl / y_ikl.

These elementwise operations can also be defined for vectors and matrices.

2.2 Tensor-Matrix Multiplication

We define mode-p multiplication of a tensor by a matrix as follows. For concreteness, we first let p = 1. The mode-1 product of a tensor A in R^(J x K x L) by a matrix W in R^(M x J) is defined

R^(M x K x L) contains B = (W)_1 A,  b_mkl = sum_{j=1}^{J} w_mj a_jkl.

This means that all column vectors (mode-1 fibres) of the order-3 tensor are multiplied by the matrix W. Similarly, mode-2 multiplication by a matrix X means that all row vectors (mode-2 fibres) are multiplied by the matrix X. Mode-3 multiplication is analogous. In the case when tensor-matrix multiplication is performed in all modes in the same formula, we omit the subscripts and write

(X, Y, Z) A, (2.3)

where the mode of each multiplication is understood from the order in which the matrices are given.

* It may seem that the property of a vector being a column or a row may be lost in such a transformation. However, the notions of column and row vectors are not essential as long as one keeps track of the ordering of the modes.
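The mode-p product can be sketched in numpy as follows (a hypothetical helper mode_mult, not from the paper; modes are 0-based in the code):

```python
import numpy as np

def mode_mult(A, W, p):
    """Mode-p product (W)_p A: multiply every mode-p fibre of A by W.

    Implemented by contracting W's second axis against mode p of A with
    np.tensordot, then moving the new mode back into position p.
    """
    B = np.tensordot(W, A, axes=(1, p))   # the new mode comes out first
    return np.moveaxis(B, 0, p)

# (W)_1 A for an order-3 tensor: every column (mode-1 fibre) is hit by W.
A = np.arange(24, dtype=float).reshape(2, 3, 4)
W = np.random.default_rng(0).standard_normal((5, 2))
B = mode_mult(A, W, 0)                    # mode 1 of the text is axis 0 here

assert B.shape == (5, 3, 4)
# agrees with multiplying one fibre A(:, k, l) separately
assert np.allclose(B[:, 1, 2], W @ A[:, 1, 2])
```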
The notation (2.3) was suggested by Lim []. An alternative notation was given earlier in [9]. Our (W)_p A is the same as A x_p W in that system. It is convenient to introduce a separate notation for multiplication by a transposed matrix V in R^(J x M):

R^(M x K x L) contains C = (V^T)_1 A = A (V)_1,  c_mkl = sum_{j=1}^{J} a_jkl v_jm. (2.4)

2.3 Matricization

A tensor can be matricized* in many different ways. We use the convention introduced in [] (which differs from that in [, 4]). Let r = [r_1, ..., r_L] be the modes of A mapped to the rows and c = [c_1, ..., c_M] be the modes of A mapped to the columns. The matricization is denoted A_(r;c) in R^(J x K), where

J = prod_{l=1}^{L} I_{r_l},  K = prod_{m=1}^{M} I_{c_m}. (2.5)

For a given order-N tensor A, the element A(i_1, ..., i_N) is mapped to A_(r;c)(j, k), where

j = 1 + sum_{l=1}^{L} [ (i_{r_l} - 1) prod_{l'=l+1}^{L} I_{r_{l'}} ], (2.6)

k = 1 + sum_{m=1}^{M} [ (i_{c_m} - 1) prod_{m'=m+1}^{M} I_{c_{m'}} ]. (2.7)

2.4 Contractions

Let A and B be order-3 tensors of the same dimensions. The inner product is defined

e = <A, B> = sum_{lambda,mu,nu} a_{lambda mu nu} b_{lambda mu nu}.

The inner product can be considered as a special case of the contracted product of two tensors, cf. [8], which is a tensor (outer) product followed by a contraction along specified modes. The contracted product can also be defined for tensors with different numbers of modes, and contractions can be made along any two conforming modes. For example, with an order-4 tensor A and matrices (order-2 tensors) F and G we could have

<A, F>_{3,4;1,2} = G,  sum_{mu,nu} a_{jk mu nu} f_{mu nu} = g_jk, (2.8)

where the subscripts 3,4 and 1,2 indicate the contracting modes of the two arguments. Obviously (2.8) defines a linear system of equations. We also need the following relations in the paper.

Proposition 2.1. Let A be in R^(I1 x ... x IN), and X in R^(I1 x ... x Ik) with k < N. Then the following relations hold.

* Alternatively, unfolded [9] or flattened [].
a) For every matrix V with conforming dimensions and j <= k,

<(V)_j A, X>_{1:k;1:k} = <A, (V^T)_j X>_{1:k;1:k}.

b) For every matrix V with conforming dimensions and j <= N - k,

(V)_j <A, X>_{1:k;1:k} = <(V)_{k+j} A, X>_{1:k;1:k}.

Proof. The results follow immediately from the definitions of the contractive and matrix-tensor products.

3 Tensors With Diagonal Structure

A starting point for our definitions and derivations will be the concept of totally diagonal tensors. A straightforward generalization of a diagonal matrix is the following.

Definition 3.1. A tensor A in R^(I1 x ... x IN) is called totally diagonal if a_{i_1...i_N} can be nonzero only if i_1 = i_2 = ... = i_N.

Figure 1: An order-3 totally diagonal tensor.

Note that we allow a diagonal element to be zero. Figure 1 shows an order-3 totally diagonal tensor. In [] totally diagonal tensors are called maximally diagonal. Obviously, a totally diagonal order-2 tensor is a diagonal matrix.

The definition of a totally diagonal tensor is not general enough for our purposes. We also need to define tensors that are partially diagonal, i.e., diagonal in two or more modes. For example, consider an order-3 tensor A such that for every k,

A(i, j, k) = 0 if i != j. (3.1)

This tensor is diagonal with respect to the first and second modes. Figure 2 illustrates all possible order-3 partially diagonal tensors. We now give a general definition.

Definition 3.2. Let 1 < t <= N be a natural number, and let {s_1, ..., s_t} be a subset of {1, ..., N}. A tensor A in R^(I1 x ... x IN) is called {s_1, ..., s_t}-diagonal if a_{i_1...i_N} can be nonzero only if i_{s_1} = i_{s_2} = ... = i_{s_t}.

By this definition, a totally diagonal order-N tensor is {1, ..., N}-diagonal. Although it is not strictly required that a tensor be square with respect to diagonal modes, in this
Figure 2: Order-3 partially diagonal tensors, from left to right: {1, 2}-, {1, 3}- and {2, 3}-diagonal. Note the convention for ordering the modes that we use in the illustrations.

paper we make this assumption, i.e. we assume that the tensor dimensions are the same in the modes in which it is diagonal.

It is straightforward to show that matrix multiplication of an {s_1, ..., s_t}-diagonal tensor in the modes that are not diagonal does not affect the {s_1, ..., s_t}-diagonality, i.e., the result of such a multiplication is also diagonal in the same modes.

Proposition 3.3. Let A in R^(I1 x ... x IN) be {s_1, ..., s_t}-diagonal. Then for every matrix X and k not in {s_1, ..., s_t}, the tensor B = (X)_k A is still {s_1, ..., s_t}-diagonal.

Next we define tensors that are diagonal with respect to disjoint subsets of the modes.

Definition 3.4. Let S = {s_1, ..., s_t} and Q = {q_1, ..., q_{t'}} be two disjoint subsets of {1, ..., N}. A in R^(I1 x ... x IN) is called S, Q-diagonal if a_{i_1...i_N} can be nonzero only if i_{s_1} = ... = i_{s_t} and i_{q_1} = ... = i_{q_{t'}}.

Figure 3 illustrates an order-4 tensor A in R^(n x n x n x n), which is {1, 3}, {2, 4}-diagonal. Thus, if i_1 != i_3 or i_2 != i_4, then a_{i_1 i_2 i_3 i_4} = 0.

Figure 3: The tensor A in R^(n x n x n x n) is {1, 3}, {2, 4}-diagonal.

The matricization A_(1,3;2,4) of A is a diagonal matrix. In general, the matricization of partially diagonal tensors gives rise to multilevel block diagonal matrices. For example, the matricization A_(1;2,3) of the {1, 2}-diagonal tensor in Figure 2 is a block matrix with diagonal blocks. Figure 4 shows different matricizations of a {1, 2}-diagonal order-6 tensor A.

Sometimes a diagonal matrix A is represented by its diagonal elements as A = diag(d), where d is a vector. Thus we can write a diagonal matrix as

a_ij = delta_ij d_i,
Figure 4: Different matricizations of a {1, 2}-diagonal order-6 tensor A.

where delta_ij is the Kronecker delta. In the same way, we can represent tensors that are diagonal in different modes by using their diagonal elements only. For example, a totally diagonal tensor A in R^(I1 x ... x IN) can be written as

a_{i_1...i_N} = delta_{i_1...i_N} d_{i_1},

where d denotes the diagonal elements of A. Similarly, the {1, 2}-diagonal tensor in (3.1) can be written

A(i, j, k) = delta_ij D(i, k), where D(i, k) = A(i, i, k),

and so

A(i, j, :) = delta_ij D(i, :).

Proposition 3.5. A tensor A in R^(I1 x ... x IN) is {l, k}-diagonal if and only if there is a D in R^(I1 x ... x I_{k-1} x I_{k+1} x ... x IN) such that

A(i-bar) = delta_{i_l i_k} D(i-bar_k), (3.2)

where the multi-indices i-bar and i-bar_k are defined in (2.1) and (2.2).

Proof. The first part is trivial. For the converse, let A be {l, k}-diagonal. Then defining D as

D(i_1, ..., i_l, ..., i_{k-1}, i_{k+1}, ..., i_N) = A(i_1, ..., i_l, ..., i_{k-1}, i_l, i_{k+1}, ..., i_N)

completes the proof.

The proposition shows that for an {l, k}-diagonal tensor the k-th mode enters only via the Kronecker delta. In the following example we show that tensors with diagonal structure occur naturally in the numerical solution of a self-adjoint Sylvester equation.

Example 3.6. Consider a Sylvester equation V = AZ + ZB. If the matrices A and B are symmetric, then, by using their eigenvalue decompositions, one can transform the equation to the form

Y = SX + XT, (3.3)

where T = diag(t_1, ..., t_n) and S = diag(s_1, ..., s_m) are the diagonal matrices of eigenvalues. This is actually a special case of the Bartels-Stewart algorithm for solving the
Sylvester equation, see e.g. [, Chapter 7.6]. If we set D(i, j) = s_i + t_j, then (3.3) can be written as the tensor-matrix equation

Y = <Omega, X>_{3,4;1,2}, (3.4)

where Omega is a {1, 3}, {2, 4}-diagonal tensor with diagonal elements D, i.e.,

Omega(i, j, k, l) = delta_ik delta_jl D(i, j).

Now it is easy to see that (3.4) is equal to

Y = D .* X,

and thus the solution X can be written

X = Y ./ D,

where .* and ./ are the elementwise product and division.

4 Tensors With Circulant Structure

We first consider a few properties of circulant matrices and then define tensors with circulant structure.

4.1 Circulant Matrices

A matrix A = (a_ij), i, j = 1, ..., n, is said to be circulant if

a_ij = a_{i'j'} if i - j == i' - j' (mod n), (4.1)

i.e., A is a matrix of the form

[ a_1   a_n       ...  a_2 ]
[ a_2   a_1       ...  a_3 ]
[ ...   ...       ...  ... ]
[ a_n   a_{n-1}   ...  a_1 ].

Circulant matrices have a special structure and properties. Every column (row) of A is a down (right) cyclic shift of the column to its left (of the row above it), so if we define the shift matrix

    [ 0  0  ...  0  1 ]
    [ 1  0  ...  0  0 ]
C = [ 0  1  ...  0  0 ]  (4.2)
    [ ...           ... ]
    [ 0  0  ...  1  0 ],

and a and b^T are the first column and row of A, respectively, then

A(:, j) = C^{j-1} a,  j = 1, ..., n, (4.3)

A(i, :) = (C^{i-1} b)^T,  i = 1, ..., n. (4.4)
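The shift relations (4.3) are easy to check numerically. A small numpy sketch (0-based indices, not part of the original text):

```python
import numpy as np

# The shift matrix C and the relation A(:, j) = C^{j-1} a for a circulant
# matrix built from its first column a (indices are 0-based in the code).
n = 5
C = np.zeros((n, n))
C[np.arange(n), np.arange(-1, n - 1)] = 1   # C x performs a down cyclic shift

rng = np.random.default_rng(0)
a = rng.standard_normal(n)
A = np.column_stack([np.roll(a, j) for j in range(n)])   # circulant from a

for j in range(n):
    assert np.allclose(A[:, j], np.linalg.matrix_power(C, j) @ a)
# every column is a down cyclic shift of the column to its left
assert np.allclose(A[:, 2], np.roll(A[:, 1], 1))
```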
This means that a circulant matrix is completely determined by its first column or row. Furthermore, it is well known [8] that C has the diagonal form

C = F^* Omega F,  Omega = diag(1, w, w^2, ..., w^{n-1}), (4.5)

where

F = (1/sqrt(n)) [ w^{(j-1)(k-1)} ]_{j,k=1}^{n},  w = exp(-2 pi i / n),

is the Fourier matrix and F^* denotes the conjugate transpose of F. By using (4.5) it is easy to prove that any circulant matrix can be diagonalized by the Fourier matrix [8].

Proposition 4.1. Let A in R^(n x n) be a circulant matrix. Then A is diagonalized by the Fourier matrix F as

A = F^* Lambda_1 F,  Lambda_1 = diag(sqrt(n) F a), (4.6)

A = F Lambda_2 F^*,  Lambda_2 = diag(sqrt(n) F b), (4.7)

where Lambda_1 and Lambda_2 are conjugate,

Lambda_2 = conj(Lambda_1). (4.8)

For completeness we give the proof here.

Proof. From (4.3), A can be expressed in terms of powers of the matrix C,

A = (a, Ca, ..., C^{n-1} a). (4.9)

Using the eigenvalue decomposition (4.5) we can write

A = F^* (a-bar, Omega a-bar, ..., Omega^{n-1} a-bar),  a-bar = F a,

and since (Omega^{k-1} a-bar)_j = a-bar_j w^{(j-1)(k-1)},

(a-bar, Omega a-bar, ..., Omega^{n-1} a-bar) = diag(a-bar) (sqrt(n) F) = diag(sqrt(n) a-bar) F.

Thus

A = F^* diag(sqrt(n) a-bar) F.

Similarly, by using (4.4),

A^T = (b, Cb, ..., C^{n-1} b),

so

A^T = F^* diag(sqrt(n) F b) F,
and

A = F Lambda_2 F^*,  Lambda_2 = diag(sqrt(n) F b).

Since A is real, we have conj(A) = A, and by this the second statement is proved.

Multiplication by F is essentially a discrete Fourier transform, which is usually implemented using the Fast Fourier Transform (FFT). In our comments on algorithms we will use the notation fft(a) = sqrt(n) F a and ifft(a) = (1/sqrt(n)) F^* a for the FFT and its inverse. Both operations can be performed in O(n log n) operations, see e.g. [].

If Ax = y, then by (4.6)

y = ifft(fft(x) .* fft(a)),  x = ifft(fft(y) ./ fft(a)),

and, by (4.7),

y = fft(ifft(x) .* fft(b)),  x = fft(ifft(y) ./ fft(b)).

It follows that matrix-vector multiplication and solving a linear system with a circulant coefficient matrix can be performed using (4.6) and (4.7) in O(n log n) operations.

4.2 Tensors Circulant With Respect to Two Modes

From Section 4.1 we see that the key property of a circulant matrix that allows it to be diagonalized using the discrete Fourier transform is that its columns (rows) can be written in terms of powers of the shift matrix C, see (4.3), times the first column (row). In this subsection we will consider tensors whose slices are circulant with respect to a pair of modes. Then, naturally, it follows that the tensor can be expressed in terms of powers of C, which, in turn, makes it possible to diagonalize the tensor using the discrete Fourier transform.

Consider an order-3 tensor A in R^(n x n x n3), where for every k the slice A(:, :, k) is circulant, i.e.,

A(i, j, k) = A(i', j', k) if i - j == i' - j' (mod n),

or equivalently,

A(i, j, :) = A(i', j', :) if i - j == i' - j' (mod n).

Thus A is circulant with respect to the first and second modes, and we define A to be {1, 2}-circulant.

Definition 4.2. A in R^(I1 x ... x IN) is called {l, k}-circulant if I_l = I_k = n and

A(:, ..., :, i_l, ..., i_k, :, ..., :) = A(:, ..., :, i'_l, ..., i'_k, :, ..., :) if i_l - i_k == i'_l - i'_k (mod n).
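The FFT-based product and solve above can be exercised numerically. The following sketch (not from the paper) uses numpy's unnormalized fft convention rather than the scaled fft of the text; a is the first column of the circulant matrix:

```python
import numpy as np

# FFT-based matrix-vector product and solve with a circulant matrix.
n = 8
rng = np.random.default_rng(1)
a = rng.standard_normal(n)
A = np.column_stack([np.roll(a, j) for j in range(n)])   # circulant, column a

x = rng.standard_normal(n)
y = A @ x
# matrix-vector product in O(n log n)
assert np.allclose(y, np.fft.ifft(np.fft.fft(x) * np.fft.fft(a)).real)
# solving A x = y in O(n log n), assuming fft(a) has no zero entries
x_rec = np.fft.ifft(np.fft.fft(y) / np.fft.fft(a)).real
assert np.allclose(x_rec, x)
```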
Using (4.3), every column of A(:, :, k) can be constructed from A(:, 1, k): for every j = 1, ..., n,

A(:, j, k) = C^{j-1} A(:, 1, k),

so the corresponding relation holds also for the slices,

A(:, j, :) = (C^{j-1})_1 A(:, 1, :). (4.10)

This shows that every slice A(:, j, :) is obtained by j - 1 cyclic shifts of A(:, 1, :) in the mode-1 direction, see Figure 5.

Figure 5: Cyclic shifts of columns and slices: A(:, 4, 1) = C^3 A(:, 1, 1) and A(:, 4, :) = (C^3)_1 A(:, 1, :).

Considering shifts of slices, it is straightforward to obtain the following relations, the general version of (4.10).

Lemma 4.3. If A in R^(I1 x ... x IN) is {l, k}-circulant, then for every i_k <= I_k we have

A(:, ..., :, ..., i_k, ..., :) = (C^{i_k - 1})_l A(:, ..., :, ..., 1, ..., :), (4.11)

and, for every i_l <= I_l,

A(:, ..., i_l, ..., :, ..., :) = (C^{i_l - 1})_k A(:, ..., 1, ..., :, ..., :), (4.12)

where the index 1 is in the k-th and l-th mode of A in the first and second equations, respectively.

Proof. For simplicity, and without loss of generality, we assume that l = 1, k = 2. For fixed i_3, i_4, ..., i_N, each slice A(:, :, i_3, i_4, ..., i_N) is circulant. The lemma now follows immediately from (4.3) and (4.4).

Note the analogy between the above result and Proposition 3.5 for an {l, k}-diagonal tensor: the k-th mode enters only via multiplication by the shift matrix C.

Example 4.4. Let A in R^(4 x 4 x 3) be {1, 2}-circulant. Then, for every k = 1, 2, 3, A(:, :, k) is circulant, and by (4.11),

A(:, 2, :) = (C)_1 A(:, 1, :),
i.e., A(:, 2, :) is a cyclic shift of A(:, 1, :) in mode 1. In the same way, by (4.12),

A(2, :, :) = (C)_2 A(1, :, :),

i.e., A(2, :, :) is a cyclic shift of A(1, :, :) in mode 2. These are generalizations of column and row shifts in circulant matrices.

Now consider an order-4 tensor A that is {1, 2}-circulant, shown in Figure 6.

Figure 6: A {1, 2}-circulant order-4 tensor A.

Every slice A(:, j, :, :) for j = 1, 2, 3 is shown in Figure 7. Here every slice A(:, j, :, :) is obtained by a cyclic shift in the mode-1 direction of the previous slice A(:, j - 1, :, :), and so by j - 1 cyclic shifts in the mode-1 direction of A(:, 1, :, :). This confirms the result of Lemma 4.3, and so

A(:, j, :, :) = (C^{j-1})_1 A(:, 1, :, :).

Figure 7: The slices A(:, j, :, :) of A for j = 1, 2, 3.

If A is {l, k}-circulant, then tensor-matrix or contractive multiplication of the tensor in modes other than l and k does not destroy the circulant property.

Proposition 4.5. Let A in R^(I1 x ... x IN) be {l, k}-circulant. Then for every matrix X and s != l, k, the tensor B = (X)_s A is still {l, k}-circulant.
Proof. The proof is a direct result of the definitions of a circulant tensor and tensor-matrix multiplication.

4.3 Tensors with Circulant Structure: Disjoint Sets of Modes

Next we study the case when A is circulant in disjoint subsets of modes. This type of tensor occurs in image restoration, as we will see in Section 6. The following lemma shows how such a tensor can be written in terms of powers of the shift matrix C.

Lemma 4.6. Let A in R^(I1 x ... x IN) be circulant in two disjoint subsets of modes {l, k} and {p, q}. Then for every i_k, i_q,

A(:, ..., i_k, ..., i_q, ..., :) = (C^{i_k - 1}, C^{i_q - 1})_{l,p} A(:, ..., 1, ..., 1, ..., :). (4.13)

The 1's are in modes k and q of A.

Proof. Without loss of generality, suppose that l = 1, k = 3 and p = 2, q = 4. For every i_3, i_4, consider A(:, :, i_3, i_4, :, ..., :). Since A is {2, 4}-circulant,

A(:, :, i_3, i_4, :, ..., :) = (C^{i_4 - 1})_2 A(:, :, i_3, 1, :, ..., :).

But A(:, :, i_3, 1, :, ..., :) is {1, 3}-circulant, so

A(:, :, i_3, 1, :, ..., :) = (C^{i_3 - 1})_1 A(:, :, 1, 1, :, ..., :).

By these two equations,

A(:, :, i_3, i_4, :, ..., :) = (C^{i_3 - 1}, C^{i_4 - 1})_{1,2} A(:, :, 1, 1, :, ..., :). (4.14)

This proves the lemma.

This lemma shows that A(:, ..., i_k, ..., i_q, ..., :) is obtained by performing i_k - 1 and i_q - 1 cyclic shifts, in the l and p modes respectively, on A(:, ..., 1, ..., 1, ..., :).

Example 4.7. Let A be the {1, 3}, {2, 4}-circulant tensor shown in Figure 8. Here, by Lemma 4.6, all elements of A can be determined by shifts of A(:, :, 1, 1).

Figure 8: A {1, 3}, {2, 4}-circulant tensor A.
For example, A(:, :, 2, 1) is obtained after one cyclic shift of A(:, :, 1, 1) in the mode-1 direction,

A(:, :, 2, 1) = (C)_1 A(:, :, 1, 1).

In the same way, A(:, :, 1, 2) is obtained by a cyclic shift of A(:, :, 1, 1) in the mode-2 direction,

A(:, :, 1, 2) = (C)_2 A(:, :, 1, 1),

and A(:, :, i_3, i_4) is obtained after i_3 - 1 and i_4 - 1 cyclic shifts of A(:, :, 1, 1) in mode 1 and mode 2, respectively,

A(:, :, i_3, i_4) = (C^{i_3 - 1}, C^{i_4 - 1})_{1,2} A(:, :, 1, 1).

The matricization A_(1,2;3,4) of this tensor is a block circulant matrix with circulant blocks (BCCB).

Figure 9 shows another example, where A is {1, 3}, {2, 4}-circulant.

Figure 9: A {1, 3}, {2, 4}-circulant tensor A.

Lemma 4.6 can be generalized to cases when A is circulant with respect to several different disjoint subsets of modes. The following special case occurs in image blurring models.

Corollary 4.8. Let A in R^(I1 x ... x I_{2N}) be such that for every i = 1, ..., N, A is {i, i + N}-circulant. Then for every i_{N+1}, ..., i_{2N},

A(:, ..., :, i_{N+1}, ..., i_{2N}) = (C^{i_{N+1} - 1}, ..., C^{i_{2N} - 1})_{1,...,N} A(:, ..., :, 1, ..., 1). (4.15)

Proof. The proof is straightforward by induction, using Lemma 4.6.
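Lemma 4.6 is easy to verify numerically. The sketch below (numpy, 0-based indices, not from the paper; the array G plays the role of the generator slice A(:, :, 1, 1)) builds a {1, 3}, {2, 4}-circulant order-4 tensor from its generator and checks the circulant structure elementwise:

```python
import numpy as np

# Build A(:, :, i3, i4) by i3 and i4 cyclic shifts of a generator slice G
# along modes 1 and 2 (0-based powers here), then check that the result is
# {1,3}- and {2,4}-circulant in the sense of Definition 4.2.
n = m = 3
rng = np.random.default_rng(2)
G = rng.standard_normal((n, m))                 # generator slice
A = np.empty((n, m, n, m))
for i3 in range(n):
    for i4 in range(m):
        A[:, :, i3, i4] = np.roll(np.roll(G, i3, axis=0), i4, axis=1)

for i1 in range(n):
    for i2 in range(m):
        for i3 in range(n):
            for i4 in range(m):
                # shifting modes 1 and 3 (or 2 and 4) together leaves A unchanged
                assert np.isclose(A[i1, i2, i3, i4],
                                  A[(i1 + 1) % n, i2, (i3 + 1) % n, i4])
                assert np.isclose(A[i1, i2, i3, i4],
                                  A[i1, (i2 + 1) % m, i3, (i4 + 1) % m])
```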
4.4 Tensors with Circulant Structure: Coinciding Modes

We now consider the situation where two or more modes are circulant with respect to the same mode. For simplicity, consider an order-3 tensor A in R^(n x n x n) that is {1, 2}, {1, 3}-circulant. By Lemma 4.3, for every j and k,

A(:, j, :) = (C^{j-1})_1 A(:, 1, :),
A(:, :, k) = (C^{k-1})_1 A(:, :, 1).

These equations show that

A(:, j, k) = (C^{j-1})_1 A(:, 1, k) = (C^{j-1} C^{k-1})_1 A(:, 1, 1),

i.e., A can be constructed from A(:, 1, 1). This proves the following lemma.

Lemma 4.9. Let A in R^(n x n x n) be {1, 2}, {1, 3}-circulant. Then for every j, k,

A(:, j, k) = (C^{j+k-2}) A(:, 1, 1). (4.16)

Now it is natural to investigate the relations between mode 2 and mode 3 when A is {1, 2}, {1, 3}-circulant.

Proposition 4.10. Let A in R^(n x n x n) be {1, 2}, {1, 3}-circulant. Then modes 2 and 3 are related as follows.

a) A(:, j, k) = A(:, j', k') if j + k == j' + k' (mod n). (4.17)

b) For every j and for every k, regarding the slices as matrices,

A(:, j, :) = A(:, 1, :) C^{j-1}, (4.18)

A(:, :, k) = A(:, :, 1) C^{k-1}. (4.19)

Proof. Since j + k = j' + k' + pn, where p is an integer, we get by (4.16)

A(:, j, k) = (C^{j'+k'+pn-2}) A(:, 1, 1) = (C^{j'+k'-2}) A(:, 1, 1) = A(:, j', k').

This proves the first statement. Then, for fixed i, (4.17) gives A(i, j, k) = A(i, j - 1, k + 1), i.e., A(i, j, :) = A(i, j - 1, :) C. By continuing this process, A(i, j, :) = A(i, 1, :) C^{j-1}, so

A(:, j, :) = A(:, 1, :) C^{j-1}.

This proves (4.18). In a similar way, (4.19) can also be proved.
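A small numerical check of Lemma 4.9 and Proposition 4.10 (numpy, 0-based indices, not from the paper):

```python
import numpy as np

# Build a {1,2},{1,3}-circulant tensor from the fibre a = A(:, 1, 1) via
# A(:, j, k) = C^{j+k} a (0-based powers), then check the mode-2/mode-3
# relation of Proposition 4.10 a) and the symmetry of every slice A(i, :, :).
n = 5
rng = np.random.default_rng(3)
a = rng.standard_normal(n)
A = np.empty((n, n, n))
for j in range(n):
    for k in range(n):
        A[:, j, k] = np.roll(a, j + k)          # C^{j+k} a

assert np.allclose(A[:, 1, 3], A[:, 4, 0])      # 1 + 3 == 4 + 0 (mod 5)
for i in range(n):
    assert np.allclose(A[i, :, :], A[i, :, :].T)   # each A(i, :, :) is symmetric
```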
By (4.17), for every i, the slice A(i, :, :) is symmetric. These results can be written for a tensor A in R^(I1 x ... x IN) that is circulant in arbitrary mode pairs {l, k} and {l, q}.

Example 4.11. Let A in R^(3 x 3 x 3) be the {1, 2}, {1, 3}-circulant tensor shown in Figure 10. By Lemma 4.9, for every j and k, A(:, j, k) can be constructed by cyclic shifts of the fibre A(:, 1, 1). For instance,

A(:, 2, 2) = (C^2) A(:, 1, 1).

Figure 10: An order-3 {1, 2}, {1, 3}-circulant tensor A.

By writing out A(i, :, :) for every i, it is easy to see the relations between mode 2 and mode 3: every column of these slices is an upward cyclic shift of the column to its left.

5 {l, k}-Diagonalization of an {l, k}-Circulant Tensor

In this section we show that if a tensor is circulant in some modes, then by using the Fourier transform it can be diagonalized in the corresponding modes. For instance, let A be the {1, 2}-circulant tensor shown in Figure 11.

Figure 11: A {1, 2}-circulant tensor A (left) and the {1, 2}-diagonal tensor Omega = (F, F^*)_{1,2} A (right).

By (4.6), for every k, F A(:, :, k) F^* is {1, 2}-diagonal with diagonal elements
(sqrt(n) F) A(:, 1, k). So, as shown in Figure 11,

Omega = (F, F^*)_{1,2} A

is a {1, 2}-diagonal tensor, and D holds the diagonal elements of Omega, i.e.,

D = (sqrt(n) F) A(:, 1, :), (5.1)

Omega(i, j, :) = delta_ij D(i, :). (5.2)

This shows that A = (F^*, F)_{1,2} Omega, where Omega and D are defined in (5.2) and (5.1), respectively. Now the following theorem shows that every {l, k}-circulant tensor can be diagonalized in the modes {l, k}.

Theorem 5.1. Let A in R^(I1 x ... x IN) be {l, k}-circulant. Then A satisfies

A = (F^*, F)_{l,k} Omega,

where Omega is an {l, k}-diagonal tensor with diagonal elements

D = (sqrt(n) F)_l A(:, ..., :, 1, :, ..., :);

here the 1 is in the k-th mode of A. In particular,

Omega(i-bar) = delta_{i_l i_k} D(i-bar_k),

where the multi-indices i-bar and i-bar_k are defined in (2.1) and (2.2).

Proof. For simplicity and without loss of generality, we assume that l = 1, k = 2. For every fixed i_3, ..., i_N, by (4.6),

(F, F^*)_{1,2} A(:, :, i_3, ..., i_N)

is {1, 2}-diagonal with diagonal elements sqrt(n) F A(:, 1, i_3, ..., i_N). If we define

D = (sqrt(n) F)_1 A(:, 1, :, ..., :),
and for every i_3, ..., i_N set

Omega(:, :, i_3, ..., i_N) = (F, F^*)_{1,2} A(:, :, i_3, ..., i_N),

then Omega = (F, F^*)_{1,2} A is {1, 2}-diagonal, and its diagonal elements are D, i.e.,

Omega(i-bar) = delta_{i_1 i_2} D(i-bar_2).

This diagonalization can be used in fast matrix-tensor and contractive products.

5.1 Diagonalization of a Tensor in Disjoint Circulant Modes

In this subsection we discuss the diagonalization of tensors that are circulant in disjoint subsets of modes. First consider the tensor A in Example 4.7, which is {1, 3}, {2, 4}-circulant. Since A is {2, 4}-circulant, (F, F^*)_{2,4} A is {2, 4}-diagonal. But (F, F^*)_{2,4} A is still {1, 3}-circulant. Thus

Omega = (F, F^*)_{1,3} ( (F, F^*)_{2,4} A ) = (F, F, F^*, F^*)_{1:4} A

is {1, 3}-diagonal. But we know that this multiplication also preserves the {2, 4}-diagonality of (F, F^*)_{2,4} A, i.e., Omega is a {1, 3}, {2, 4}-diagonal tensor. Figure 12 confirms this result and shows that the diagonal elements of Omega are

Figure 12: The {1, 3}, {2, 4}-diagonal tensor Omega.

D = (sqrt(n) F, sqrt(m) F)_{1,2} A(:, :, 1, 1),
20 and For example Ω(i, i, i, i 4 ) = δ i i δ i i 4 D(i, i ). Ω(,,, ) = D(, ) =. 9.i. In general if a tensor is circulant in different disjoint modes, then it can be diagonalized in the corresponding modes. Theorem.. Let A R I I N be circulant in two disjoint subset of modes {l, k}and {p, q}. Then A satisfies A = (F, F, F, F) l,p,k,q Ω where Ω is a {l, k}, {p, q}-diagonal tensor with diagonal elements D = ( nf, mf ) A(:,...,,...,,...,:); l,p here we denote n = I l = I k, m = I p = I q, and the s are in the k th and q th modes of A. Further, Ω(ī) = δ il i k δ ipi q D(ī k,q ), where ī is defined in (.) and ī k,q = (i,...,i k, i k+,...,i q, i q+,...,i N ). Proof. Without loss of generality we suppose that A is {, },{, 4}-circulant, i.e., l =, k =, p =, q = 4, and N=4. Since A is {, 4}-circulant we have where Ω is {, 4}-diagonal, A = (F, F),4 Ω (.) Ω(i, i, i, i 4 ) = δ i i 4 D(i, i, i ), (.4) and D = ( mf ) A(:, :, :, ). (.) By Proposition 4. and Theorem., D is {, }-circulant and satisfying D = (F, F), Ω, (.6) where Ω, is a {, }-diagonal tensor Ω(i, i, i ) = δ i i D(i, i ), (.7) and D = ( nf ) D(:, :, ). (.8) From (.), D(:, :, ) = ( mf) A(:, :,, ). So substituting D(:, :, ) in (.8), gives D = ( nf, mf ) A(:, :,, ). (.9), If we define Ω = (F, F ), Ω, (.) 9
then

Omega(:, i_2, :, i_4) = (F, F^*)_{1,3} Omega_1(:, i_2, :, i_4) = delta_{i_2 i_4} (F, F^*)_{1,3} D_1(:, i_2, :) = delta_{i_2 i_4} Omega_2(:, i_2, :),

where the last two equalities come from (5.4) and (5.6), respectively. So, by this equation and (5.7),

Omega(i_1, i_2, i_3, i_4) = delta_{i_2 i_4} Omega_2(i_1, i_2, i_3) = delta_{i_1 i_3} delta_{i_2 i_4} D(i_1, i_2).

This shows that Omega is a {1, 3}, {2, 4}-diagonal tensor with diagonal elements D. Then, putting Omega from (5.10) in (5.3),

A = (F^*, F^*, F, F)_{1:4} Omega, (5.11)

and the theorem is proved.

Now we consider a special situation that is a generalization of the 2-D algorithm in [6, Chapter 4], see also Section 6.

Corollary 5.3. Let A in R^(I1 x ... x I_{2N}) be such that for every i = 1, ..., N, A is {i, i + N}-circulant. Then A can be diagonalized as

A = (F^*, ..., F^*, F, ..., F)_{1,...,N,N+1,...,2N} Omega, (5.12)

where Omega is a {1, N + 1}, ..., {N, 2N}-diagonal tensor with diagonal elements

D = (sqrt(I_1) F, ..., sqrt(I_N) F)_{1:N} A(:, ..., :, 1, 1, ..., 1).

For every i_1, ..., i_{2N},

Omega(i_1, ..., i_{2N}) = ( prod_{s=1}^{N} delta_{i_s i_{s+N}} ) D(i_1, ..., i_N).

Proof. The proof is straightforward by induction, using Theorem 5.2.

By a straightforward generalization of the procedure described in Section 6 one can see that a tensor of the structure mentioned in Corollary 5.3 occurs in N-dimensional image blurring with periodic boundary conditions. We next show that a linear equation involving such a tensor can be solved cheaply.

Corollary 5.4. Let A be a tensor satisfying the conditions of Corollary 5.3, and let X be in R^(I1 x ... x IN). The linear system of equations

Y = <A, X>_{1:N;1:N} (5.13)

is equivalent to

Y-bar = D .* X-bar, (5.14)

where Y-bar = (F^*, ..., F^*)_{1:N} Y, X-bar = (F^*, ..., F^*)_{1:N} X and

D = (sqrt(I_1) F, ..., sqrt(I_N) F)_{1:N} A(:, ..., :, 1, ..., 1).
Proof. By (5.12), the linear system (5.13) can be written

Y = <(F^*, ..., F^*, F, ..., F)_{1:2N} Omega, X>_{1:N;1:N}
  = (F, ..., F)_{1:N} <(F^*, ..., F^*)_{1:N} Omega, X>_{1:N;1:N}
  = (F, ..., F)_{1:N} <Omega, (F^*, ..., F^*)_{1:N} X>_{1:N;1:N},

where the last two equalities are obtained using Proposition 2.1. By multiplying the result in modes 1 to N by F^*, we get

(F^*, ..., F^*)_{1:N} Y = <Omega, (F^*, ..., F^*)_{1:N} X>_{1:N;1:N}.

Now, if we define Y-bar = (F^*, ..., F^*)_{1:N} Y and X-bar = (F^*, ..., F^*)_{1:N} X, then

Y-bar = <Omega, X-bar>_{1:N;1:N}.

Since Omega is {i, i + N}-diagonal for every i = 1, ..., N, it is straightforward to show that this equation is equal to

Y-bar = D .* X-bar,

and thus it can be solved by elementwise division, provided that all elements of D are nonzero. The solution X is then obtained by Fourier transforms.

Let fftn(X) denote the N-dimensional Fourier transform

(sqrt(I_1) F, ..., sqrt(I_N) F)_{1:N} X,

and let ifftn(X) denote the inverse transform

((1/sqrt(I_1)) F^*, ..., (1/sqrt(I_N)) F^*)_{1:N} X.

By (5.14), if we set P = A(:, ..., :, 1, ..., 1), then

Y = fftn(fftn(P) .* ifftn(X)).

If all elements of D are nonzero, we have X-bar = Y-bar ./ D, and the solution of (5.13) can be written as

X = fftn(ifftn(Y) ./ fftn(P)).

5.2 Diagonalization of a Tensor with Coinciding Circulant Modes

Consider the {1, 2}, {1, 3}-circulant tensor A in R^(3 x 3 x 3) from Example 4.11. We compute Omega as

Omega = (F, F^*, F^*)_{1:3} A.

Figure 13 shows that Omega is a {1, 2, 3}-diagonal tensor. On the other hand,

D = (3F) A(:, 1, 1)

contains the diagonal elements of Omega. This confirms the following theorem.
[Figure 3: the {1,2,3}-diagonal tensor Ω = (F, F̄, F̄)_{1:3} A, where A is {1,2}, {1,3}-circulant.]

Theorem 5.5. Let A ∈ R^{I1×···×IN} be {1,2} and {1,3}-circulant. Then A can be diagonalized as

A = (F̄, F, F)_{1:3} Ω,

where Ω is a {1,2,3}-diagonal tensor and its diagonal elements are

D = (nF) A(:, 1, 1, :, ..., :),

so that, for every i1, ..., iN,

Ω(i1, ..., iN) = δ_{i1 i2 i3} D(i1, i4, ..., iN).

Proof. A is {1,3}-circulant, so

Ω1 = (F, F̄)_{1,3} A,

where Ω1 is {1,3}-diagonal, and

Ω1(i1, i2, i3, i4, ..., iN) = δ_{i1 i3} D1(i1, i2, i4, ..., iN),
D1 = (√n F)_{1} A(:, :, 1, :, ..., :).

Define Ω = (F̄)_{2} Ω1 and D2 = (F̄)_{2} D1. Then, by the three equations above,

Ω = (F, F̄, F̄)_{1:3} A,
Ω(i1, i2, i3, i4, ..., iN) = δ_{i1 i3} D2(i1, i2, i4, ..., iN),
D2 = (F, F̄)_{1,2} ( √n A(:, :, 1, :, ..., :) ).

By the last equation, D2 is {1,2}-diagonal, because √n A(:, :, 1, :, ..., :) is {1,2}-circulant. So the diagonal elements of D2 are given by

D = (√n F)_{1} ( √n A(:, 1, 1, :, ..., :) ) = (nF) A(:, 1, 1, :, ..., :),

and

D2(i1, i2, i4, ..., iN) = δ_{i1 i2} D(i1, i4, ..., iN).

Now, by this equation and the expression for Ω above,

Ω(i1, i2, i3, i4, ..., iN) = δ_{i1 i3} D2(i1, i2, i4, ..., iN) = δ_{i1 i3} δ_{i1 i2} D(i1, i4, ..., iN) = δ_{i1 i2 i3} D(i1, i4, ..., iN),

which shows that Ω is {1,2,3}-diagonal and that its diagonal elements are in D = (nF) A(:, 1, 1, :, ..., :). By this fact and the expression Ω = (F, F̄, F̄)_{1:3} A, the proof is finished.
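The matrix case underlying these diagonalization results can be checked numerically. The following standalone Python sketch (the helper functions are ours, for illustration only; F is the unitary DFT matrix, which is symmetric, so its inverse is its elementwise conjugate) verifies that transforming a circulant matrix C by F in one mode and F̄ in the other produces a diagonal matrix:

```python
import cmath

def dft_matrix(n):
    # Unitary DFT matrix F with F[j][k] = exp(-2*pi*i*j*k/n) / sqrt(n).
    return [[cmath.exp(-2j * cmath.pi * j * k / n) / cmath.sqrt(n)
             for k in range(n)] for j in range(n)]

def circulant(c):
    # Circulant matrix whose first column is c; column j is c shifted down j steps.
    n = len(c)
    return [[c[(i - j) % n] for j in range(n)] for i in range(n)]

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def conj(A):
    return [[z.conjugate() for z in row] for row in A]

n = 4
C = circulant([1.0, 2.0, 0.0, -1.0])
F = dft_matrix(n)
# (F, conj(F))_{1,2} C  =  F C conj(F)^T  =  F C F^{-1}, a diagonal matrix.
Omega = matmul(matmul(F, C), conj(F))
off = max(abs(Omega[i][j]) for i in range(n) for j in range(n) if i != j)
print(off < 1e-12)  # True: all off-diagonal entries vanish up to rounding
```

The diagonal of Omega is the DFT of the first column of C, which is the matrix analogue of the formulas for D above.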
24 Corollary.6. Let A R I I N be {, i}-circulant for i =,,...,N. Then A can be diagonalized A = (F, F,...,F) :N Ω, (.) where Ω is totally diagonal with diagonal elements ( ) d = n (N )/ F A(:,,,...,), and Ω(i,...,i N ) = δ i...i N d(i ). Proof. By induction and using Theorem. the proof is straightforward. 6 Application in Image Blurring Models We consider the image deblurring problem with space invariant point spread function. The mathematical model is the following convolution equation p(s t)x(t)dt = y(s), Ω where the kernel p is called the point spread function (PSF), which often in applications has compact support. In the discrete version, pixels of the blurred image are obtained from a weighted sum of the corresponding pixel and its neighbors in the true image, where elements of the PSF array act as weights. In particular, in the one dimensional case, if the vectors y, x and p, are the blurred image, true image and PSF array, respectively, then discrete convolution can be summarized [6, Chapter 4]: Let p = Jp, where J is the reversal matrix, i.e, it reverses the ordering of elements of p. For computing the i th pixel of blurred image y, put the center in the rotated PSF array p on the i th pixel of true image x, and compute the contractive multiplication of the corresponding arrays. Convolution in higher dimensions is analogous, but the rotation with J must be done in all modes. For example in three dimensions, where the PSF array is P R n n n, the rotated PSF array is P = (J, J, J) : P. In this process, the blurred image is not only affected by the corresponding finite size true image, but it also depends on values of pixels on the boundaries of the true image. In order to apply the PSF at a point close to the boundary, we must impose boundary conditions, i.e. we must continue the image artificially outside its boundary, e.g. by using zero, periodic, reflective and anti-reflective boundary conditions [6, 9, ]. 
In this brief description we ignore the ill-posed nature of the image deblurring problem.

6.1 Periodic Boundary Conditions

One of the most common ways of imposing boundary conditions is to continue the image periodically outside the domain. The most important advantage of this type of boundary condition is that the linear system has circulant structure, which makes it possible to solve it using the FFT. We now consider, in some detail, the 1-D and 2-D cases. Then we will see that the 3-D case and higher are simple generalizations, in the sense that one only increases the number of modes of the corresponding tensors. The algebra of solving a linear system with this circulant structure is the same independently of the number of modes.
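The circulant structure just claimed can be illustrated directly: stacking cyclically shifted copies of a periodic blur kernel as the rows of the system matrix produces a matrix whose (i, j) entry depends only on (j − i) mod n. A small sketch (the kernel values and helper names are ours, chosen for illustration):

```python
def cyclic_shift(v, s):
    # C^s v: rotate v by s positions (C is the cyclic shift matrix).
    n = len(v)
    return [v[(i - s) % n] for i in range(n)]

p_hat = [0.5, 0.25, 0.0, 0.0, 0.25]   # a periodic blur kernel; each row sums to 1
n = len(p_hat)
# Row i of the blurring matrix is the shifted kernel.
A = [cyclic_shift(p_hat, i) for i in range(n)]
# A is circulant: entry (i, j) depends only on (j - i) mod n.
print(all(A[i][j] == A[0][(j - i) % n] for i in range(n) for j in range(n)))  # True
```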
6.1.1 1-D

Consider one-dimensional image blurring with periodic boundary conditions. Let p ∈ R^n be the PSF array with center located at the l-th entry. Then the blurred image y ∈ R^n is obtained from the true image x ∈ R^n as

y_i = ⟨C^{i−l} Jp, x⟩, i = 1, ..., n.

[Figure 4: One-dimensional discrete convolution with periodic boundary conditions.]

For simplicity, let p̂ = C^{1−l} Jp, so that

y_i = ⟨C^{i−1} p̂, x⟩, i = 1, ..., n.

This can be written as the matrix-vector equation

y = Ax, (6.1)

where b_i^T = (C^{i−1} p̂)^T is the i-th row of A. Thus, A is a circulant matrix, and by using (4.7), the linear system of equations (6.1) can be solved in O(n log n) operations:

x = fft(ifft(y) ./ fft(p̂)).

In MATLAB, p̂ is computed as

p̂ = circshift(p(n:-1:1), l).

6.1.2 2-D

The process in two dimensions is analogous. Assume that all boundaries are periodic, and let P ∈ R^{n×n} be the PSF array, where p_{l,k} is its center. Put the center p̄_{l,k} of the rotated PSF array over x_{11}, and compute the contracted product of X and P̄ in that position, giving y_{11}. Then, by moving P̄ i−1 steps down and j−1 steps to the right and computing the contractive product of P̄ and X, one obtains the pixel y_{ij} of the blurred image Y. This procedure can be written

y_{ij} = ⟨C^{i−1} P̄ C^{(j−1)T}, X⟩_{1,2;1,2},

where P̄ = C^{1−l} J P J C^{(1−k)T}. We can write this as a linear tensor-matrix transformation

Y = ⟨A, X⟩_{1:2;1:2},   A(:, :, i, j) = C^{i−1} P̄ C^{(j−1)T}. (6.2)
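Returning to the 1-D case above, the solution formula x = fft(ifft(y)./fft(p̂)) can be checked with naive O(n²) transforms in place of the FFT. In this Python sketch the kernel and image values are arbitrary test data, and `dft`/`idft`/`blur` are our own helpers mirroring fft, ifft and the row-wise circulant blurring:

```python
import cmath

def dft(v):
    n = len(v)
    return [sum(v[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def idft(v):
    n = len(v)
    return [sum(v[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def blur(x, p_hat):
    # Row i of the system is the cyclically shifted kernel: y_i = <C^i p_hat, x>.
    n = len(x)
    return [sum(p_hat[(j - i) % n] * x[j] for j in range(n)) for i in range(n)]

p_hat = [0.5, 0.2, 0.0, 0.1, 0.2]
x_true = [1.0, 4.0, 2.0, 0.0, 3.0]
y = blur(x_true, p_hat)

# x = fft(ifft(y)./fft(p_hat)), with the naive transforms standing in for the FFT.
q = dft(p_hat)
x_rec = dft([a / b for a, b in zip(idft(y), q)])
print(max(abs(a - b) for a, b in zip(x_rec, x_true)) < 1e-9)  # True: x is recovered
```

The element-wise division assumes, as in Corollary 5.4, that all DFT coefficients of the kernel are nonzero.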
[Figure 5: Two-dimensional convolution with periodic boundary conditions.]

From Lemma 4.6 we see that A is a {1,3} and {2,4}-circulant tensor, so it can be diagonalized by Fourier matrices. Since A(:, :, 1, 1) = P̄, by Corollary 5.4 we have

Ŷ = P̃ .* X̂,

where Ŷ = F̄ Y F̄, X̂ = F̄ X F̄, and

P̃ = (√n F, √n F)_{1,2} A(:, :, 1, 1) = n F P̄ F = fft2(P̄).

Now the image X can be computed by

X = fft2(ifft2(Y) ./ fft2(P̄)).

6.1.3 3-D

The three- and higher-dimensional cases are now handled simply by increasing the number of modes. Let X ∈ R^{n1×n2×n3} and Y ∈ R^{n1×n2×n3} be the true and blurred image, respectively, and let P be the PSF array with center at P(l1, l2, l3). The rotated PSF is given by

P̄ = ( C^{1−l1} J, C^{1−l2} J, C^{1−l3} J )_{1:3} P.

Now the relation between the true and blurred image can be written as a tensor-tensor linear system

Y = ⟨A, X⟩_{1:3;1:3},   A(:, :, :, i, j, k) = ( C^{i−1}, C^{j−1}, C^{k−1} )_{1:3} P̄, (6.3)

where A is a {1,4}, {2,5} and {3,6}-circulant tensor. By Corollary 5.4, this linear system is equivalent to

Ŷ = P̃ .* X̂,
where Ŷ = (F̄, F̄, F̄)_{1:3} Y, X̂ = (F̄, F̄, F̄)_{1:3} X, and

P̃ = (√n1 F, √n2 F, √n3 F)_{1:3} A(:, :, :, 1, 1, 1) = (√n1 F, √n2 F, √n3 F)_{1:3} P̄.

So it is straightforward to show that (6.3) is solved by

Y = fftn(fftn(P̄) .* ifftn(X)), (6.4)
X = fftn(ifftn(Y) ./ fftn(P̄)). (6.5)

7 Conclusions

In this paper we introduce a framework for handling tensors with diagonal and circulant structure. We show that every tensor that is circulant with respect to a pair of modes can be diagonalized, by the discrete Fourier transform, with respect to those modes. This means that linear systems with circulant structure, which occur for instance in image deblurring in N dimensions, can be solved efficiently using N-dimensional Fourier transforms. This is of course well known. On the other hand, the derivation of these properties of the linear systems has previously been based on complicated mappings of tensor data and tensor operators onto vectors and matrices. In our framework no such mappings are needed, and the blurring process can be described using notation that is natural in the application. In addition, the generalization to higher dimensions is straightforward in the new framework.

The tensor framework can also be used in connection with preconditioners with circulant structure. We are presently studying how other problems involving structured matrices can be generalized to tensors in a similar way.

8 Acknowledgement

We are indebted to two anonymous referees, whose suggestions led to improvements of the paper.

References

[1] O. Aberth. The transformation of tensors to diagonal form. SIAM J. Appl. Math., 1967.

[2] R. Badeau and R. Boyer. Fast multilinear singular value decomposition for structured tensors. SIAM J. Matrix Anal. Appl., 30(3):1008–1021, 2008.

[3] B. W. Bader and T. G. Kolda. Algorithm 862: MATLAB tensor classes for fast algorithm prototyping. ACM Transactions on Mathematical Software, 32(4):635–653, 2006.

[4] B. W. Bader and T. G. Kolda. Efficient MATLAB computations with sparse and factored tensors. SIAM Journal on Scientific Computing, 30(1):205–231, 2007.
[5] S. Serra-Capizzano. A note on antireflective boundary conditions and fast deblurring models. SIAM J. Sci. Comput., 25(4):1307–1325, 2003.
[6] R. H. Chan and G. G. Strang. Toeplitz equations by conjugate gradients with circulant preconditioner. SIAM J. Sci. Stat. Comput., 10:104–119, 1989.

[7] T. F. Chan. An optimal circulant preconditioner for Toeplitz systems. SIAM J. Sci. Stat. Comput., 9:766–771, 1988.

[8] P. J. Davis. Circulant Matrices. Wiley Interscience, 2nd edition, 1994.

[9] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl., 21:1253–1278, 2000.

[10] L. De Lathauwer, B. De Moor, and J. Vandewalle. Independent component analysis and (simultaneous) third-order tensor diagonalization. IEEE Transactions on Signal Processing, 49:2262–2271, 2001.

[11] V. de Silva and L.-H. Lim. Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM Journal on Matrix Analysis and Applications, 30(3):1084–1127, 2008.

[12] L. Eldén and B. Savas. A Newton–Grassmann method for computing the best multilinear rank-(r1, r2, r3) approximation of a tensor. SIAM J. Matrix Anal. Appl., 31:248–271, 2009.

[13] G. H. Golub and C. F. Van Loan. Matrix Computations. 3rd ed. Johns Hopkins Press, Baltimore, MD, 1996.

[14] R. Gonzalez and R. Woods. Digital Image Processing. Addison-Wesley, Reading, MA, 1992.

[15] M. Hanke and J. Nagy. Restoration of atmospherically blurred images by symmetric indefinite conjugate gradient techniques. Inverse Problems, 12:157–173, 1996.

[16] P. C. Hansen, J. G. Nagy, and D. P. O'Leary. Deblurring Images: Matrices, Spectra, and Filtering. SIAM, 2006.

[17] L. Hemmingsson. A semi-circulant preconditioner for the convection-diffusion equation. Numer. Math., 81:211–248, 1998.

[18] S. Kobayashi and K. Nomizu. Foundations of Differential Geometry. Interscience Publishers, 1963.

[19] M. K. Ng, R. H. Chan, and W. Tang. A fast algorithm for deblurring models with Neumann boundary conditions. SIAM J. Sci. Comput., 21:851–866, 1999.

[20] K. Otto. A unifying framework for preconditioners based on fast transforms. Technical Report 87, Department of Scientific Computing, Uppsala University, Uppsala, Sweden, 1996.
[21] G. Strang. A proposal for Toeplitz matrix calculations. Stud. Appl. Math., 74:171–176, 1986.

[22] C. F. Van Loan. Computational Frameworks for the Fast Fourier Transform. SIAM, Philadelphia, 1992.
[23] M. A. O. Vasilescu and D. Terzopoulos. Multilinear analysis of image ensembles: TensorFaces. In Proc. 7th European Conference on Computer Vision (ECCV 2002), Lecture Notes in Computer Science, Vol. 2350, pages 447–460, Copenhagen, Denmark, 2002. Springer Verlag.
Linear Algebra and its Applications 435 (2011) 422–447.
Applied Mathematics Letters 25 (202) 2339 2343 Contents lists available at SciVerse ScienceDirect Applied Mathematics Letters journal homepage: www.elsevier.com/locate/aml Comparison theorems for a subclass
More informationA Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem
A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Suares Problem Hongguo Xu Dedicated to Professor Erxiong Jiang on the occasion of his 7th birthday. Abstract We present
More informationMatrix functions that preserve the strong Perron- Frobenius property
Electronic Journal of Linear Algebra Volume 30 Volume 30 (2015) Article 18 2015 Matrix functions that preserve the strong Perron- Frobenius property Pietro Paparella University of Washington, pietrop@uw.edu
More informationAn iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB =C
Journal of Computational and Applied Mathematics 1 008) 31 44 www.elsevier.com/locate/cam An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation
More informationOn the eigenvalues of specially low-rank perturbed matrices
On the eigenvalues of specially low-rank perturbed matrices Yunkai Zhou April 12, 2011 Abstract We study the eigenvalues of a matrix A perturbed by a few special low-rank matrices. The perturbation is
More informationSpectrally arbitrary star sign patterns
Linear Algebra and its Applications 400 (2005) 99 119 wwwelseviercom/locate/laa Spectrally arbitrary star sign patterns G MacGillivray, RM Tifenbach, P van den Driessche Department of Mathematics and Statistics,
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course
More informationRepresentations of algebraic groups and their Lie algebras Jens Carsten Jantzen Lecture III
Representations of algebraic groups and their Lie algebras Jens Carsten Jantzen Lecture III Lie algebras. Let K be again an algebraically closed field. For the moment let G be an arbitrary algebraic group
More informationThe Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix
The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix Chun-Yueh Chiang Center for General Education, National Formosa University, Huwei 632, Taiwan. Matthew M. Lin 2, Department of
More informationOPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY
published in IMA Journal of Numerical Analysis (IMAJNA), Vol. 23, 1-9, 23. OPTIMAL SCALING FOR P -NORMS AND COMPONENTWISE DISTANCE TO SINGULARITY SIEGFRIED M. RUMP Abstract. In this note we give lower
More informationProperties of Matrices and Operations on Matrices
Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,
More informationA fast randomized algorithm for overdetermined linear least-squares regression
A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm
More informationThe inverse of a tridiagonal matrix
Linear Algebra and its Applications 325 (2001) 109 139 www.elsevier.com/locate/laa The inverse of a tridiagonal matrix Ranjan K. Mallik Department of Electrical Engineering, Indian Institute of Technology,
More informationTensor Decompositions and Applications
Tamara G. Kolda and Brett W. Bader Part I September 22, 2015 What is tensor? A N-th order tensor is an element of the tensor product of N vector spaces, each of which has its own coordinate system. a =
More informationConstructing c-ary Perfect Factors
Constructing c-ary Perfect Factors Chris J. Mitchell Computer Science Department Royal Holloway University of London Egham Hill Egham Surrey TW20 0EX England. Tel.: +44 784 443423 Fax: +44 784 443420 Email:
More informationA Divide-and-Conquer Algorithm for Functions of Triangular Matrices
A Divide-and-Conquer Algorithm for Functions of Triangular Matrices Ç. K. Koç Electrical & Computer Engineering Oregon State University Corvallis, Oregon 97331 Technical Report, June 1996 Abstract We propose
More informationarxiv: v1 [math.na] 1 Sep 2018
On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing
More informationRanks of Hadamard Matrices and Equivalence of Sylvester Hadamard and Pseudo-Noise Matrices
Operator Theory: Advances and Applications, Vol 1, 1 13 c 27 Birkhäuser Verlag Basel/Switzerland Ranks of Hadamard Matrices and Equivalence of Sylvester Hadamard and Pseudo-Noise Matrices Tom Bella, Vadim
More information