Block Diagonal Majorization on c_0

Iranian Journal of Mathematical Sciences and Informatics, Vol. 8, No. 2 (2013), pp. 131-136

Block Diagonal Majorization on c_0

A. Armandnejad and F. Passandi
Department of Mathematics, Vali-e-Asr University of Rafsanjan, 7713936417, Rafsanjan, Iran
E-mail: armandnejad@mail.vru.ac.ir
E-mail: passandi 91@yahoo.com

Abstract. Let c_0 be the real vector space of all real sequences which converge to zero. For every x, y ∈ c_0, we say that y is block diagonal majorized by x (written y ≺_b x) if there exists a block diagonal row stochastic matrix R such that y = Rx. In this paper we find the possible structure of linear functions T : c_0 → c_0 preserving ≺_b.

Keywords: Block diagonal matrices, Majorization, Stochastic matrices, Linear preservers.

2010 Mathematics Subject Classification: 15A86, 15B51.

Received 19 November 2011; Accepted 9 August 2012.
© 2013 Academic Center for Education, Culture and Research TMU.

1. Introduction

Let V and W be two linear spaces and let ≺ be a relation on both V and W. A linear function T : V → W is said to be a linear preserver (respectively, a strong linear preserver) of ≺ if for every x, y ∈ V, Tx ≺ Ty whenever x ≺ y (respectively, Tx ≺ Ty if and only if x ≺ y). The topic of linear preservers is of interest to a large group of matrix theorists; see [8] for a survey of linear preserver problems. In this paper we denote by M_n, ℝ_m and ℝ^n the sets of all n×n, 1×m and n×1 real matrices, respectively. Recall that a matrix R ∈ M_n is row stochastic if all its entries are nonnegative and Re = e, where e = (1, ..., 1)^t ∈ ℝ^n. For vectors x, y ∈ ℝ^n, we say that x is left matrix majorized by y (respectively, x^t is right matrix majorized by y^t) and write x ≺_ℓ y (respectively, x^t ≺_r y^t) if x = Ry (respectively, x^t = y^t R) for some row stochastic matrix R ∈ M_n.
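The definitions above are easy to experiment with numerically. The following Python sketch (an illustration of ours, not part of the paper; the helper name random_row_stochastic is hypothetical) builds a random row stochastic matrix, checks the defining property Re = e, and verifies the min/max consequence of left matrix majorization recalled in the next paragraph.

```python
import numpy as np

def random_row_stochastic(n, seed=0):
    """An n x n matrix with nonnegative entries whose rows sum to 1."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, n))
    return A / A.sum(axis=1, keepdims=True)

n = 5
R = random_row_stochastic(n)
e = np.ones(n)
assert np.allclose(R @ e, e)   # row stochastic: Re = e

# Left matrix majorization: x = Ry for a row stochastic R, so each x_i is a
# convex combination of the entries of y, and min y <= min x <= max x <= max y.
y = np.array([3.0, -1.0, 0.5, 2.0, 0.0])
x = R @ y
assert y.min() <= x.min() <= x.max() <= y.max()
```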

It is known that, for x, y ∈ ℝ^n, x ≺_ℓ y if and only if min y ≤ min x ≤ max x ≤ max y, where the maximum and the minimum are taken over all components of x and y. A characterization of the linear functions T : ℝ^n → ℝ^p which preserve left matrix majorization ≺_ℓ can be found in [5, 6]. Note that the right and left matrix majorizations are essentially different, and no characterization is known for functions T : ℝ^n → ℝ^p preserving right matrix majorization. There has been a great deal of interest in studying linear maps preserving or strongly preserving special kinds of majorizations on matrix spaces; for more information about types of majorization see [4] and [7], and for their preservers see [1]-[3] and [5]-[6].

Definition 1.1.
(a) Let {n_i}_{i=1}^∞ be a sequence in ℕ and let R_i ∈ M_{n_i} be a row stochastic matrix for every i ∈ ℕ. Then R = R_1 ⊕ R_2 ⊕ ... = ⊕_{i=1}^∞ R_i is called a block diagonal row stochastic matrix.
(b) For every x, y ∈ c_0, we say that y is block diagonal majorized by x (written y ≺_b x) if there exists a block diagonal row stochastic matrix R such that y = Rx. We write x ∼_b y if x ≺_b y and y ≺_b x.

In this paper, we find the possible structure of linear functions T on c_0 preserving block diagonal majorization.

2. Block diagonal majorization

This section studies some properties of the notion of block diagonal majorization and obtains some equivalent conditions for this concept.

Proposition 2.1. Let x, y ∈ c_0. If x ≺_b y, then inf y ≤ inf x ≤ sup x ≤ sup y. Furthermore, if inf x = min x (respectively sup x = max x), then inf y = min y (respectively sup y = max y).

Proof. Assume that x ≺_b y; then there exists a block diagonal row stochastic matrix R = (r_{ij}) such that x = Ry, and so for every i ∈ ℕ, x_i = Σ_{j=1}^∞ r_{ij} y_j, where r_{ij} ≥ 0 and Σ_{j=1}^∞ r_{ij} = 1. It follows that inf y ≤ x_i and hence inf y ≤ inf x. One shows that sup x ≤ sup y by a similar argument. Suppose now that inf x = min x. We consider only the case x, y ≥ 0. Since x ∈ c_0, there exists an index i ∈ ℕ such that inf x = min x = x_i = 0. On the other hand, x_i = Σ_{j=1}^∞ r_{ij} y_j with r_{ij} ≥ 0 and Σ_{j=1}^∞ r_{ij} = 1, so y_k = 0 for some k ∈ ℕ, and hence inf y = min y.

For x, y ∈ ℝ^n one can easily show that x ≺_ℓ y if and only if min y ≤ min x ≤ max x ≤ max y, but the following example shows that this is not true for ≺_b on c_0 (the converse of Proposition 2.1 does not hold).

Example 2.2. Let x = (0, 1/2, 1/3, ...)^t and y = (1/2, 1/3, ...)^t. Then x, y ∈ c_0 and inf y ≤ inf x ≤ sup x ≤ sup y, but it is easy to verify that x is not block diagonal majorized by y, i.e. x ⊀_b y.

The following proposition gives two equivalent conditions for ≺_b on c_0.
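As a concrete companion to Definition 1.1 and Proposition 2.1, the sketch below (ours, working on a finite truncation of a c_0 sequence; the block sizes and the helper name are arbitrary choices) applies a block diagonal row stochastic matrix block by block and checks the resulting inf/sup inequalities on the truncation.

```python
import numpy as np

def row_stochastic(n, rng):
    """A random n x n matrix with nonnegative entries and unit row sums."""
    A = rng.random((n, n))
    return A / A.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)

# First 9 terms of x = (1, 1/2, 1/3, ...) in c_0, split into blocks of sizes 2, 3, 4.
x = np.array([1.0 / i for i in range(1, 10)])
block_sizes = [2, 3, 4]

# y = Rx, where R = R_1 (+) R_2 (+) R_3 acts on each block separately.
y, start = [], 0
for n_i in block_sizes:
    R_i = row_stochastic(n_i, rng)
    y.extend(R_i @ x[start:start + n_i])
    start += n_i
y = np.array(y)

# y is block diagonal majorized by x, so Proposition 2.1 (read with y ≺_b x)
# gives inf x <= inf y <= sup y <= sup x; checked here on the truncation.
assert x.min() <= y.min() and y.max() <= x.max()
```

Acting block by block avoids ever forming the full (infinite) matrix; only the finitely many blocks that meet the truncation are needed.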

Proposition 2.3. Let x, y ∈ c_0. Then the following conditions are equivalent.
(i) x ≺_b y.
(ii) There exists a strictly increasing sequence {k_n}_{n=0}^∞ of nonnegative integers with k_0 = 0 such that for every j ≥ 0, (x_{k_j+1}, ..., x_{k_{j+1}})^t ≺_ℓ (y_{k_j+1}, ..., y_{k_{j+1}})^t.
(iii) There exists a strictly increasing sequence {k_n}_{n=0}^∞ of nonnegative integers with k_0 = 0 such that for every j ≥ 0,
min{y_i : k_j < i ≤ k_{j+1}} ≤ min{x_i : k_j < i ≤ k_{j+1}} ≤ max{x_i : k_j < i ≤ k_{j+1}} ≤ max{y_i : k_j < i ≤ k_{j+1}}.

Proof. (i) ⇒ (ii). Let x ≺_b y. Then there exists a block diagonal row stochastic matrix R = ⊕_{i=1}^∞ R_i, with R_i an m_i × m_i row stochastic matrix, such that x = Ry. Put k_0 = 0 and k_n = Σ_{i=1}^n m_i; then (x_{k_j+1}, ..., x_{k_{j+1}})^t = R_{j+1} (y_{k_j+1}, ..., y_{k_{j+1}})^t and hence (x_{k_j+1}, ..., x_{k_{j+1}})^t ≺_ℓ (y_{k_j+1}, ..., y_{k_{j+1}})^t.
(ii) ⇒ (i). Suppose there exists such a sequence {k_n}_{n=0}^∞ with k_0 = 0 and (x_{k_j+1}, ..., x_{k_{j+1}})^t ≺_ℓ (y_{k_j+1}, ..., y_{k_{j+1}})^t for every j ≥ 0. Then for every j ≥ 0 there exists a row stochastic matrix R_{j+1} ∈ M_{k_{j+1}-k_j} such that (x_{k_j+1}, ..., x_{k_{j+1}})^t = R_{j+1} (y_{k_j+1}, ..., y_{k_{j+1}})^t, and hence x = Ry, where R = ⊕_{i=1}^∞ R_i.
(ii) ⇔ (iii) is a direct consequence of the characterization of ≺_ℓ recalled above.

3. Linear preservers of ≺_b

In this section we find the possible structure of linear functions T : c_0 → c_0 which preserve ≺_b. The symbol e_i denotes the sequence (0, ..., 0, 1, 0, ...) in c_0, where the 1 is in the i-th place.

Proposition 3.1. Let T : c_0 → c_0 preserve ≺_b, and set a := sup Te_1 and b := inf Te_1. Then for every i ∈ ℕ, a = sup Te_i = max Te_i ≥ 0 and b = inf Te_i = min Te_i ≤ 0.

Proof. First we show that sup Te_i = sup Te_j and inf Te_i = inf Te_j for every i, j ∈ ℕ. By Proposition 2.3 we have e_i ≺_b e_j and e_j ≺_b e_i for every i, j ∈ ℕ; it follows that Te_i ≺_b Te_j and Te_j ≺_b Te_i. By Proposition 2.1, inf Te_j ≤ inf Te_i ≤ sup Te_i ≤ sup Te_j and inf Te_i ≤ inf Te_j ≤ sup Te_j ≤ sup Te_i. This implies inf Te_i = inf Te_j and sup Te_i = sup Te_j. To complete the proof it is enough to show that sup Te_1 = max Te_1 and inf Te_1 = min Te_1. We consider only the case that Te_1 has nonnegative components, so Te_1 = (t_{11}, t_{21}, ...)^t ≥ 0. Since lim_{i→∞} t_{i1} = 0 and t_{i1} ≥ 0, sup Te_1 = max Te_1. Let k ∈ ℕ be such that t_{k1} = a. By Proposition 2.3, e_1 + e_2 ≺_b e_1 and hence Te_1 + Te_2 ≺_b Te_1. By Proposition 2.1, max(Te_1 + Te_2) ≤ max Te_1, and consequently t_{k1} + t_{k2} ≤ max(Te_1 + Te_2) ≤ max Te_1 = t_{k1}. It follows that t_{k2} ≤ 0; since inf Te_2 = inf Te_1 ≥ 0, in fact t_{k2} = 0 and hence inf Te_2 = min Te_2 = 0. Since e_1 ∼_b e_2, Proposition 2.1 gives inf Te_1 = min Te_1 = inf Te_2 = min Te_2 = 0. It is clear that b ≤ 0 ≤ a, as desired.
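Condition (iii) of Proposition 2.3 is straightforward to test once a candidate partition {k_n} is fixed. The helper below (our own sketch on finite truncations) checks the blockwise min/max inequalities for a prescribed list of block sizes, and reproduces the relation e_1 ∼_b e_2 used in the proof of Proposition 3.1. A return value of True shows that the given partition witnesses ≺_b; False only means this particular partition fails.

```python
import numpy as np

def blockwise_majorized(x, y, block_sizes):
    """Condition (iii) of Proposition 2.3 for one fixed partition:
    on every block, min(y) <= min(x) <= max(x) <= max(y)."""
    start = 0
    for n_i in block_sizes:
        xb, yb = x[start:start + n_i], y[start:start + n_i]
        if not (yb.min() <= xb.min() <= xb.max() <= yb.max()):
            return False
        start += n_i
    return True

# Truncations of e_1 and e_2.
e1 = np.array([1.0, 0.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0, 0.0])

print(blockwise_majorized(e1, e2, [2, 1, 1]))     # True: e_1 ≺_b e_2 via this partition
print(blockwise_majorized(e2, e1, [2, 1, 1]))     # True: e_2 ≺_b e_1 as well, so e_1 ∼_b e_2
print(blockwise_majorized(e1, e2, [1, 1, 1, 1]))  # False: singleton blocks do not witness it
```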

Lemma 3.2. Let T : c_0 → c_0 be a linear preserver of ≺_b, and let a and b be as in Proposition 3.1. If t_{ij} = a (respectively t_{ij} = b) for some i, j ∈ ℕ, then t_{ik} ≤ 0 (respectively t_{ik} ≥ 0) for all k ∈ ℕ\{j}.

Proof. Let t_{ij} = a for some i, j ∈ ℕ. For every k ∈ ℕ\{j}, e_k + e_j ≺_b e_1 and hence Te_k + Te_j ≺_b Te_1. Using Proposition 2.1, max(Te_k + Te_j) ≤ max Te_1. Therefore t_{ik} + a = t_{ik} + t_{ij} ≤ max(Te_k + Te_j) ≤ max Te_1 = a, and consequently t_{ik} ≤ 0. The case t_{ij} = b is handled similarly, using minima in place of maxima.

Note that a linear function T : c_0 → c_0 preserves ≺_b if and only if αT : c_0 → c_0 preserves ≺_b for every nonzero α ∈ ℝ. Let T : c_0 → c_0 be a nonzero linear preserver of ≺_b and let a and b be as in Proposition 3.1. We consider two cases.
Case 1: if −b ≤ a, then T′ := (1/a)T : c_0 → c_0 preserves ≺_b and 0 ≤ −min T′e_i ≤ 1 = max T′e_i.
Case 2: if −b > a, then T′ := (1/b)T : c_0 → c_0 preserves ≺_b and again 0 ≤ −min T′e_i ≤ 1 = max T′e_i.
Consequently, without loss of generality, for every linear function T : c_0 → c_0 preserving ≺_b we may assume that 0 ≤ −b ≤ a = 1.

Definition 3.3. Let T : c_0 → c_0 be a linear preserver of ≺_b, and let a (= 1) and b be as in Proposition 3.1. For every k ∈ ℕ put
I_k := {i ∈ ℕ : t_{ik} = 1} and J_k := {j ∈ ℕ : t_{jk} = b}.

Theorem 3.4. Let T : c_0 → c_0 be a linear preserver of ≺_b, and let 0 ≤ −b ≤ a = 1 be as in Proposition 3.1. Then for every k ∈ ℕ there exist
(i) i_k ∈ I_k such that t_{i_k j} = 0 for every j ≠ k;
(ii) j_k ∈ J_k such that t_{j_k j} = 0 for every j ≠ k.

Proof. Let k ∈ ℕ. For every m ∈ ℕ there exist a large enough N ∈ ℕ and some i_m ∈ I_k such that min T(−N e_k + e_{k+m}) = −N + t_{i_m, k+m}. It is clear that (−N e_k + e_{k+m}) ∼_b (−N e_k + e_{k+m} + e_j) for every j ∈ ℕ\{k, k+m}. Hence T(−N e_k + e_{k+m}) ∼_b T(−N e_k + e_{k+m} + e_j), and therefore
−N + t_{i_m, k+m} = min T(−N e_k + e_{k+m}) ≤ min T(−N e_k + e_{k+m} + e_j) ≤ −N + t_{i_m, k+m} + t_{i_m, j}
(the first inequality by Proposition 2.1, the second because the middle quantity is at most the i_m-th component of T(−N e_k + e_{k+m} + e_j)). Consequently t_{i_m, j} ≥ 0 for every j ∈ ℕ\{k, k+m}. By Lemma 3.2 (applied with t_{i_m, k} = 1 = a), t_{i_m, j} ≤ 0 for every j ≠ k, and hence t_{i_m, j} = 0 for every j ∈ ℕ\{k, k+m}. Since I_k is a finite set (Te_k ∈ c_0, so only finitely many entries of Te_k equal 1), there exist two distinct numbers m, n ∈ ℕ such that i_m = i_n. Therefore t_{i_m, j} = 0 for every j ∈ (ℕ\{k, k+m}) ∪ (ℕ\{k, k+n}); since m ≠ n, t_{i_m, j} = 0 for every j ≠ k. A similar argument proves (ii).

Let A be an infinite matrix, whose row indices and column indices both run over {1, 2, ...}. Let α and β be nonempty subsets of {1, 2, ...}. The submatrix A[α, β] is the matrix obtained from A by keeping the rows whose indices lie in α and the columns whose indices lie in β.
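To make Definition 3.3, Theorem 3.4 and the submatrix notation concrete, here is a small sketch (entirely ours; the matrix is a hypothetical finite pattern, not one computed from an actual preserver of ≺_b). Every column k of the toy block contains a row equal to e_k^t and a row equal to b·e_k^t, which is exactly the row structure Theorem 3.4 guarantees; selecting those rows produces the submatrices P and bQ that the main theorem below extracts from the matrix of T.

```python
import numpy as np

a, b = 1.0, -0.5   # normalized as in this section: 0 <= -b <= a = 1

# Hypothetical finite block of the matrix of T: for every column k there is a
# row a*e_k^t and a row b*e_k^t, the pattern guaranteed by Theorem 3.4.
T = np.array([
    [a, 0, 0],
    [b, 0, 0],
    [0, a, 0],
    [0, b, 0],
    [0, 0, a],
    [0, 0, b],
])

def I_k(T, k):        # Definition 3.3: I_k = {i : t_{ik} = 1}
    return [i for i in range(T.shape[0]) if T[i, k] == 1.0]

def J_k(T, k, b):     # Definition 3.3: J_k = {j : t_{jk} = b}
    return [i for i in range(T.shape[0]) if T[i, k] == b]

alpha = [I_k(T, k)[0] for k in range(T.shape[1])]     # rows i_k from Theorem 3.4(i)
beta = [J_k(T, k, b)[0] for k in range(T.shape[1])]   # rows j_k from Theorem 3.4(ii)

P = T[alpha, :]    # submatrix T[alpha, N]: a permutation matrix
bQ = T[beta, :]    # submatrix T[beta, N]: b times a permutation matrix

assert np.array_equal(P, np.eye(3))
assert np.allclose(bQ / b, np.eye(3))
print(alpha, beta)   # [0, 2, 4] [1, 3, 5]
```

When b = 0 only the rows carrying a = 1 are available, which corresponds to case (ii) of the main theorem below.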

Now we can prove the main theorem of this paper. For a linear operator T : c_0 → c_0 we use the symbol [T] for the infinite matrix whose j-th column is Te_j, i.e. [T] = [Te_1 Te_2 ... Te_j ...].

Theorem 3.5. Let T : c_0 → c_0 be a linear preserver of ≺_b, and let 0 ≤ −b ≤ a = 1 be as in Proposition 3.1. Then one of the following holds:
(i) there exist infinite permutation matrices P and Q such that P and bQ are submatrices of [T];
(ii) [T] is a row substochastic matrix and there exists an infinite permutation matrix P such that P is a submatrix of [T].

Proof. We consider two cases.
Case 1: b ≠ 0. By Theorem 3.4, for every k ∈ ℕ there exist i_k, j_k ∈ ℕ such that
(i) t_{i_k k} = 1 and t_{i_k j} = 0 for every j ∈ ℕ\{k};
(ii) t_{j_k k} = b and t_{j_k j} = 0 for every j ∈ ℕ\{k}.
Put α = {i_1, i_2, ...}, β = {j_1, j_2, ...}, P = [T][α, ℕ] and Q = (1/b)[T][β, ℕ]. It is clear that P and Q are infinite permutation matrices and that P and bQ are submatrices of [T]; therefore (i) holds.
Case 2: b = 0. Then t_{ij} ≥ 0 for all i, j ∈ ℕ. For every m ∈ ℕ put X_m = e_1 + ... + e_m; it is clear that X_m ≺_b e_1 and hence TX_m ≺_b Te_1. Using Proposition 2.1, it follows that
0 ≤ Σ_{j=1}^m t_{ij} = (TX_m)_i ≤ max Te_1 = 1 for every i ∈ ℕ,
where (TX_m)_i is the i-th component of TX_m. Therefore the nonnegative series Σ_{j=1}^∞ t_{ij} converges and Σ_{j=1}^∞ t_{ij} ≤ 1 for every i ∈ ℕ; consequently [T] is a row substochastic matrix. Since a = 1, an argument like that of Case 1 shows that [T] has a submatrix which is an infinite permutation matrix; therefore (ii) holds.

Acknowledgements. The authors thank the anonymous referee for the suggestions for improving this paper.

References

1. A. Armandnejad and H. R. Afshin, Linear functions preserving multivariate and directional majorization, Iranian Journal of Mathematical Sciences and Informatics, 5(1), (2010), 1-5.
2. A. Armandnejad and H. Heydari, Linear functions preserving gd-majorization from M_{n,m} to M_{n,k}, Bull. Iranian Math. Soc., 37(1), (2011), 215-224.
3. A. Armandnejad and A. Salemi, On linear preservers of lgw-majorization on M_{n,m}, Bull. Malays. Math. Sci. Soc., 35(3), (2012), 755-764.
4. R. Bhatia, Matrix Analysis, Springer-Verlag, New York, 1997.
5. F. Khalooei and A. Salemi, Linear preservers of majorization, Iranian Journal of Mathematical Sciences and Informatics, 6(2), (2011), 43-50.

6. F. Khalooei and A. Salemi, The structure of linear preservers of left matrix majorizations on R^p, Electronic Journal of Linear Algebra, 18, (2009), 88-97.
7. A. W. Marshall and I. Olkin, Inequalities: Theory of Majorization and its Applications, Academic Press, New York, 1972.
8. S. Pierce, A survey of linear preserver problems, Linear and Multilinear Algebra, 33, (1992), 1-2.