Characterization of half-radial matrices

Iveta Hnětynková, Petr Tichý

Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic

Abstract

The numerical radius r(A) is the radius of the smallest ball with the center at zero containing the field of values of a given square matrix A. It is well known that r(A) ≤ ‖A‖ ≤ 2 r(A), where ‖·‖ is the matrix 2-norm. Matrices attaining the lower bound are called radial, and have been analyzed thoroughly. This is not the case for matrices attaining the upper bound, where only partial results are available. In this paper, we characterize matrices satisfying r(A) = ‖A‖/2, and call them half-radial. We investigate their singular value decomposition, field of values and algebraic structure. We give several necessary and sufficient conditions for a matrix to be half-radial. Our results suggest that half-radial matrices correspond to the extreme case of the attainable constant in Crouzeix's conjecture.

Keywords: field of values, numerical radius, singular subspaces, Crouzeix's conjecture
2000 MSC: 15A60, 47A12, 65F35

1. Introduction

Consider the space C^n endowed with the Euclidean vector norm ‖z‖ = (z*z)^{1/2} and the Euclidean inner product ⟨z, w⟩ = w*z, where * denotes the Hermitian transpose. Let A ∈ C^{n×n}. The field of values (or numerical range) of A is the set in the complex plane defined as

    W(A) ≡ {⟨Az, z⟩ : z ∈ C^n, ‖z‖ = 1}.    (1)

Email addresses: iveta.hnetynkova@mff.cuni.cz (Iveta Hnětynková), petr.tichy@mff.cuni.cz (Petr Tichý)

Preprint submitted to Linear Algebra and its Applications, April 7, 2018

Recall that W(A) is a compact convex subset of C which contains the spectrum σ(A) of A; see, e.g., [1, p. 8]. The radius of the smallest ball with the center at zero which contains the field of values, i.e.,

    r(A) ≡ max_{‖z‖=1} |⟨Az, z⟩| = max_{ζ∈W(A)} |ζ|,

is called the numerical radius. It is well known that

    r(A) ≤ ‖A‖ ≤ 2 r(A),    (2)

where ‖·‖ denotes the matrix 2-norm; see, e.g., [2, p. 331, problem 1]. The lower as well as the upper bound in (2) are attainable. If ρ(A) denotes the spectral radius of A, ρ(A) = max{|λ| : λ ∈ σ(A)}, then it holds that r(A) = ‖A‖ if and only if ρ(A) = ‖A‖; see [1, p. 44, problem 4] or [3]. Such matrices are called radial, and there exist several equivalent conditions which characterize them; see, e.g., [1, p. 45, problem 7] or [4]. Concerning the right inequality in (2), a sufficient condition was formulated in [5, p. 11, Theorem 1.3-4]: if the ranges of A and A* are orthogonal, then the upper bound is attained. It is also known that if ‖A‖ = 2 r(A), then A has a two-dimensional reducing subspace on which it is the shift [5, p. 11, Theorem 1.3-5]. However, to our knowledge, a deeper analysis of matrices satisfying r(A) = (1/2)‖A‖, which we call half-radial, has not been given yet. In this paper we fill this gap. We derive several equivalent (necessary and sufficient) conditions characterizing half-radial matrices. We study in more detail their specific algebraic structure, their left and right singular subspaces corresponding to the maximum singular value, and their field of values. Our results give a strong indication that half-radial matrices correspond to the extreme case of the attainable constant in Crouzeix's conjecture [6].

The paper is organized as follows. Section 2 gives some basic notation and summarizes well-known results on the field of values, Section 3 is the core part giving characterizations of half-radial matrices, Section 4 discusses Crouzeix's conjecture, and Section 5 concludes the paper.
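Both bounds in the inequality r(A) ≤ ‖A‖ ≤ 2 r(A) are easy to explore numerically. The following sketch (ours, not from the paper; it assumes NumPy) approximates r(A) via the standard characterization r(A) = max_θ λ_max((e^{iθ}A + e^{−iθ}A*)/2), and evaluates both sides for the 2×2 shift, which attains the upper bound.

```python
import numpy as np

def numerical_radius(A, n_theta=720):
    """Approximate r(A) = max_theta lambda_max((e^{i theta} A + e^{-i theta} A*)/2).

    This rotation-of-the-Hermitian-part characterization of the numerical
    radius is standard; the function itself is our illustration only.
    """
    r = 0.0
    for theta in np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False):
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2.0
        r = max(r, np.linalg.eigvalsh(H)[-1])  # eigvalsh sorts ascending
    return r

# The 2x2 shift attains the upper bound: ||J|| = 2 r(J).
J = np.array([[0.0, 1.0],
              [0.0, 0.0]])
r_J = numerical_radius(J)
norm_J = np.linalg.norm(J, 2)
print(r_J, norm_J)  # r(J) = 0.5, ||J|| = 1
```

For the shift, the rotated Hermitian part has eigenvalues ±1/2 for every θ, so the maximization over θ is exact here.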
2. Preliminaries

In this section we introduce some basic notation and prove for completeness the inequality (2). First, recall that any two vectors z and w satisfy the

polarization identity

    4⟨Az, w⟩ = ⟨A(z + w), (z + w)⟩ − ⟨A(z − w), (z − w)⟩
             + i⟨A(z + iw), (z + iw)⟩ − i⟨A(z − iw), (z − iw)⟩,

where i is the imaginary unit, and the parallelogram law

    ‖z + w‖² + ‖z − w‖² = 2(‖z‖² + ‖w‖²).

The definition of the numerical radius implies the inequality

    |⟨Az, z⟩| ≤ r(A)‖z‖².    (3)

Using these tools we can now easily prove the following well-known result; see, e.g., [2, p. 331, problem 1] or [5, p. 9, Theorem 1.3-1].

Theorem 1. For A ∈ C^{n×n} it holds that r(A) ≤ ‖A‖ ≤ 2 r(A).

Proof. The left inequality is trivial. Now, using the polarization identity, the inequality (3), and the parallelogram law, we find out that for any z and w it holds that

    4|⟨Az, w⟩| ≤ 4 r(A)(‖z‖² + ‖w‖²).

Consider a unit norm vector z such that ‖A‖ = ‖Az‖, and define w = Az/‖Az‖. Then, using the previous inequality, we obtain 4‖A‖ ≤ 8 r(A).

Further, consider the singular value decomposition (SVD) of A, A = UΣV*, where U, V ∈ C^{n×n} are unitary and Σ is diagonal with the singular values of A on the diagonal. Denote by σ_max the largest singular value of A, satisfying σ_max = ‖A‖. The unit norm left and right singular vectors u, v such that

    Av = σ_max u    (4)

are called a pair of maximum singular vectors of A. Denote by

    V_max(A) ≡ {x ∈ C^n : ‖Ax‖ = ‖A‖ ‖x‖}

the maximum right singular subspace, i.e., the span of the right singular vectors of A corresponding to σ_max. Similarly, denote by U_max(A) the maximum left singular subspace of A. Recall that any vector z ∈ C^n can be uniquely decomposed into two orthogonal components,

    z = x + y,  x ∈ R(A*),  y ∈ N(A),    (5)

where R(·) denotes the range and N(·) the null space of a given matrix. Note that here we generally consider the field of complex numbers.

We introduce a definition of the matrices we are interested in.

Definition 1. A nonzero matrix A is called half-radial if ‖A‖ = 2 r(A).

In the following, we implicitly assume that A is nonzero and n ≥ 2. If A is the zero matrix or n = 1, all the statements are trivial.

3. Necessary and sufficient conditions for half-radial matrices

Now we derive necessary and sufficient conditions for a nonzero matrix A to satisfy ‖A‖ = 2 r(A). Using the decomposition (5) we define the set

    Θ_A ≡ {z ∈ C^n : ‖z‖ = 1, r(A) = |⟨Az, z⟩|, ⟨Ax, x⟩ = 0},    (6)

i.e., the set of all unit norm maximizers of |⟨Az, z⟩| such that ⟨Ax, x⟩ = 0. In the following we prove that a matrix is half-radial if and only if Θ_A ≠ ∅, and provide a complete characterization of the non-empty set Θ_A based on the maximum singular subspaces of A. Then, the algebraic structure of A is studied.

3.1. Basic conditions

A sufficient condition for A to be half-radial is formulated in the following lemma. Parts of the proofs of Lemma 2 and Lemma 3 are inspired by [5, p. 11, Theorems 1.3-4 and 1.3-5].

Lemma 2. If Θ_A is non-empty, then ‖A‖ = 2 r(A). Moreover, for each z ∈ Θ_A of the form (5) it holds that ‖x‖ = ‖y‖ = 1/√2, ‖Ax‖ = ‖A‖ ‖x‖ and |⟨Ax, y⟩| = ‖Ax‖ ‖y‖.

Proof. Let z be a unit norm vector. Considering its decomposition (5), it holds that

    ⟨Az, z⟩ = ⟨Ax, y⟩ + ⟨Ax, x⟩.    (7)

Since 1 = ‖z‖² = ‖x‖² + ‖y‖² and 0 ≤ (‖x‖ − ‖y‖)², we find out that ‖x‖ ‖y‖ ≤ 1/2, with equality if and only if ‖x‖ = ‖y‖. If Θ_A ≠ ∅, then for any unit norm vector z ∈ Θ_A we get

    ‖A‖ ≤ 2 r(A) = 2|⟨Az, z⟩| = 2|⟨Ax, y⟩| ≤ 2‖Ax‖ ‖y‖ ≤ 2‖A‖ ‖x‖ ‖y‖ ≤ ‖A‖.

Hence all terms are equal, and therefore 2 r(A) = ‖A‖ and ‖x‖ = ‖y‖ = 1/√2.

In the following lemma we prove the opposite implication.

Lemma 3. If ‖A‖ = 2 r(A), then Θ_A is non-empty. Moreover, for any pair u, v of maximum singular vectors of A it holds that v ∈ N(A*), u ∈ N(A), u ⊥ v, and the vector z = (1/√2)(u + v) satisfies z ∈ Θ_A.

Proof. Assume without loss of generality that ‖A‖ = σ_max = 1. Consider now any unit norm maximum right singular vector v ∈ V_max(A) and denote by u = Av the corresponding maximum left singular vector. Then

    ‖u‖ = ‖Av‖ = ‖A‖ ‖v‖ = ‖v‖ = 1.

Moreover, for any θ ∈ R we obtain

    r(A) = r(e^{iθ}A) = 1/2.

Then, since e^{iθ}A + e^{−iθ}A* is Hermitian, it holds that

    ‖e^{iθ}A + e^{−iθ}A*‖ = r(e^{iθ}A + e^{−iθ}A*) ≤ r(e^{iθ}A) + r(e^{−iθ}A*) = 1,

and

    ‖e^{iθ}Au + e^{−iθ}A*u‖ ≤ 1,  ‖e^{iθ}Av + e^{−iθ}A*v‖ ≤ 1.

For the norm on the left we get

    ‖e^{iθ}Au + e^{−iθ}A*u‖² = e^{2iθ}⟨Au, A*u⟩ + e^{−2iθ}⟨A*u, Au⟩ + ⟨Au, Au⟩ + ⟨A*u, A*u⟩
                             = 2 Re(e^{2iθ}⟨A²u, u⟩) + ‖Au‖² + ‖A*u‖².

Recalling that ‖v‖ = ‖A*u‖ = 1, we obtain

    ‖Au‖² ≤ −2 Re(e^{2iθ}⟨A²u, u⟩).

Similarly, from ‖u‖ = ‖Av‖ = 1 it can be proved that

    ‖A*v‖² ≤ −2 Re(e^{2iθ}⟨A²v, v⟩).

Since the inequalities hold for any θ ∈ R, we get A*v = Au = 0, i.e., v ∈ N(A*) and u ∈ N(A). Hence, v ⊥ R(A), u ⊥ R(A*), and the maximum left and right singular vectors u, v are orthogonal. Furthermore,

    ⟨Av, v⟩ = ⟨Au, v⟩ = ⟨Au, u⟩ = 0.

Clearly, the unit norm vector

    z = (1/√2)(u + v)

is of the form (5) with x = v/√2, y = u/√2. Moreover, z is a maximizer, since

    ⟨Az, z⟩ = (1/2)⟨A(u + v), (u + v)⟩ = (1/2)(⟨Av, v⟩ + ⟨Av, u⟩ + ⟨Au, v⟩ + ⟨Au, u⟩) = (1/2)⟨Av, u⟩,

yielding |⟨Az, z⟩| = 1/2 = r(A). Since also ⟨Ax, x⟩ = (1/2)⟨Av, v⟩ = 0, it holds that z ∈ Θ_A.

Recall that for any pair u, v of singular vectors of A corresponding to a nonzero singular value it holds that v ∈ R(A*), u ∈ R(A). Consequently, the previous lemmas imply the following result characterizing half-radial matrices by the properties of their maximum right singular subspace.

Lemma 4. ‖A‖ = 2 r(A) if and only if for any unit norm vector v ∈ V_max(A) it holds that

    v ∈ R(A*) ∩ N(A*),  Av/‖A‖ ∈ R(A) ∩ N(A),  and  z = (1/√2)(v + Av/‖A‖) maximizes |⟨Az, z⟩|.    (8)

Equivalently, ‖A‖ = 2 r(A) if and only if there exists a unit norm v ∈ V_max(A) such that the condition (8) is satisfied.

Proof. Let ‖A‖ = 2 r(A). For any v ∈ V_max(A), ‖v‖ = 1, the vectors u = Av/‖A‖ and v form a pair of maximum singular vectors of A. Thus, from Lemma 3 it follows that z defined in (8) is a unit norm maximizer of |⟨Az, z⟩|. Furthermore, by the same lemma, v ∈ N(A*) and Av/‖A‖ = u ∈ N(A). Moreover, since v and u are right and left singular vectors, we also have v ∈ R(A*), Av ∈ R(A), yielding (8).

On the other hand, if there is a unit norm right singular vector v ∈ V_max(A) such that the condition (8) is satisfied, then the vector z = (1/√2)(v + u), where u = Av/‖A‖, is a unit norm maximizer of |⟨Az, z⟩|. Moreover, (1/√2)v is the component of z in R(A*) and ⟨Av, v⟩ = 0. From (6), z is an element of Θ_A. Hence, Θ_A is non-empty and by Lemma 2 we conclude that ‖A‖ = 2 r(A).

The previous lemma implies several SVD properties of half-radial matrices, which we summarize below.

Lemma 5. Let A ∈ C^{n×n} be half-radial, i.e., ‖A‖ = 2 r(A). Then:

1. V_max(A) ⊆ R(A*) ∩ N(A*), U_max(A) ⊆ R(A) ∩ N(A),
2. V_max(A) ⊥ U_max(A),
3. the multiplicity m of σ_max is not greater than n/2,
4. the multiplicity of the zero singular value is at least m.

Proof. The first property follows from the condition (8) in Lemma 4 and the fact that V_max(A) ⊆ R(A*) and U_max(A) ⊆ R(A). Combining V_max(A) ⊆ R(A*) with U_max(A) ⊆ N(A) yields the orthogonality of the maximum singular subspaces. Furthermore, since V_max(A), U_max(A) ⊆ C^n and V_max(A) ⊥ U_max(A), we directly get the restriction on the multiplicity of σ_max. Finally, the relation V_max(A) ⊆ N(A*) implies that A is singular, with the multiplicity of the zero singular value equal to or greater than the multiplicity of σ_max.
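The constructions in Lemmas 3 and 4 can be checked directly on a small example. The sketch below (our illustration, assuming NumPy) takes the half-radial matrix A = 2(J ⊕ 0), extracts a pair of maximum singular vectors from the SVD, and verifies that u ⊥ v, u ∈ N(A), v ∈ N(A*), and that z = (1/√2)(u + v) attains |⟨Az, z⟩| = (1/2)‖A‖.

```python
import numpy as np

# A = 2 (J ⊕ 0) is half-radial with sigma_max = ||A|| = 2 (our toy example).
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
U, s, Vh = np.linalg.svd(A)
sigma = s[0]                      # sigma_max = ||A||
v = Vh[0].conj()                  # maximum right singular vector
u = A @ v / sigma                 # matching left singular vector: Av = sigma u

# Lemma 3: u and v are orthogonal, u lies in N(A), v lies in N(A*).
orth = abs(np.vdot(u, v))
Au = np.linalg.norm(A @ u)
Asv = np.linalg.norm(A.conj().T @ v)

# z = (u + v)/sqrt(2) is a maximizer: |<Az, z>| = ||A||/2 = r(A).
z = (u + v) / np.sqrt(2.0)
val = abs(np.vdot(z, A @ z))      # <Az, z> = z* (A z) in the paper's convention
print(orth, Au, Asv, val)         # -> 0.0 0.0 0.0 1.0 (up to rounding)
```

Here `np.vdot(z, A @ z)` computes z*(Az), which matches the inner product ⟨Az, z⟩ = z*(Az) used throughout the paper.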

It is worth noting that neither condition 2 nor the stronger condition 1 in Lemma 5 is sufficient to ensure that a given matrix A is half-radial. This can be illustrated by the following examples.

Example 1. Consider a half-radial matrix A such that the multiplicity of σ_max (and thus also the dimension of the maximum singular subspaces) is the maximum possible, i.e., n/2. Then V_max(A) ⊕ U_max(A) = C^n, and

    U_max(A) = N(A),  V_max(A) = N(A*).

Thus A has only two singular values, σ_max and 0, both with multiplicity n/2. Consequently, an SVD of a half-radial A has in this case the form

    A = U [ σ_max I   0 ] V*,   I ∈ R^{n/2×n/2}.    (9)
          [ 0         0 ]

The unitary matrices above can be chosen as U = [U_max, V_max], V = [V_max, U_max], where the columns of U_max and V_max form orthonormal bases of U_max(A) and V_max(A), respectively, and Ue_i, Ve_i is for i = 1, ..., n/2 a pair of maximum singular vectors of A. However, assuming only that A has orthogonal maximum right and left singular subspaces, both of dimension n/2, we can just conclude that an SVD of A has in general the form

    A = U [ σ_max I   0 ] V*,   I ∈ R^{n/2×n/2},    (10)
          [ 0         Σ ]

with Σ ∈ R^{n/2×n/2} having singular values smaller than σ_max on the diagonal.

The previous example also shows that if the multiplicity of σ_max is n/2, then assuming only that condition 1 from Lemma 5 holds, the matrix A is half-radial. However, this is not true in general, as we illustrate by the next example.

Example 2. Consider the matrix given by its SVD

    A = [ 0   0   0 ]   [ 0  0  1 ] [ 1   0   0 ] [ 1  0  0 ]*
        [ 0  0.9  0 ] = [ 0  1  0 ] [ 0  0.9  0 ] [ 0  1  0 ]  .
        [ 1   0   0 ]   [ 1  0  0 ] [ 0   0   0 ] [ 0  0  1 ]

Obviously, U_max(A) = span{e_3}, V_max(A) = span{e_1}. Since Ae_3 = 0 and A*e_1 = 0, we obtain U_max(A) ⊆ N(A) and V_max(A) ⊆ N(A*). The vector e_2 is a (unit norm) eigenvector of A. Consequently,

    r(A) ≥ |⟨Ae_2, e_2⟩| = 0.9 |⟨e_2, e_2⟩| = 0.9.

Since ‖A‖ = 1, we see that A is not half-radial, even though condition 1 from Lemma 5 holds.

3.2. The set of maximizers Θ_A

Now we study the set of maximizers Θ_A. Lemma 2 implies that for any z ∈ Θ_A its component x in R(A*) satisfies x ∈ V_max(A). On the other hand, Lemma 3 says that for half-radial matrices a maximizer from Θ_A can be constructed as a linear combination of the vectors u, v representing a (necessarily orthogonal) pair of maximum singular vectors of A. We are ready to prove that the whole set Θ_A is formed by linear combinations of such vectors. In particular, we define the set

    Ω_A ≡ { (1/√2)(e^{iα}v + e^{iβ}u) : v ∈ V_max(A), ‖v‖ = 1, u = Av/‖A‖, α, β ∈ R }

and get the following lemma.

Lemma 6. Θ_A is either empty or Θ_A = Ω_A.

Proof. Let Θ_A be non-empty. Consider for simplicity a unit norm vector z ∈ Θ_A in the scaled form (5), z = (1/√2)(x + y), i.e., x ⊥ y, ‖x‖ = ‖y‖ = 1, ⟨Ax, x⟩ = 0. We first show that z ∈ Ω_A. From Lemma 2 we know that x satisfies ‖Ax‖ = ‖A‖, and that Ax and y are linearly dependent. Hence,

    Ax = γ y,  γ = ⟨Ax, y⟩,

with

    |γ| = |⟨Ax, y⟩| = ‖Ax‖ ‖y‖ = ‖A‖ ‖x‖ ‖y‖ = ‖A‖ = σ_max.

Now we can rotate the vector y in order to get a pair of maximum singular vectors. Define the unit norm vector ỹ by

    ỹ ≡ e^{iβ} y,  e^{iβ} ≡ ⟨Ax, y⟩ / ‖A‖;

then

    Ax = γ y = (‖A‖ e^{iβ})(e^{−iβ} ỹ) = ‖A‖ ỹ.

Thus x, ỹ is a pair of maximum singular vectors of A. Hence, it holds that

    z = (1/√2)(x + y) = (1/√2)(x + e^{−iβ} ỹ),  x ∈ V_max(A),  ỹ = Ax/‖A‖,

giving z ∈ Ω_A. Therefore, Θ_A ⊆ Ω_A.

Now we prove the opposite inclusion Ω_A ⊆ Θ_A. Since Θ_A ≠ ∅, it holds that ‖A‖ = 2 r(A), and any vector z of the form

    z = (1/√2)(v + u),  v ∈ V_max(A), ‖v‖ = 1,  u = Av/‖A‖,

is an element of Θ_A; see Lemma 3. Consider now a unit norm vector ẑ ∈ Ω_A of the form

    ẑ = (1/√2)(e^{iα}v + e^{iβ}u).

We know that ‖u‖ = ‖v‖ = 1, u ⊥ v, and e^{iα}v ∈ R(A*), e^{iβ}u ∈ N(A). Moreover,

    |⟨Aẑ, ẑ⟩| = (1/2)|⟨A(e^{iα}v), (e^{iβ}u)⟩| = (1/2)|⟨Av, u⟩| = (1/2)‖A‖ = r(A),

so that ẑ is a maximizer. Since also ⟨A(e^{iα}v), (e^{iα}v)⟩ = ⟨Av, v⟩ = 0, we get ẑ ∈ Θ_A.

Note that in general, the set of maximizers Θ_A need not represent the set of all maximizers of |⟨Az, z⟩|. We illustrate that by an example.

Example 3. Consider the matrix given by its SVD

    A = [ 0  1   0  ]   [ 1  0  0 ] [ 1   0   0 ] [ 0  0  1 ]*
        [ 0  0   0  ] = [ 0  0  1 ] [ 0  1/2  0 ] [ 1  0  0 ]  .
        [ 0  0  1/2 ]   [ 0  1  0 ] [ 0   0   0 ] [ 0  1  0 ]

It holds that ‖A‖ = 1 and r(A) = 1/2. From Lemma 6, we see that Θ_A is generated by the maximum right singular vector e_2 and the corresponding maximum left singular vector e_1. But obviously, the right singular vector e_3, corresponding to the singular value 1/2, is not in Θ_A and also represents a maximizer, since

    e_3^T A e_3 = 1/2 = (1/2)‖A‖.

We have seen that for half-radial matrices the vectors of the form

    (e^{iα}v + e^{iβ}u) / ‖e^{iα}v + e^{iβ}u‖,  with v ∈ V_max(A), ‖v‖ = 1, u = Av/‖A‖, α, β ∈ R,

are maximizers of |⟨Az, z⟩|. This is also true for Hermitian and, more generally, for normal and radial matrices. It remains an open question whether it is possible to find larger classes of matrices such that the maximizers are of the form above.

3.3. Algebraic structure of half-radial matrices

Denote first by

    J = [ 0  1 ] ∈ R^{2×2}    (11)
        [ 0  0 ]

the two-dimensional Jordan block corresponding to the zero eigenvalue (the shift). It is well known that W(J) is the disk with the center at zero and the radius 1/2. Since ‖J‖ = 1, the matrix J is half-radial. Inspired partly by [5, p. 11, Theorem 1.3-5], showing that a matrix satisfying ‖A‖ = 2 r(A) has a two-dimensional reducing subspace on which it is the shift J, we now give a full characterization of half-radial matrices from the point of view of their algebraic structure.

Lemma 7. Let A ∈ C^{n×n} be a nonzero matrix such that dim V_max(A) = m. It holds that ‖A‖ = 2 r(A) if and only if A is unitarily similar to the matrix (‖A‖ I_m ⊗ J) ⊕ B, where B is a matrix satisfying ‖B‖ < ‖A‖ and r(B) ≤ (1/2)‖A‖.

Proof. Assume that ‖A‖ = 2 r(A) and let u_i, v_i, i = 1, ..., m, be pairs of maximum singular vectors of A such that {v_1, ..., v_m} is an orthonormal basis of V_max(A) and {u_1, ..., u_m} is an orthonormal basis of U_max(A). From

Lemma 5 it follows that all the vectors v_1, ..., v_m, u_1, ..., u_m are mutually orthogonal. Let the couple u, v be any of the couples u_i, v_i defined above. Denote Q ≡ [u, v, P], where P is any matrix such that Q is unitary, and consider the unitary similarity transformation Q*AQ. Since also v ∈ N(A*), u ∈ N(A), see Lemma 4, it holds that v ⊥ R(A), u ⊥ R(A*), and therefore

    v*AQ = 0,  Q*Au = 0,

i.e., the second row and the first column of Q*AQ are the zero vectors. Next, using Av = ‖A‖u, A*u = ‖A‖v, v*Av = 0, u*Au = 0, we obtain

    u*AQ = [0, ‖A‖, u*AP],  Q*Av = [‖A‖, 0, P*Av]^T,

where

    u*AP = ‖A‖ v*P = 0,  P*Av = ‖A‖ P*u = 0.

Therefore,

    Q*AQ = [ ‖A‖J  0 ] = (‖A‖ J) ⊕ B,  B = P*AP.
           [ 0     B ]

Let us apply the same approach using all the vectors u_i and v_i, i.e., define Q ≡ [u_1, v_1, u_2, v_2, ..., u_m, v_m, P], where P is any matrix such that Q is unitary. Then

    Q*AQ = (‖A‖ I_m ⊗ J) ⊕ B,  B = P*AP.

Trivially, it holds that r(B) ≤ r(A) = (1/2)‖A‖, and ‖B‖ < ‖A‖.

On the other hand, let A be unitarily similar to the matrix (‖A‖ I_m ⊗ J) ⊕ B with ‖B‖ < ‖A‖ and r(B) ≤ (1/2)‖A‖. Recall that for any square matrices C and D it holds that

    W(C ⊕ D) = cvx(W(C) ∪ W(D));

see, e.g., [1, p. 1]. Since W(‖A‖J) is the disk with the center at zero and the radius (1/2)‖A‖, and r(B) ≤ (1/2)‖A‖, it holds that

    r(A) = max{r(‖A‖J), r(B)} = (1/2)‖A‖.

Hence, A is half-radial.
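Lemma 7 also gives a recipe for generating half-radial matrices: take any B with ‖B‖ < σ and r(B) ≤ σ/2, and conjugate (σ I_m ⊗ J) ⊕ B by a unitary matrix. A numerical sketch of this construction (ours, assuming NumPy; r(·) is approximated by rotating the Hermitian part):

```python
import numpy as np

def numerical_radius(A, n_theta=720):
    # r(A) = max over theta of lambda_max((e^{i theta} A + e^{-i theta} A*) / 2)
    return max(
        np.linalg.eigvalsh(
            (np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2.0)[-1]
        for t in np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False))

# Block diagonal model from Lemma 7 with sigma = 1, m = 1, B = [0.3]:
# ||B|| = 0.3 < 1 and r(B) = 0.3 <= 1/2, so the result must be half-radial.
D = np.zeros((3, 3), dtype=complex)
D[0, 1] = 1.0                     # the block sigma * J
D[2, 2] = 0.3                     # the block B
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
A = Q @ D @ Q.conj().T            # unitary similarity preserves W(A)
r_A = numerical_radius(A)
norm_A = np.linalg.norm(A, 2)
print(norm_A, 2 * r_A)            # both equal 1 (up to rounding)
```

For this block structure the rotated Hermitian part has largest eigenvalue 1/2 for every θ, so the sampled maximum is exact up to rounding.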

In words, Lemma 7 shows that a matrix A is half-radial if and only if it is unitarily similar to a block diagonal matrix with one block being a half-radial matrix with the maximum multiplicity of σ_max (see (9)) and the other block having a smaller norm and numerical radius.

We have discussed previously that orthogonality of the maximum singular subspaces does not ensure that a given matrix A is half-radial (see Example 1). Note that assuming only U_max(A) ⊥ V_max(A), it is possible to follow the same construction as in the proof of Lemma 7, yielding the block structure A = (‖A‖ I_m ⊗ J) ⊕ B with ‖B‖ < ‖A‖. However, if W(B) is not contained in W(‖A‖J), then r(A) = r(B) > (1/2)‖A‖, and A is not half-radial. This can be illustrated on Example 2, where clearly B = [0.9], and thus ‖B‖ = r(B) = 0.9 while ‖A‖ = 1.

Based on the previous lemma, half-radial matrices can be characterized using properties of their field of values W(A).

Lemma 8. It holds that ‖A‖ = 2 r(A) if and only if W(A) is the disk with the center at zero and the radius (1/2)‖A‖.

Proof. If ‖A‖ = 2 r(A), then by Lemma 7,

    W(A) = W((‖A‖ I_m ⊗ J) ⊕ B) = cvx(W(‖A‖J) ∪ W(B)).

Since r(B) ≤ (1/2)‖A‖, and W(‖A‖J) is the disk with the center at zero and the radius (1/2)‖A‖, we obtain W(A) = ‖A‖ W(J). Therefore, W(A) is the disk with the radius (1/2)‖A‖. Trivially, if W(A) is the disk of the given properties, then ‖A‖ = 2 r(A).

Note that since W((A + A*)/2) = Re(W(A)), the previous lemma implies that for a half-radial A we always have

    r((A + A*)/2) = r(A),  i.e.,  ‖A + A*‖ = ‖A‖.

To summarize the main results, we conclude this section by the following theorem giving several necessary and sufficient conditions characterizing half-radial matrices.

Theorem 9. Let A ∈ C^{n×n} be a nonzero matrix. Then the following conditions are equivalent:

1. ‖A‖ = 2 r(A),
2. Θ_A is non-empty,
3. Θ_A = Ω_A,
4. for every unit norm v ∈ V_max(A) it holds that

    v ∈ R(A*) ∩ N(A*),  Av/‖A‖ ∈ R(A) ∩ N(A),  and  z = (1/√2)(v + Av/‖A‖) maximizes |⟨Az, z⟩|,    (12)

5. there exists a unit norm v ∈ V_max(A) such that (12) holds,
6. A is unitarily similar to the matrix (‖A‖ I_m ⊗ J) ⊕ B, where m is the dimension of V_max(A), ‖B‖ < ‖A‖ and r(B) ≤ (1/2)‖A‖,
7. W(A) is the disk with the center at zero and the radius (1/2)‖A‖.

4. Consequences for Crouzeix's conjecture

In [6], Crouzeix proved that for any square matrix A and any polynomial p,

    ‖p(A)‖ ≤ c · max_{z∈W(A)} |p(z)|,    (13)

where c = 11.08. Later, in [7], he conjectured that the constant could be replaced by c = 2. Recently, it has been proven in [8] that the inequality (13) holds with the constant c = 1 + √2. Still, it remains an open question whether Crouzeix's inequality

    ‖p(A)‖ ≤ 2 max_{z∈W(A)} |p(z)|    (14)

is satisfied for any square matrix A and any polynomial p. Based on the results in [9], it has been shown in [10] that if the field of values W(A) is a disk, then (14) holds. Thus, for half-radial matrices we conclude the following.

Lemma 10. Half-radial matrices satisfy Crouzeix's inequality (14). Moreover, the bound with the constant 2 is attained for the polynomial p(z) = z.

Proof. Let A be half-radial. By Lemma 8, W(A) is a disk, giving the inequality (14). Furthermore, for p(z) = z we have

    ‖p(A)‖ = ‖A‖ = 2 r(A) = 2 max_{z∈W(A)} |z| = 2 max_{z∈W(A)} |p(z)|.

There are other matrices for which the inequality (14) holds and the bound is attainable for some polynomial p. In particular, for the Choi-Crouzeix matrix of the form

    C = [ 0  √2          ]
        [    0  1        ]
        [       ⋱  ⋱     ] ∈ R^{n×n},  n > 2,    (15)
        [          0  √2 ]
        [             0  ]

the polynomial is p(z) = z^{n−1}; see [11] and [12]. Note that C^{n−1} = 2 e_1 e_n^T, so that C^{n−1} is unitarily similar (via a permutation matrix) to the matrix 2(J ⊕ 0) for J defined in (11). Hence

    ‖C^{n−1}‖ = 2 r(C^{n−1}),

and thus C^{n−1} is half-radial. More generally, we can prove the following result.

Lemma 11. It holds that

    ‖p(A)‖ = 2 max_{z∈W(A)} |p(z)|    (16)

for p(z) = z^k and some k ≥ 1, if and only if A^k is half-radial and r(A^k) = r(A)^k.

Proof. If (16) holds for p(z) = z^k, then using the power inequality r(A^k) ≤ r(A)^k we obtain

    ‖A^k‖ ≤ 2 r(A^k) ≤ 2 r(A)^k = 2 max_{z∈W(A)} |z^k| = ‖A^k‖.

Hence all the terms are equal; in particular, ‖A^k‖ = 2 r(A^k), i.e., A^k is half-radial, and r(A^k) = r(A)^k. On the other hand, if A^k is half-radial and r(A^k) = r(A)^k, then

    ‖A^k‖ = 2 r(A^k) = 2 r(A)^k = 2 max_{z∈W(A)} |z^k|,

i.e., (16) holds for p(z) = z^k.
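The quantities entering (15) and (16) can be checked numerically. The sketch below (ours, assuming NumPy) takes n = 4, for which C has superdiagonal (√2, 1, √2); for such a bidiagonal nilpotent matrix the maximum of λ_max of the rotated Hermitian part is attained already at θ = 0, since a diagonal unitary similarity removes the phases, so r(C) equals the largest eigenvalue of (C + C^T)/2.

```python
import numpy as np

# The matrix (15) for n = 4: superdiagonal (sqrt(2), 1, sqrt(2)).
n = 4
superdiag = np.ones(n - 1)
superdiag[0] = superdiag[-1] = np.sqrt(2.0)
C = np.diag(superdiag, k=1)

# C^{n-1} = 2 e_1 e_n^T, hence ||C^{n-1}|| = 2, while r(C) = 1, so
# ||p(C)|| = 2 max_{z in W(C)} |p(z)| for p(z) = z^{n-1}.
Cn1 = np.linalg.matrix_power(C, n - 1)
norm_p = np.linalg.norm(Cn1, 2)
# For this bidiagonal C, r(C) is the largest eigenvalue of (C + C^T)/2.
r_C = np.linalg.eigvalsh((C + C.T) / 2.0)[-1]
print(norm_p, r_C)  # 2.0 and 1.0 (up to rounding)
```

The test ‖C^{n−1}‖ = 2 r(C)^{n−1} is exactly the attainment condition of Lemma 11 for k = n − 1.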

The previous lemma can be applied to nilpotent matrices having the following structure,

    A = [ 0  ∗  ⋯  ∗ ]
        [    0  ⋱  ⋮ ]
        [       ⋱  ∗ ] .    (17)
        [          0 ]

In particular, if A ∈ C^{n×n} of the form (17) is nilpotent with the index n, i.e., A^n = 0 while A^{n−1} ≠ 0, then A^{n−1} = α e_1 e_n^T for some α ≠ 0. Obviously, A^{n−1} is half-radial, and by Lemma 11 the bound (16) holds for p(z) = z^{n−1} if, moreover, r(A^{n−1}) = r(A)^{n−1}. We can conclude that if Crouzeix's inequality (14) holds for matrices having the structure (17), then the upper bound in (14) can be attained for the polynomial p(z) = z^{n−1}. Note that this result is not in agreement with the conjecture by Greenbaum and Overton [12, p. 239] predicting that if the bound (16) is attained for the polynomial p(z) = z^{n−1}, then αA (for some scalar α) is unitarily similar to the Choi-Crouzeix matrix (15).

5. Conclusions

In this paper, we have investigated half-radial matrices, and provided several equivalent characterizations based on their properties. Our results reveal that half-radial matrices represent a very special class of matrices. This leads us to the question whether the constant 2 in the inequality ‖A‖ ≤ 2 r(A) can be improved for some larger classes of nonnormal matrices. For example, since the constant 2 corresponds to matrices with 0 ∈ W(A), one can think about an improvement of the bound if 0 ∉ W(A). Finally, our results suggest that half-radial matrices are related to the case when the upper bound in Crouzeix's inequality can be attained for some polynomial. This is supported by one of the conjectures of Greenbaum and Overton [12] predicting that the upper bound in Crouzeix's inequality can be attained only if p is a monomial; see also our Lemma 11. A deeper analysis of this phenomenon needs further research which is, however, beyond the scope of this paper.

6. Acknowledgment

This research was supported by the Grant Agency of the Czech Republic under the grant no. 17-04150J. We thank Marie Kubínová for careful reading of an earlier version of this manuscript and for pointing us to Example 2.

References

[1] R. A. Horn, C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1994. Corrected reprint of the 1991 original.

[2] R. A. Horn, C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1990. Corrected reprint of the 1985 original.

[3] R. Winther, Zur Theorie der beschränkten Bilinearformen, 17 (1) (1929) 14–17.

[4] M. Goldberg, G. Zwas, On matrices having equal spectral radius and spectral norm, Linear Algebra and its Applications 8 (5) (1974) 427–434.

[5] K. E. Gustafson, D. K. M. Rao, Numerical Range: The Field of Values of Linear Operators and Matrices, Universitext, Springer-Verlag, New York, 1997. doi:10.1007/978-1-4613-8498-4.

[6] M. Crouzeix, Numerical range and functional calculus in Hilbert space, J. Funct. Anal. 244 (2) (2007) 668–690. doi:10.1016/j.jfa.2006.10.013.

[7] M. Crouzeix, Bounds for analytical functions of matrices, Integral Equations Operator Theory 48 (4) (2004) 461–477. doi:10.1007/s00020-002-1188-6.

[8] M. Crouzeix, C. Palencia, The numerical range is a (1 + √2)-spectral set, SIAM J. Matrix Anal. Appl. 38 (2) (2017) 649–655.

[9] K. Okubo, T. Ando, Constants related to operators of class C_ρ, Manuscripta Math. 16 (4) (1975) 385–394. doi:10.1007/BF0133467.

[10] C. Badea, M. Crouzeix, B. Delyon, Convex domains and K-spectral sets, Math. Z. 252 (2) (2006) 345–365. doi:10.1007/s00209-005-0857-y.

[11] D. Choi, A proof of Crouzeix's conjecture for a class of matrices, Linear Algebra Appl. 438 (8) (2013) 3247–3257. doi:10.1016/j.laa.2012.12.045.

[12] A. Greenbaum, M. L. Overton, Numerical investigation of Crouzeix's conjecture, Linear Algebra and its Applications 542 (2018) 225–245. doi:10.1016/j.laa.2017.04.035.