Electronic Journal of Linear Algebra, Volume 31 (2016), pp. 633-645.

NUMERICAL RANGE FOR THE MATRIX EXPONENTIAL FUNCTION

CHRISTOS CHORIANOPOULOS AND CHUN-HUA GUO

Abstract. For a given square matrix $A$, the numerical range for the exponential function $e^{At}$, $t \in \mathbb{C}$, is considered. Some geometrical and topological properties of the numerical range are presented.

Key words. Matrix exponential function, Numerical range.

AMS subject classifications. 15A16, 15A60.

Received by the editors on September 15; accepted for publication on September 18. Handling Editor: Panayiotis Psarrakos. C. Chorianopoulos: Papadiamanti 26, Athens, Greece (cchorian@gmail.com). C.-H. Guo: Department of Mathematics and Statistics, University of Regina, Regina, SK S4S 0A2, Canada (Chun-Hua.Guo@uregina.ca); this author was supported in part by an NSERC Discovery Grant.

1. Introduction. The numerical range of matrices (and operators in general) has been a topic of extensive research for many decades. The numerical range of a matrix, its related notions and its extensions reveal a great deal of information about the matrix. The numerical range $F(A)$ (also known as the field of values) of a matrix $A \in \mathbb{C}^{n\times n}$ is the compact and convex set
\[
F(A) = \{x^*Ax \in \mathbb{C} : x \in \mathbb{C}^n,\ x^*x = 1\} = \{\mu \in \mathbb{C} : \|A - \lambda I_n\|_2 \ge |\mu - \lambda| \ \text{for all} \ \lambda \in \mathbb{C}\},
\]
where $\|\cdot\|_2$ denotes the spectral matrix norm. The latter definition for $F(A)$ can be found in [9]. Note also that $F(A)$ contains all eigenvalues of $A$.

In the last few decades, the numerical range of matrix polynomials has also been studied extensively; see [8] in particular. If $P(\mu) = A_m\mu^m + A_{m-1}\mu^{m-1} + \cdots + A_0$ is an $n\times n$ matrix polynomial, then the numerical range of $P(\mu)$ is defined as
\[
W(P) = \{\mu \in \mathbb{C} : 0 \in F(P(\mu))\}.
\]
More generally, one can define numerical ranges for analytic functions of square matrices [7].
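
Whether $0$ belongs to a field of values can be tested numerically through a separating-line criterion: $0 \in F(B)$ if and only if $\lambda_{\min}\big(\tfrac12(e^{i\theta}B + e^{-i\theta}B^*)\big) \le 0$ for every $\theta \in [0,2\pi)$. The following Python sketch (ours, not from the paper; it samples $\theta$ on a finite grid, so it is only approximate) implements this test and is reused in the illustrations below.

```python
import numpy as np

def contains_zero(B, n_theta=720, tol=1e-9):
    """Approximate test of 0 in F(B) for a square complex matrix B.

    0 lies in F(B) iff no direction e^{i theta} strictly separates F(B) from 0,
    i.e., iff lambda_min((e^{i theta} B + e^{-i theta} B^*)/2) <= 0 for all theta.
    """
    best = -np.inf
    for theta in np.linspace(0.0, 2*np.pi, n_theta, endpoint=False):
        H = (np.exp(1j*theta)*B + np.exp(-1j*theta)*B.conj().T) / 2
        best = max(best, np.linalg.eigvalsh(H)[0])   # smallest eigenvalue of the Hermitian part
    return best <= tol
```

The plots in Section 5 are produced with an inverse numerical range routine based on [3]; the grid test above is a rougher but simpler alternative, sufficient for spot checks.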

Here, we will focus on the exponential function $e^{At}$, where $A \in \mathbb{C}^{n\times n}$ is fixed and $t \in \mathbb{C}$ is the variable. We will describe the set of all $t$ values for which the numerical range of the matrix $e^{At}$ contains $0$, i.e., for which the Crawford number for the matrix $e^{At}$ is $0$ (see [2] for a discussion of the Crawford number for powers of an operator). Note that our purpose here is not to study the exponential function $e^{A}$ using the numerical range of the matrix $A$ (see [1] for some discussions along that line). Our topic here has some connection to [4], where the author studies classes of operators with $0$ in the closure of their numerical range.

For $A \in \mathbb{C}^{n\times n}$, the exponential of $A$ is defined as
\[
e^{A} = \sum_{k=0}^{\infty} \frac{A^k}{k!},
\]
and the series always converges. We collect below some basic properties of the matrix exponential (see [5, 6]). Let $A, B \in \mathbb{C}^{n\times n}$. If $AB = BA$ then $e^{A}e^{B} = e^{A+B}$. It follows that $e^{A}$ is always invertible and $(e^{A})^{-1} = e^{-A}$. If $B$ is invertible then $e^{BAB^{-1}} = Be^{A}B^{-1}$. It follows that the eigenvalues of $e^{A}$ are $e^{\lambda}$, where $\lambda$ are eigenvalues of $A$. If $A$ is Hermitian ($A^* = A$), then $e^{A}$ is Hermitian positive definite.

The structure of this paper is as follows: In Section 2, we define the numerical range of $e^{At}$ and show that the numerical range is a nonempty set if and only if $A$ is not a scalar matrix. Some basic properties of the set are then presented in Section 3 when $A$ is not a scalar matrix. The special case where the matrix $A$ is Hermitian (or skew-Hermitian) is treated in Section 4. Some illustrative examples are given in Section 5.

2. Numerical range of $e^{At}$. For fixed $A \in \mathbb{C}^{n\times n}$, we let $E_A(t) = e^{At}$, $t \in \mathbb{C}$.

Definition 1. The numerical range of $E_A(t)$ is
\[
W(E_A) = \{t \in \mathbb{C} : 0 \in F(e^{At})\}. \qquad (2.1)
\]
Thus, we also have
\[
W(E_A) = \{t \in \mathbb{C} : x^*e^{At}x = 0 \ \text{for some nonzero} \ x \in \mathbb{C}^n\} = \{t \in \mathbb{C} : \|e^{At} - \lambda I_n\|_2 \ge |\lambda| \ \text{for all} \ \lambda \in \mathbb{C}\}.
\]
It follows immediately that for any square matrix $A$ the origin cannot be in $W(E_A)$, since $e^{0A} = I_n$.
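
Membership in $W(E_A)$ can thus be checked numerically by applying the test from the earlier sketch to $e^{At}$. A minimal illustration (ours; the matrix below is an arbitrary, hypothetical example) also confirms that $t = 0$ is never in $W(E_A)$:

```python
import numpy as np
from scipy.linalg import expm

def in_W(A, t, n_theta=720, tol=1e-9):
    """Approximate test of t in W(E_A), i.e., 0 in F(e^{At}); reuses contains_zero above."""
    return contains_zero(expm(A*t), n_theta, tol)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j*rng.standard_normal((3, 3))   # hypothetical non-scalar matrix
assert not in_W(A, 0.0)   # e^{0A} = I_n and 0 is not in F(I_n) = {1}
```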

We now determine when $W(E_A)$ is nonempty.

Theorem 1. Let $A \in \mathbb{C}^{n\times n}$. Then $W(E_A)$ is nonempty if and only if $A$ is not a scalar matrix.

Proof. If $A$ is a scalar matrix ($A = cI_n$ for some complex $c$), then $W(E_A)$ is empty since $F(e^{At}) = \{e^{ct}\}$ and obviously $e^{ct} \ne 0$.

Suppose that $A$ is not a scalar matrix. We will show $W(E_A)$ is nonempty by induction on the size $n$ of $A$. Let $U$ be unitary such that $A = U^*RU$ and $R$ is upper triangular. We have $e^{At} = U^*e^{Rt}U$, and thus, $F(e^{At}) = F(e^{Rt})$ for each $t \in \mathbb{C}$. It follows that $W(E_A) = W(E_R)$. Therefore, we may assume that $A$ is already in upper triangular form.

Let $A = \begin{bmatrix} \lambda_1 & a \\ 0 & \lambda_2 \end{bmatrix}$ be a $2\times 2$ complex matrix. Then by formula (10.40) in [5],
\[
e^{At} = \begin{bmatrix} e^{\lambda_1 t} & a\,\dfrac{e^{\lambda_2 t} - e^{\lambda_1 t}}{\lambda_2 - \lambda_1} \\ 0 & e^{\lambda_2 t} \end{bmatrix}
\]
when $\lambda_1 \ne \lambda_2$; if $\lambda_1 = \lambda_2 = \lambda$ then
\[
e^{At} = \begin{bmatrix} e^{\lambda t} & a t e^{\lambda t} \\ 0 & e^{\lambda t} \end{bmatrix}.
\]
If $A$ has two distinct eigenvalues, then since $F(e^{At})$ is convex for all $t$, it suffices to show that there is always a complex $t$ for which the line segment connecting $e^{\lambda_1 t}$ and $e^{\lambda_2 t}$ contains the origin. In other words, we want a $t \in \mathbb{C}$ such that for some $\rho \in (0,1)$
\[
\rho e^{\lambda_1 t} + (1-\rho)e^{\lambda_2 t} = 0,
\]
that is, $e^{(\lambda_1-\lambda_2)t} = -\frac{1-\rho}{\rho}$, or $e^{(\lambda_1-\lambda_2)t - (2k+1)\pi i} = \frac{1-\rho}{\rho}$, where $k \in \mathbb{Z}$. Therefore, we can take
\[
t = \frac{\ln\frac{1-\rho}{\rho} + (2k+1)\pi i}{\lambda_1 - \lambda_2}, \quad k \in \mathbb{Z}. \qquad (2.2)
\]
Since $\rho \in (0,1)$ can be arbitrary, we know from (2.2) that $W(E_A)$ contains infinitely many parallel straight lines (these lines are horizontal when $\lambda_1 - \lambda_2$ is real).

If $\lambda_1 = \lambda_2 = \lambda$, then $F(e^{At}) = \{z \in \mathbb{C} : |z - e^{\lambda t}| \le \frac{|at|}{2}|e^{\lambda t}|\}$, and $a \ne 0$ because otherwise $A$ would be a scalar matrix. So, $0 \in F(e^{At})$ if and only if $|t| \ge \frac{2}{|a|}$. Therefore, $W(E_A) = \{t \in \mathbb{C} : |t| \ge \frac{2}{|a|}\}$.
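
As an illustrative aside (not part of the proof), formula (2.2) is easy to verify numerically: for the $t$ it produces, the convex combination $\rho e^{\lambda_1 t} + (1-\rho)e^{\lambda_2 t}$ vanishes. The eigenvalues in the sketch below are hypothetical.

```python
import numpy as np

# Check (2.2): for distinct eigenvalues lam1, lam2, any rho in (0,1) and k in Z,
# t = (ln((1-rho)/rho) + (2k+1)*pi*i) / (lam1 - lam2)
# satisfies rho*exp(lam1*t) + (1-rho)*exp(lam2*t) = 0.
lam1, lam2 = 1.0 + 2.0j, -0.5 + 0.3j          # hypothetical eigenvalues
for rho in (0.1, 0.5, 0.9):
    for k in (-1, 0, 3):
        t = (np.log((1-rho)/rho) + (2*k+1)*np.pi*1j) / (lam1 - lam2)
        val = rho*np.exp(lam1*t) + (1-rho)*np.exp(lam2*t)
        assert abs(val) < 1e-10
```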

Continuing with the induction step, suppose that $W(E_A) \ne \emptyset$ for every $k\times k$ non-scalar upper triangular matrix $A$, where $k \ge 2$; so, for every such $A$ there is a $t_0 \in \mathbb{C}$ such that $x_0^*e^{At_0}x_0 = 0$ for some nonzero $x_0 \in \mathbb{C}^k$. Now for any $(k+1)\times(k+1)$ non-scalar upper triangular matrix $A$, there are three possibilities:
\[
A = \begin{bmatrix} \hat A & y \\ 0 & \xi \end{bmatrix}, \qquad A = \begin{bmatrix} \xi & y^T \\ 0 & \hat A \end{bmatrix}, \qquad \text{or} \qquad
A = \begin{bmatrix} \lambda & & & & a \\ & \lambda & & & \\ & & \ddots & & \\ & & & \lambda & \\ & & & & \lambda \end{bmatrix},
\]
where $y \in \mathbb{C}^k$, $\xi \in \mathbb{C}$, $a \ne 0$, and $\hat A$ is a $k\times k$ non-scalar upper triangular matrix. We will treat the first case. The second case can be treated similarly. The third case can be reduced to the first case by applying a permutation similarity that interchanges row $k+1$ with row $2$ and column $k+1$ with column $2$.

By the induction hypothesis, there is a $t_0 \in \mathbb{C}$ such that $x_0^*e^{\hat A t_0}x_0 = 0$ for some nonzero $x_0 \in \mathbb{C}^k$. Then
\[
e^{At_0} = \sum_{j=0}^{\infty}\frac{(At_0)^j}{j!} = \begin{bmatrix} e^{\hat A t_0} & z_0 \\ 0 & e^{\xi t_0} \end{bmatrix},
\]
where $z_0 \in \mathbb{C}^k$. Now for $w = \begin{bmatrix} x_0^T & 0 \end{bmatrix}^T$, we have
\[
w^*e^{At_0}w = \begin{bmatrix} x_0^* & 0 \end{bmatrix}\begin{bmatrix} e^{\hat A t_0} & z_0 \\ 0 & e^{\xi t_0} \end{bmatrix}\begin{bmatrix} x_0 \\ 0 \end{bmatrix} = x_0^*e^{\hat A t_0}x_0 = 0.
\]
Thus, $t_0 \in W(E_A)$.

By an argument similar to the one at the end of the proof, we can show that $W(E_{\hat A}) \subseteq W(E_A)$ for any upper triangular matrix $A$ and any leading principal submatrix $\hat A$.
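
The matrix fact behind this step is that the exponential of a block upper triangular matrix is block upper triangular, with the exponentials of the diagonal blocks on its diagonal. A small numerical check of this (ours, on a hypothetical random upper triangular matrix):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = np.triu(rng.standard_normal((4, 4)) + 1j*rng.standard_normal((4, 4)))
Ahat = A[:3, :3]                      # leading principal submatrix
t0 = 0.7 - 1.3j
E = expm(A*t0)
assert np.allclose(E[:3, :3], expm(Ahat*t0))
# Hence, if x0* e^{Ahat t0} x0 = 0, then w = [x0; 0] gives w* e^{A t0} w = 0,
# so W(E_Ahat) is contained in W(E_A).
```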

3. Properties of $W(E_A)$. In this section, we provide several basic properties of $W(E_A)$.

Proposition 1. Suppose $A \in \mathbb{C}^{n\times n}$ is not a scalar matrix. Then $W(E_A)$ is closed but it is never bounded. Moreover, $W(E_A)$ always contains infinitely many parallel straight lines.

Proof. To prove that it is closed, suppose that there is a sequence $\{t_n\} \subseteq W(E_A)$ with $t_n \to \hat t$. Then there is a sequence of unit vectors $\{x_n\}$ such that $x_n^*e^{At_n}x_n = 0$ for all $n \in \mathbb{N}$. We need to prove that $\hat t \in W(E_A)$. Since $\{x_n\}$ is bounded, there is a subsequence $\{x_{n_k}\}$ converging to a unit vector $\hat x$. Therefore,
\[
0 = \lim_{k\to\infty} x_{n_k}^*e^{At_{n_k}}x_{n_k} = \hat x^*e^{A\hat t}\hat x,
\]
so $\hat t \in W(E_A)$.

In the proof of Theorem 1, we have shown that $W(E_A)$ contains infinitely many parallel straight lines for all $2\times 2$ non-scalar matrices $A$. The proof by induction there also reveals that, for $n \ge 2$, $W(E_A)$ always contains infinitely many parallel straight lines for all $n\times n$ non-scalar matrices $A$.

So, if $A$ is not a scalar matrix, a $t \in W(E_A)$ can be as large in magnitude as we like. A question that naturally arises is: how small in magnitude can $t$ be?

Proposition 2. Suppose $A \in \mathbb{C}^{n\times n}$ is not a scalar matrix and $t \in W(E_A)$. Then
\[
|t| \ge \frac{\ln 2}{\inf_{k\in\mathbb{C}}\|A - kI_n\|_2}. \qquad (3.1)
\]
Proof. Since $t \in W(E_A)$, $\|e^{At} - \lambda I_n\|_2 \ge |\lambda|$ for all $\lambda \in \mathbb{C}$. For $\lambda = 1$ we have $\|e^{At} - I_n\|_2 \ge 1$. Since $0 \in F(e^{At})$, we have $0 \in e^{-kt}F(e^{At}) = F(e^{At - ktI_n})$ for all $k \in \mathbb{C}$. Therefore,
\[
\|e^{At-ktI_n} - I_n\|_2 \ge 1. \qquad (3.2)
\]
It is known (see [6]) that
\[
\|e^{X+Y} - e^{X}\|_2 \le (e^{\|Y\|_2} - 1)e^{\|X\|_2},
\]
where $X, Y$ are square matrices of the same size. For $Y = At - ktI_n$ and $X = 0$ the inequality becomes
\[
\|e^{At-ktI_n} - I_n\|_2 \le e^{\|At-ktI_n\|_2} - 1. \qquad (3.3)
\]
Combining inequalities (3.2) and (3.3), we have $1 \le e^{\|At-ktI_n\|_2} - 1$, or $\ln 2 \le \|At - ktI_n\|_2 = |t|\,\|A - kI_n\|_2$. Inequality (3.1) follows readily.

The term $\inf_{k\in\mathbb{C}}\|A - kI_n\|_2$ in (3.1) gives the distance from the matrix $A$ to the set of all scalar matrices. The infimum is achieved for a particular scalar matrix since $\inf_{k\in\mathbb{C}}\|A - kI_n\|_2 = \inf_{|k|\le 2\|A\|_2}\|A - kI_n\|_2$. In estimating this distance, we may reduce $A$ to upper triangular form by the Schur triangularization. If $A$ is Hermitian, then we readily find that the distance is $\frac12(\lambda_n - \lambda_1)$, where $\lambda_n$ and $\lambda_1$ are the largest and the smallest eigenvalues of $A$, respectively.

The constant $\ln 2$ in the lower bound in (3.1) is unlikely to be sharp. But it will be seen later that the best constant (valid for all $n\times n$ non-scalar matrices) is at most $\pi/2$.
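
For a Hermitian $A$, the distance in (3.1) is simply $\frac12(\lambda_n-\lambda_1)$, attained at the real shift $k = \frac12(\lambda_1+\lambda_n)$. A small numerical check of this (ours, on a hypothetical random Hermitian matrix), together with the resulting form of the bound:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4)) + 1j*rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2                       # hypothetical Hermitian matrix
lam = np.linalg.eigvalsh(A)                    # lam[0] smallest, lam[-1] largest eigenvalue
dist = (lam[-1] - lam[0]) / 2                  # distance from A to the scalar matrices
k_star = (lam[0] + lam[-1]) / 2                # optimal scalar shift
assert np.isclose(np.linalg.norm(A - k_star*np.eye(4), 2), dist)
for k in (0.3 + 1.0j, -2.0, 1.5 - 0.4j):       # arbitrary other shifts do no better
    assert np.linalg.norm(A - k*np.eye(4), 2) >= dist - 1e-12
print("every t in W(E_A) satisfies |t| >=", np.log(2)/dist)   # the bound (3.1)
```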

The next result is about how $W(E_A)$ will change if a shift or a scalar multiplication is applied to $A$.

Proposition 3. Suppose $A \in \mathbb{C}^{n\times n}$ is not a scalar matrix and let $\alpha \in \mathbb{C}$. Then $W(E_{A+\alpha I}) = W(E_A)$ and, for $\alpha \ne 0$, $W(E_{\alpha A}) = \frac{1}{\alpha}W(E_A)$.

Proof. We have
\[
\begin{aligned}
W(E_{A+\alpha I}) &= \{t \in \mathbb{C} : x^*e^{t(A+\alpha I)}x = 0 \ \text{for some nonzero} \ x \in \mathbb{C}^n\} \\
&= \{t \in \mathbb{C} : x^*e^{tA}e^{t\alpha I}x = 0 \ \text{for some nonzero} \ x \in \mathbb{C}^n\} \\
&= \{t \in \mathbb{C} : x^*e^{tA}x = 0 \ \text{for some nonzero} \ x \in \mathbb{C}^n\} = W(E_A),
\end{aligned}
\]
and
\[
\begin{aligned}
W(E_{\alpha A}) &= \{t \in \mathbb{C} : x^*e^{t\alpha A}x = 0 \ \text{for some nonzero} \ x \in \mathbb{C}^n\} \\
&= \left\{\tfrac{w}{\alpha} \in \mathbb{C} : x^*e^{wA}x = 0 \ \text{for some nonzero} \ x \in \mathbb{C}^n\right\} = \tfrac{1}{\alpha}W(E_A).
\end{aligned}
\]

We now give a condition under which $W(E_A)$ contains some points on the real axis.

Proposition 4. Let $\lambda$ and $\mu$ be any two eigenvalues of $A$ such that $\lambda - \mu = a + bi$ with $a, b \in \mathbb{R}$ and $b \ne 0$. Then $W(E_A) \supseteq \{(2k+1)\pi/b : k \in \mathbb{Z}\}$.

Proof. From the proof by induction for Theorem 1, we only need to prove the result here when $A$ is a $2\times 2$ upper triangular matrix with eigenvalues $\lambda$ and $\mu$. In this case, we know from (2.2) that $W(E_A)$ contains all points of the form
\[
t = \frac{\ln\frac{1-\rho}{\rho} + (2k+1)\pi i}{a + bi}, \quad \rho \in (0,1), \ k \in \mathbb{Z}.
\]
We get $t = (2k+1)\pi/b$ by taking $\rho \in (0,1)$ with $\ln\frac{1-\rho}{\rho} = (2k+1)\pi\frac{a}{b}$.

It is well known that the solution to the differential equation $\frac{dx}{dt} = Ax$ is given by $x(t) = e^{At}x(0)$. The above proposition says that for any matrix $A$ with a nonzero imaginary part for the difference of some two eigenvalues, $W(E_A)$ contains some $t > 0$. It follows that $\hat x^*e^{At}\hat x = 0$ for some nonzero $\hat x \in \mathbb{C}^n$. Therefore, for $x(0) = \hat x$, we have $x(0)^*x(t) = \hat x^*e^{At}\hat x = 0$, i.e., $x(t)$ and $x(0)$ are orthogonal.
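
This orthogonality is easy to see numerically for a diagonal matrix. In the sketch below (ours; the eigenvalues are hypothetical), $\rho$ is chosen exactly as in the proof of Proposition 4, and the resulting solution $x(t)$ of $dx/dt = Ax$ at the real time $t = \pi/b$ is orthogonal to $x(0)$.

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 0.2 + 1.5j, -0.1 + 0.5j                # hypothetical eigenvalues, lam - mu = 0.3 + 1.0i
a, b = (lam - mu).real, (lam - mu).imag
k = 0
t = (2*k+1)*np.pi / b                            # a real point of W(E_A) by Proposition 4
rho = 1.0 / (1.0 + np.exp((2*k+1)*np.pi*a/b))    # solves ln((1-rho)/rho) = (2k+1)*pi*a/b
A = np.diag([lam, mu])
x0 = np.array([np.sqrt(rho), np.sqrt(1-rho)])    # unit witness vector: x0* e^{At} x0 = 0
xt = expm(A*t) @ x0                              # x(t) = e^{At} x(0)
assert abs(np.vdot(x0, xt)) < 1e-12              # x(t) is orthogonal to x(0)
```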

We now show that there are no isolated points in $W(E_A)$.

Proposition 5. Suppose $A \in \mathbb{C}^{n\times n}$ is not a scalar matrix. Then $W(E_A)$ does not have any isolated points.

Proof. Let $t_0$ be a point in $W(E_A)$. We separate two cases: first $\mathrm{Int}(F(e^{t_0A})) = \emptyset$, and then $\mathrm{Int}(F(e^{t_0A})) \ne \emptyset$.

Let $t_0 \in W(E_A)$ and suppose that $\mathrm{Int}(F(e^{t_0A})) = \emptyset$. Then $F(e^{t_0A})$ is a line segment passing through the origin. By Theorem 1.6.3 of [6], the endpoints of $F(e^{t_0A})$ are eigenvalues of $e^{t_0A}$, say $e^{t_0\lambda_1}$ and $e^{t_0\lambda_2}$, where $\lambda_1$ and $\lambda_2$ are eigenvalues of $A$. So the origin cannot be an endpoint of the line segment, and there is an $r \in (0,1)$ such that $0 = re^{t_0\lambda_1} + (1-r)e^{t_0\lambda_2}$, that is,
\[
e^{t_0(\lambda_2-\lambda_1)} = -\frac{r}{1-r} < 0. \qquad (3.4)
\]
We will show that $0 \in F(e^{(t_0+\varepsilon e^{i\theta})A})$ for a suitable $\theta \in [0,2\pi]$ and all $\varepsilon > 0$. We just need to show the existence of $s \in (0,1)$ such that $se^{(t_0+\varepsilon e^{i\theta})\lambda_1} + (1-s)e^{(t_0+\varepsilon e^{i\theta})\lambda_2} = 0$, i.e.,
\[
-\frac{s}{1-s} = e^{(t_0+\varepsilon e^{i\theta})\lambda_2}/e^{(t_0+\varepsilon e^{i\theta})\lambda_1} = e^{t_0(\lambda_2-\lambda_1)}e^{\varepsilon e^{i\theta}(\lambda_2-\lambda_1)}.
\]
For this, we need to show that $e^{t_0(\lambda_2-\lambda_1)}e^{\varepsilon e^{i\theta}(\lambda_2-\lambda_1)}$ is a negative real number. Since $e^{t_0(\lambda_2-\lambda_1)}$ is a negative real number by (3.4), we just need to choose $\theta$ such that $e^{\varepsilon e^{i\theta}(\lambda_2-\lambda_1)}$ is a positive real number, so we choose $\theta$ such that $e^{i\theta}(\lambda_2-\lambda_1) = |\lambda_2-\lambda_1|$.

Now let $t_0 \in W(E_A)$ and suppose that $\mathrm{Int}(F(e^{At_0})) \ne \emptyset$. Since $t_0 \in W(E_A)$, we have $0 \in F(e^{At_0})$, so $0 = x_0^*e^{At_0}x_0$ for some $x_0 \in \mathbb{C}^n$ with $\|x_0\|_2 = 1$. Since $\mathrm{Int}(F(e^{At_0})) \ne \emptyset$ and $F(e^{At_0})$ is a convex set, we can take $w_1, w_2 \in F(e^{At_0})$ such that $w_1 = re^{i\theta_1}$, $w_2 = re^{i\theta_2}$, where $r > 0$ and the arguments $\theta_1$ and $\theta_2$ may be negative and satisfy $0 < \theta_2 - \theta_1 < \frac{\pi}{2}$. We also have $w_1 = x_1^*e^{At_0}x_1$, $w_2 = x_2^*e^{At_0}x_2$ for suitable $x_1, x_2 \in \mathbb{C}^n$ with $\|x_1\|_2 = \|x_2\|_2 = 1$.

For $\epsilon > 0$, $\delta \in [0,2\pi)$, and $j = 0,1,2$, let $w_j(\epsilon,\delta) = x_j^*e^{A(t_0+\epsilon e^{i\delta})}x_j$. We have
\[
w_0(\epsilon,\delta) = x_0^*e^{At_0}e^{A\epsilon e^{i\delta}}x_0 = x_0^*e^{At_0}\sum_{k=0}^{\infty}\frac{1}{k!}(A\epsilon e^{i\delta})^kx_0 = \sum_{k=0}^{\infty}\frac{1}{k!}\epsilon^ke^{ik\delta}x_0^*e^{At_0}A^kx_0.
\]
If $x_0^*e^{At_0}A^kx_0 = 0$ for all $k \ge 0$, then we would have $x_0^*e^{A(t_0+\epsilon e^{i\delta})}x_0 = 0$ for all $\epsilon > 0$ and all $\delta \in [0,2\pi)$, and in particular (recall $t_0 \ne 0$ since the origin is never in $W(E_A)$) $x_0^*e^{A0}x_0 = x_0^*x_0 = 0$, which is impossible.

We now let $k \ge 1$ be the smallest integer such that $x_0^*e^{At_0}A^kx_0 \ne 0$. Then
\[
w_0(\epsilon,\delta) = \frac{1}{k!}\epsilon^ke^{ik\delta}x_0^*e^{At_0}A^kx_0 + o(\epsilon^k).
\]
For $\epsilon$ sufficiently small, we have $|w_1(\epsilon,\delta)| \ge \frac{r}{2}$ and $|w_2(\epsilon,\delta)| \ge \frac{r}{2}$, and $|\arg w_1(\epsilon,\delta) - \theta_1| \le \frac14(\theta_2-\theta_1)$, $|\arg w_2(\epsilon,\delta) - \theta_2| \le \frac14(\theta_2-\theta_1)$. Choose $\delta$ such that
\[
e^{ik\delta}x_0^*e^{At_0}A^kx_0 = -\left|x_0^*e^{At_0}A^kx_0\right|e^{i(\theta_1+\theta_2)/2}.
\]
Then for $\epsilon > 0$ sufficiently small, $0$ is inside the triangle with vertices $w_j(\epsilon,\delta)$. So $0 \in F(e^{A(t_0+\epsilon e^{i\delta})})$. Thus, $t_0 + \epsilon e^{i\delta} \in W(E_A)$ for the chosen $\delta$ and all $\epsilon > 0$ sufficiently small. So $t_0$ is not an isolated point.

From the proof we know that for any $t \in W(E_A)$, $W(E_A)$ also contains a line segment $l_t$ with $t$ as one of the two endpoints. This means that $W(E_A)$ cannot contain a circle disjoint from the rest of $W(E_A)$. It also follows that $W(E_A) = \bigcup_{t\in W(E_A)}l_t$. Many of the line segments $l_t$ will merge into one line segment, but $W(E_A)$ is still the union of infinitely many line segments by Proposition 1.

In the next example, we show that a connected component of $W(E_A)$ is not convex in general.

Example 1. Let $A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$. Then from the proof of Theorem 1 we already know that $W(E_A) = \{t \in \mathbb{C} : |t| \ge 2\}$, which is connected but not convex.

We now examine when a point $t_0$ is an interior point of $W(E_A)$.

Proposition 6. Suppose $A \in \mathbb{C}^{n\times n}$ is not a scalar matrix. If the origin is an interior point of $F(e^{At_0})$, then $t_0$ is an interior point of $W(E_A)$.

Proof. We use $D(c,r)$ to denote the open disk with center $c$ and radius $r$. Suppose $0 \in \mathrm{Int}(F(e^{At_0}))$. Then there exists $\epsilon > 0$ such that $D(0,2\epsilon) \subseteq F(e^{At_0})$. In particular, for $j = 0,1,2$, $w_j = \epsilon e^{2j\pi i/3} = x_j^*e^{At_0}x_j$ for some $x_j \in \mathbb{C}^n$ with $\|x_j\|_2 = 1$. Note that $0$ is inside the triangle with vertices $w_j = x_j^*e^{At_0}x_j$ and that the functions $f_j(t) = x_j^*e^{At}x_j$ are continuous at $t_0$. Then there exists $\delta > 0$ such that for all $t \in D(t_0,\delta)$, $0$ is inside the triangle with vertices $x_j^*e^{At}x_j$, so $0 \in F(e^{At})$. Thus, $D(t_0,\delta) \subseteq W(E_A)$.

Corollary 1. Suppose $A \in \mathbb{C}^{n\times n}$ is not a scalar matrix. If $t_0$ is a boundary point of $W(E_A)$, then the origin is a boundary point of $F(e^{At_0})$.
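
Example 1 also illustrates Proposition 6 and Corollary 1: here $e^{At} = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}$, whose field of values is the closed disk centered at $1$ with radius $|t|/2$, so $0 \in F(e^{At})$ exactly when $|t| \ge 2$, and $0$ is a boundary point of $F(e^{At})$ exactly when $|t| = 2$, i.e., at the boundary points of $W(E_A)$. A quick numerical confirmation (ours), reusing contains_zero from the first sketch:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
for t in (0.5, 1.9, 2.0, 2.5, -3.0, 1.2 + 1.8j):
    # membership in W(E_A) should match the closed-form description |t| >= 2
    assert contains_zero(expm(A*t)) == (abs(t) >= 2)
```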

We end this section with one more observation about $W(E_A)$.

Proposition 7. Suppose $A \in \mathbb{C}^{n\times n}$ is not a scalar matrix and $t_0 \in W(E_A)$. Then (a) $-t_0 \in W(E_A)$, and (b) $\bar t_0 \in W(E_{A^*})$.

Proof. For (a), it suffices to show that for every invertible matrix $B$, $0 \in F(B)$ implies that $0 \in F(B^{-1})$. But if $0 \in F(B)$ then $0 \in F(B^*)$, therefore $x_0^*B^*x_0 = 0$ for some unit $x_0 \in \mathbb{C}^n$. Then
\[
0 = x_0^*B^*x_0 = x_0^*B^*B^{-1}Bx_0 = (Bx_0)^*B^{-1}(Bx_0),
\]
so $0 \in F(B^{-1})$ since $Bx_0 \ne 0$.

For (b), we have, for some unit $x_0 \in \mathbb{C}^n$,
\[
0 = x_0^*e^{t_0A}x_0 = x_0^*e^{\bar t_0A^*}e^{-\bar t_0A^*}e^{t_0A}x_0 = (e^{t_0A}x_0)^*e^{-\bar t_0A^*}(e^{t_0A}x_0).
\]
Therefore, $-\bar t_0 \in W(E_{A^*})$ and the conclusion follows from (a).

Part (a) means that $W(E_A)$ is symmetric with respect to the origin.

4. The cases where $A$ is Hermitian or skew-Hermitian. We have seen earlier that $W(E_A)$ contains infinitely many parallel straight lines for any non-scalar matrix $A$. When $A$ is Hermitian or skew-Hermitian, we can show that $W(E_A)$ consists of infinitely many parallel straight lines. In what follows, we only provide proofs for the case where $A$ is Hermitian. If $A$ is skew-Hermitian, then $iA$ is Hermitian and $W(E_A) = iW(E_{iA})$ by Proposition 3, so corresponding results for the skew-Hermitian case will follow readily.

Proposition 8. Suppose $A \in \mathbb{C}^{n\times n}$ is not a scalar matrix.
(a) If $A$ is Hermitian then $W(E_A) \cap \mathbb{R} = \emptyset$, and $t \in W(E_A)$ if and only if $t + \alpha \in W(E_A)$ for all $\alpha \in \mathbb{R}$.
(b) If $A$ is skew-Hermitian then $W(E_A) \cap i\mathbb{R} = \emptyset$, and $t \in W(E_A)$ if and only if $t + i\alpha \in W(E_A)$ for all $\alpha \in \mathbb{R}$.

Proof. Let $A$ be Hermitian. Then for $r \in \mathbb{R}$ the matrix exponential $e^{rA}$ is positive definite. Therefore, $r \notin W(E_A)$. For any $t \in W(E_A)$ there is a nonzero vector $x_t$ such that
\[
0 = x_t^*e^{At}x_t = x_t^*e^{-\frac{\alpha}{2}A}e^{A(t+\alpha)}e^{-\frac{\alpha}{2}A}x_t = w_t^*e^{A(t+\alpha)}w_t,
\]
where $w_t = e^{-\frac{\alpha}{2}A}x_t$ is nonzero. Thus, $t + \alpha \in W(E_A)$.
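
Both the symmetries of Proposition 7 and the translation invariance of Proposition 8(a) can be spot-checked numerically; the sketch below (ours) reuses contains_zero and the hypothetical matrices introduced earlier.

```python
import numpy as np
from scipy.linalg import expm

# Proposition 7 on the matrix of Example 1, where W(E_A) = {|t| >= 2}:
A = np.array([[0.0, 1.0], [0.0, 0.0]])
t0 = 1.5 + 2.0j                                     # |t0| = 2.5, so t0 is in W(E_A)
assert contains_zero(expm(A*t0))
assert contains_zero(expm(A*(-t0)))                 # (a): -t0 in W(E_A)
assert contains_zero(expm(A.conj().T*np.conj(t0)))  # (b): conj(t0) in W(E_{A*})

# Proposition 8(a) on a Hermitian example: membership depends only on Im(t),
# and real points are never in W(E_H).
H = np.diag([1.0, -1.0])
s0 = 0.25 + (np.pi/2)*1j          # Im(s0) = pi/2, so e^{2 s0} < 0 and F(e^{H s0}) contains 0
for alpha in (0.0, 1.0, -3.7):
    assert contains_zero(expm(H*(s0 + alpha)))      # horizontal shifts stay in W(E_H)
    assert not contains_zero(expm(H*alpha))         # real points are not in W(E_H)
```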

Proposition 9. Suppose $A \in \mathbb{C}^{n\times n}$ is not a scalar matrix.
(a) If $A$ is Hermitian then $W(E_A)$ is symmetric with respect to the real axis.
(b) If $A$ is skew-Hermitian then $W(E_A)$ is symmetric with respect to the imaginary axis.

Proof. The conclusion in (a) follows directly from Proposition 8 (a) and Proposition 7 (a).

Propositions 8 and 9 reveal what $W(E_A)$ looks like for Hermitian or skew-Hermitian $A$. In particular, if $A$ is Hermitian, $W(E_A)$ consists of disjoint complex stripes parallel to the real axis, of the form $\{t \in \mathbb{C} : \gamma \le \mathrm{Im}(t) \le \delta\}$ with $\gamma\delta > 0$. Observe that $\gamma$ and $\delta$ should have the same sign since $W(E_A)$ has no intersection with the real axis. Moreover, for each stripe in the upper half-plane, $W(E_A)$ contains its symmetric stripe in the lower half-plane. However, it is possible to have $\gamma = \delta$. In that case, the corresponding stripes are reduced to straight lines parallel to the real axis. Indeed, we have the following result.

Proposition 10. Suppose $A \in \mathbb{C}^{n\times n}$ is Hermitian and has only two distinct eigenvalues $\lambda$ and $\mu$. Then $W(E_A) = \{t \in \mathbb{C} : \mathrm{Im}(t) = (2k+1)\pi/(\lambda-\mu),\ k \in \mathbb{Z}\}$.

Proof. Let $U$ be unitary such that $A = U^*DU$ and $D$ is diagonal with diagonal entries equal to $\lambda$ or $\mu$. Then $F(e^{At}) = F(e^{Dt})$ is the line segment connecting $e^{\lambda t}$ and $e^{\mu t}$ (see [6]). Thus, $t = a + bi \in W(E_A)$ if and only if $0 = se^{\lambda t} + (1-s)e^{\mu t} = (se^{(\lambda-\mu)t} + (1-s))e^{\mu t}$ for some $s \in (0,1)$, if and only if $e^{(\lambda-\mu)t} = e^{(\lambda-\mu)a}e^{(\lambda-\mu)bi}$ is a negative number, if and only if $e^{(\lambda-\mu)bi} = -1$, if and only if $(\lambda-\mu)b = (2k+1)\pi$ for some $k \in \mathbb{Z}$.

We also have examples showing that for a Hermitian matrix $A$ having three distinct eigenvalues, $W(E_A)$ can have stripes with zero width and stripes with nonzero width. In Example 2 of the next section, some stripes with nonzero width are displayed.
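
Proposition 10 is easy to check numerically on a diagonal example. With eigenvalues $\lambda = 2$ and $\mu = -1$ (hypothetical values), it predicts $W(E_A) = \{t : \mathrm{Im}(t) = (2k+1)\pi/3\}$; the sketch below (ours) tests a few points with contains_zero from the first sketch.

```python
import numpy as np
from scipy.linalg import expm

A = np.diag([2.0, -1.0, -1.0])      # Hermitian with two distinct eigenvalues, lambda - mu = 3
tests = [
    (1.1 + (np.pi/3)*1j, True),     # Im(t) = pi/3      -> in W(E_A)
    (-0.4 + np.pi*1j,    True),     # Im(t) = 3*pi/3    -> in W(E_A)
    (0.2 + 0.8j,         False),    # Im(t) not of the form (2k+1)*pi/3
    (2.0 + 0.0j,         False),    # real t is never in W(E_A) for Hermitian A
]
for t, expected in tests:
    assert contains_zero(expm(A*t)) == expected
```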

For Hermitian matrices, the lower bound in (3.1) can be improved.

Proposition 11. Let $A$ be an $n\times n$ Hermitian and non-scalar matrix. Let $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ be the eigenvalues of $A$ (so $\lambda_1 < \lambda_n$ since $A$ is not scalar). Then for all $t \in W(E_A)$,
\[
|t| \ge \frac{\pi}{\lambda_n - \lambda_1},
\]
and the lower bound is sharp.

Proof. By Proposition 8, $\min_{t\in W(E_A)}|t|$ is achieved at $t_a = ia \in W(E_A)$ for some $a > 0$, and $ia$ is a boundary point of $W(E_A)$. It follows from Proposition 6 that $0$ is a boundary point of $F(e^{iaA})$. Since $A$ is Hermitian, $e^{iaA}$ is normal. By a result in [6] and the fact that $0$ is a boundary point of $F(e^{iaA})$, we know that $0 = se^{ia\lambda_p} + (1-s)e^{ia\lambda_q}$ for two distinct eigenvalues $\lambda_p$ and $\lambda_q$ of $A$, and some $s \in (0,1)$. Thus, $e^{ia(\lambda_p-\lambda_q)}$ is a negative real number, so $a(\lambda_p - \lambda_q) = (2k+1)\pi$ for some $k \in \mathbb{Z}$. Now,
\[
a = \frac{|2k+1|\pi}{|\lambda_p - \lambda_q|} \ge \frac{\pi}{\lambda_n - \lambda_1}.
\]
Therefore, $|t| \ge \frac{\pi}{\lambda_n-\lambda_1}$ for all $t \in W(E_A)$. The lower bound is sharp since $t_0 = \frac{\pi i}{\lambda_n-\lambda_1} \in W(E_A)$. In fact, we have $e^{t_0(\lambda_n-\lambda_1)} = -1$ and thus $\frac12e^{t_0\lambda_1} + \frac12e^{t_0\lambda_n} = 0$, so $0 \in F(e^{At_0})$.

Note that we have $\inf_{k\in\mathbb{C}}\|A - kI_n\|_2 = \frac12(\lambda_n - \lambda_1)$ in Proposition 11. So the constant $\ln 2$ in (3.1) has been improved to $\pi/2$ when $A$ is Hermitian.

5. Some illustrative examples. In this section, we give three matrices and plot the numerical range of the corresponding matrix exponential function for each case. The plots are obtained using an inverse numerical range Matlab file based on the algorithm described in [3].

Example 2. Consider a $3\times 3$ Hermitian matrix $A$ with eigenvalues $\lambda_1 = -2$, $\lambda_2 = 3$, $\lambda_3 = 4$. We plot $W(E_A)$ within $[-4,4]\times[-4,4]$ in Figure 5.1. We can see 6 stripes of $W(E_A)$, symmetric about the real axis. The distance between the origin and $W(E_A)$ is seen to be around 0.52, consistent with the exact distance of $\pi/6$ obtained in Proposition 11.

Example 3. Consider a $3\times 3$ unitary matrix $A$. We plot $W(E_A)$ within $[-7,7]\times[-7,7]$ in Figure 5.2. The figure shows that $W(E_A)$ contains some points on the real axis. It also suggests that $W(E_A)$ is symmetric about the origin and contains infinitely many parallel straight lines. All these are consistent with our theoretical results.
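
Such plots can be reproduced by scanning a grid of $t$ values with the membership test used earlier. As a cheaper spot check of Proposition 11 for Example 2, the sketch below (ours) uses a diagonal matrix with the same eigenvalues $-2, 3, 4$ (unitary similarity does not change $W(E_A)$), and confirms that $t_0 = \pi i/6$ lies in $W(E_A)$ while a few points with $|t| < \pi/6$ do not.

```python
import numpy as np
from scipy.linalg import expm

A = np.diag([-2.0, 3.0, 4.0])                 # same eigenvalues as in Example 2
t0 = np.pi*1j/6                               # |t0| = pi/6 ~ 0.5236, the claimed minimum
assert contains_zero(expm(A*t0))              # t0 is in W(E_A)
for t in (0.4j, 0.5, 0.5*np.exp(1j)):         # |t| < pi/6, so these are not in W(E_A)
    assert not contains_zero(expm(A*t))
```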

Fig. 5.1. $W(E_A)$ for Example 2.
Fig. 5.2. $W(E_A)$ for Example 3.

Example 4. Consider a randomly generated $3\times 3$ complex matrix $A$. We plot $W(E_A)$ within $[-2\|A\|_2, 2\|A\|_2]\times[-2\|A\|_2, 2\|A\|_2]$ in Figure 5.3. The figure shows that $W(E_A)$ contains some points on the real axis.

It also suggests that $W(E_A)$ is symmetric about the origin and contains infinitely many parallel straight lines. All these are consistent with our theoretical results.

Fig. 5.3. $W(E_A)$ for Example 4.

Acknowledgment. The authors thank the two referees for their helpful comments.

REFERENCES

[1] B. Beckermann and L. Reichel. Error estimates and evaluation of matrix functions via the Faber transform. SIAM Journal on Numerical Analysis, 47.
[2] M.-D. Choi and C.-K. Li. Numerical ranges of the powers of an operator. Journal of Mathematical Analysis and Applications, 365.
[3] C. Chorianopoulos, P. Psarrakos, and F. Uhlig. A method for the inverse numerical range problem. Electronic Journal of Linear Algebra, 20.
[4] C. Diogo. Algebraic properties of the set of operators with 0 in the closure of the numerical range. Operators and Matrices, 9:83-93.
[5] N.J. Higham. Functions of Matrices. SIAM, Philadelphia, 2008.
[6] R.A. Horn and C.R. Johnson. Topics in Matrix Analysis. Cambridge University Press, Cambridge, 1991.
[7] P. Lancaster and P. Psarrakos. Normal and seminormal eigenvalues of matrix functions. Integral Equations Operator Theory, 41.
[8] C.-K. Li and L. Rodman. Numerical range of matrix polynomials. SIAM Journal on Matrix Analysis and Applications, 15.
[9] J.G. Stampfli and J.P. Williams. Growth conditions and the numerical range in a Banach algebra. The Tohoku Mathematical Journal, 20, 1968.
