Characterization of half-radial matrices


Iveta Hnětynková, Petr Tichý

Faculty of Mathematics and Physics, Charles University, Sokolovská 83, Prague 8, Czech Republic

Abstract

The numerical radius $r(A)$ is the radius of the smallest ball with the center at zero containing the field of values of a given square matrix $A$. It is well known that $r(A) \le \|A\| \le 2\,r(A)$, where $\|\cdot\|$ is the matrix 2-norm. Matrices attaining the lower bound are called radial, and have been analyzed thoroughly. This is not the case for matrices attaining the upper bound, where only partial results are available. In this paper, we characterize matrices satisfying $r(A) = \|A\|/2$, and call them half-radial. We investigate their singular value decomposition, field of values and algebraic structure. We give several necessary and sufficient conditions for a matrix to be half-radial. Our results suggest that half-radial matrices correspond to the extreme case of the attainable constant in Crouzeix's conjecture.

Keywords: field of values, numerical radius, singular subspaces, Crouzeix's conjecture
2000 MSC: 15A60, 47A12, 65F35

1. Introduction

Consider the space $\mathbb{C}^n$ endowed with the Euclidean vector norm $\|z\| = (z^* z)^{1/2}$ and the Euclidean inner product $\langle z, w \rangle = w^* z$, where $^*$ denotes the Hermitian transpose. Let $A \in \mathbb{C}^{n \times n}$. The field of values (or numerical range) of $A$ is the set in the complex plane defined as

    $W(A) \equiv \{ \langle Az, z \rangle : z \in \mathbb{C}^n,\ \|z\| = 1 \}.$    (1)

Email addresses: iveta.hnetynkova@mff.cuni.cz (Iveta Hnětynková), petr.tichy@mff.cuni.cz (Petr Tichý)

Preprint submitted to Linear Algebra and its Applications, April 7, 2018

Recall that $W(A)$ is a compact convex subset of $\mathbb{C}$ which contains the spectrum $\sigma(A)$ of $A$; see, e.g., [1, p. 8]. The radius of the smallest ball with the center at zero which contains the field of values, i.e.,

    $r(A) \equiv \max_{\|z\| = 1} |\langle Az, z \rangle| = \max_{\zeta \in W(A)} |\zeta|,$

is called the numerical radius. It is well known that

    $r(A) \le \|A\| \le 2\,r(A),$    (2)

where $\|\cdot\|$ denotes the matrix 2-norm; see, e.g., [2, p. 331, problem 1]. The lower as well as the upper bound in (2) are attainable. If $\rho(A)$ denotes the spectral radius of $A$, $\rho(A) = \max\{|\lambda| : \lambda \in \sigma(A)\}$, then it holds that $r(A) = \|A\|$ if and only if $\rho(A) = \|A\|$; see [1, p. 44, problem 4] or [3]. Such matrices are called radial, and there exist several equivalent conditions which characterize them; see, e.g., [1, p. 45, problem 7] or [4]. Concerning the right inequality in (2), a sufficient condition was formulated in [5, p. 11, Theorem 1.3-4]: if the ranges of $A$ and $A^*$ are orthogonal, then the upper bound is attained. It is also known that if $\|A\| = 2\,r(A)$, then $A$ has a two-dimensional reducing subspace on which it is the shift [5, p. 11, Theorem 1.3-5]. However, to our knowledge, a deeper analysis of matrices satisfying $r(A) = \frac{1}{2}\|A\|$, which we call half-radial, has not been given yet.

In this paper we fill this gap. We derive several equivalent (necessary and sufficient) conditions characterizing half-radial matrices. We study in more detail their specific algebraic structure, their left and right singular subspaces corresponding to the maximum singular value, and their field of values. Our results give a strong indication that half-radial matrices correspond to the extreme case of the attainable constant in Crouzeix's conjecture [6].

The paper is organized as follows. Section 2 gives some basic notation and summarizes well-known results on the field of values, Section 3 is the core part giving characterizations of half-radial matrices, Section 4 discusses Crouzeix's conjecture, and Section 5 concludes the paper.
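For illustration (this sketch is not part of the paper), the numerical radius can be evaluated numerically from the identity $r(A) = \max_{\theta} \lambda_{\max}\big( (e^{i\theta} A + e^{-i\theta} A^*)/2 \big)$, which follows from $|\langle Az, z \rangle| = \max_{\theta} \mathrm{Re}\,(e^{i\theta} \langle Az, z \rangle)$; the angle-sampling resolution below is an arbitrary choice:

```python
import numpy as np

def numerical_radius(A, n_angles=2000):
    """Approximate r(A) via r(A) = max over theta of
    lambda_max((e^{i theta} A + e^{-i theta} A^*) / 2)."""
    A = np.asarray(A, dtype=complex)
    r = 0.0
    for t in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        H = (np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2.0
        r = max(r, np.linalg.eigvalsh(H)[-1])  # eigvalsh returns ascending order
    return r

# the 2x2 shift: r = 1/2 while the norm is 1, so the upper bound in (2) is attained
S = np.array([[0.0, 1.0], [0.0, 0.0]])
print(numerical_radius(S))  # ~0.5
```

For any matrix, the computed value should satisfy the sandwich inequality (2) up to sampling error.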
2. Preliminaries

In this section we introduce some basic notation and prove for completeness the inequality (2). First, recall that any two vectors $z$ and $w$ satisfy the

polarization identity

    $4\,\langle Az, w \rangle = \langle A(z+w), (z+w) \rangle - \langle A(z-w), (z-w) \rangle + i\,\langle A(z+iw), (z+iw) \rangle - i\,\langle A(z-iw), (z-iw) \rangle,$

where $i$ is the imaginary unit, and the parallelogram law

    $\|z+w\|^2 + \|z-w\|^2 = 2\,(\|z\|^2 + \|w\|^2).$

The definition of the numerical radius implies the inequality

    $|\langle Az, z \rangle| \le r(A)\,\|z\|^2.$    (3)

Using these tools we can now easily prove the following well-known result; see, e.g., [2, p. 331, problem 1] or [5, p. 9, Theorem 1.3-1].

Theorem 1. For $A \in \mathbb{C}^{n \times n}$ it holds that $r(A) \le \|A\| \le 2\,r(A)$.

Proof. The left inequality is trivial. Now using the polarization identity, the inequality (3), and the parallelogram law, we find out that for any $z$ and $w$ it holds that

    $4\,|\langle Az, w \rangle| \le 4\,r(A)\,(\|z\|^2 + \|w\|^2).$

Consider a unit norm vector $z$ such that $\|A\| = \|Az\|$, and define $w = Az/\|Az\|$. Then using the previous inequality, we obtain $4\,\|A\| \le 8\,r(A)$.

Further, consider the singular value decomposition (SVD) of $A$, $A = U \Sigma V^*$, where $U, V \in \mathbb{C}^{n \times n}$ are unitary and $\Sigma$ is diagonal with the singular values of $A$ on the diagonal. Denote by $\sigma_{\max}$ the largest singular value of $A$, satisfying $\sigma_{\max} = \|A\|$. The unit norm left and right singular vectors $u$, $v$ such that

    $Av = \sigma_{\max}\, u$    (4)

are called a pair of maximum singular vectors of $A$. Denote by

    $V_{\max}(A) \equiv \{ x \in \mathbb{C}^n : \|Ax\| = \|A\|\,\|x\| \}$

the maximum right singular subspace, i.e., the span of right singular vectors of $A$ corresponding to $\sigma_{\max}$. Similarly, denote by $U_{\max}(A)$ the maximum left singular subspace of $A$. Recall that any vector $z \in \mathbb{C}^n$ can be uniquely decomposed into two orthogonal components,

    $z = x + y, \qquad x \in \mathcal{R}(A^*), \quad y \in \mathcal{N}(A),$    (5)

where $\mathcal{R}(\cdot)$ denotes the range, and $\mathcal{N}(\cdot)$ the null space of a given matrix. Note that here we generally consider the field of complex numbers.

We introduce a definition of the matrices we are interested in.

Definition 1. A nonzero matrix $A$ is called half-radial if $\|A\| = 2\,r(A)$.

In the following, we implicitly assume that $A$ is nonzero and $n \ge 2$. If $A$ is the zero matrix or $n = 1$, all the statements are trivial.

3. Necessary and sufficient conditions for half-radial matrices

Now we derive necessary and sufficient conditions for a nonzero matrix $A$ to satisfy $\|A\| = 2\,r(A)$. Using the decomposition (5) we define the set

    $\Theta_A \equiv \{ z \in \mathbb{C}^n : \|z\| = 1,\ r(A) = |\langle Az, z \rangle|,\ \langle Ax, x \rangle = 0 \},$    (6)

i.e., the set of all unit norm maximizers of $|\langle Az, z \rangle|$ such that $\langle Ax, x \rangle = 0$. In the following we prove that a matrix is half-radial if and only if $\Theta_A \neq \emptyset$, and provide a complete characterization of the non-empty set $\Theta_A$ based on the maximum singular subspaces of $A$. Then, the algebraic structure of $A$ is studied.

3.1. Basic conditions

A sufficient condition for $A$ to be half-radial is formulated in the following lemma. Parts of the proofs of Lemma 2 and Lemma 3 are inspired by [5, p. 11, Theorem 1.3-4, Theorem 1.3-5].

Lemma 2. If $\Theta_A$ is non-empty, then $\|A\| = 2\,r(A)$. Moreover, for each $z \in \Theta_A$ of the form (5) it holds that $\|x\| = \|y\| = \frac{1}{\sqrt{2}}$, $\|Ax\| = \|A\|\,\|x\|$ and $|\langle Ax, y \rangle| = \|Ax\|\,\|y\|$.

Proof. Let $z$ be a unit norm vector. Considering its decomposition (5), it holds that

    $\langle Az, z \rangle = \langle Ax, y \rangle + \langle Ax, x \rangle.$    (7)

Since $1 = \|z\|^2 = \|x\|^2 + \|y\|^2$ and $0 \le (\|x\| - \|y\|)^2$, we find out that $2\,\|x\|\,\|y\| \le 1$, with the equality if and only if $\|x\| = \|y\|$. If $\Theta_A \neq \emptyset$, then for any unit norm vector $z \in \Theta_A$ we get

    $\|A\| \le 2\,r(A) = 2\,|\langle Az, z \rangle| = 2\,|\langle Ax, y \rangle| \le 2\,\|Ax\|\,\|y\| \le 2\,\|A\|\,\|x\|\,\|y\| \le \|A\|.$

Hence all the terms are equal, and therefore $2\,r(A) = \|A\|$ and $\|x\| = \|y\| = \frac{1}{\sqrt{2}}$.

In the following lemma we prove the opposite implication.

Lemma 3. If $\|A\| = 2\,r(A)$, then $\Theta_A$ is non-empty. Moreover, for any pair $u$, $v$ of maximum singular vectors of $A$ it holds that $v \in \mathcal{N}(A^*)$, $u \in \mathcal{N}(A)$, $u \perp v$, and the vector $z = \frac{1}{\sqrt{2}}(u + v)$ satisfies $z \in \Theta_A$.

Proof. Assume without loss of generality that $\|A\| = \sigma_{\max} = 1$. Consider now any unit norm maximum right singular vector $v \in V_{\max}(A)$ and denote $u = Av$ the corresponding maximum left singular vector. Then

    $\|u\| = \|Av\| = \|A\|\,\|v\| = \|v\| = 1.$

Moreover, for any $\theta \in \mathbb{R}$ we obtain

    $r(A) = r\left( e^{i\theta} A \right) = \tfrac{1}{2}.$

Then, since $e^{i\theta} A + e^{-i\theta} A^*$ is Hermitian, it holds that

    $\left\| e^{i\theta} A + e^{-i\theta} A^* \right\| = r\left( e^{i\theta} A + e^{-i\theta} A^* \right) \le r\left( e^{i\theta} A \right) + r\left( e^{-i\theta} A^* \right) = 1$

and

    $\left\| e^{i\theta} A u + e^{-i\theta} A^* u \right\| \le 1, \qquad \left\| e^{i\theta} A v + e^{-i\theta} A^* v \right\| \le 1.$

For the norm on the left we get

    $\left\| e^{i\theta} A u + e^{-i\theta} A^* u \right\|^2 = e^{2i\theta} \langle A^2 u, u \rangle + e^{-2i\theta} \langle (A^*)^2 u, u \rangle + \langle Au, Au \rangle + \langle A^* u, A^* u \rangle = 2\,\mathrm{Re}\left( e^{2i\theta} \langle A^2 u, u \rangle \right) + \|Au\|^2 + \|A^* u\|^2.$

Recalling that $\|v\| = \|A^* u\| = 1$, we obtain

    $\|Au\|^2 \le -2\,\mathrm{Re}\left( e^{2i\theta} \langle A^2 u, u \rangle \right).$

Similarly, from $\|u\| = \|Av\| = 1$ it can be proved that

    $\|A^* v\|^2 \le -2\,\mathrm{Re}\left( e^{2i\theta} \langle (A^*)^2 v, v \rangle \right).$

Since the inequalities hold for any $\theta \in \mathbb{R}$, we get $A^* v = Au = 0$, i.e., $v \in \mathcal{N}(A^*)$ and $u \in \mathcal{N}(A)$. Hence, $v \perp \mathcal{R}(A)$, $u \perp \mathcal{R}(A^*)$, and the maximum left and right singular vectors $u$, $v$ are orthogonal. Furthermore,

    $\langle Av, v \rangle = \langle Au, v \rangle = \langle Au, u \rangle = 0.$

Clearly, the unit norm vector

    $z = \tfrac{1}{\sqrt{2}}(u + v)$

is in the form (5) with $x = \frac{1}{\sqrt{2}} v$, $y = \frac{1}{\sqrt{2}} u$. Moreover, $z$ is a maximizer by

    $\langle Az, z \rangle = \tfrac{1}{2} \langle A(u+v), (u+v) \rangle = \tfrac{1}{2}\left( \langle Av, v \rangle + \langle Av, u \rangle + \langle Au, v \rangle + \langle Au, u \rangle \right) = \tfrac{1}{2} \langle Av, u \rangle,$

yielding $|\langle Az, z \rangle| = \frac{1}{2} = r(A)$. Since also $\langle Ax, x \rangle = \frac{1}{2} \langle Av, v \rangle = 0$, it holds that $z \in \Theta_A$.

Recall that for any pair $u$, $v$ of singular vectors of $A$ corresponding to a nonzero singular value it holds that $v \in \mathcal{R}(A^*)$, $u \in \mathcal{R}(A)$. Consequently, the previous lemmas imply the following result characterizing half-radial matrices by the properties of their maximum right singular subspace.

Lemma 4. $\|A\| = 2\,r(A)$ if and only if for any unit norm vector $v \in V_{\max}(A)$ it holds that

    $v \in \mathcal{R}(A^*) \cap \mathcal{N}(A^*), \qquad \frac{Av}{\|A\|} \in \mathcal{R}(A) \cap \mathcal{N}(A), \qquad \text{and} \quad z = \frac{1}{\sqrt{2}}\left( v + \frac{Av}{\|A\|} \right) \ \text{maximizes} \ |\langle Az, z \rangle|.$    (8)

Equivalently, $\|A\| = 2\,r(A)$ if and only if there exists a unit norm $v \in V_{\max}(A)$ such that the condition (8) is satisfied.

Proof. Let $\|A\| = 2\,r(A)$. For any $v \in V_{\max}(A)$, $\|v\| = 1$, the vectors $u = Av/\|A\|$ and $v$ form a pair of maximum singular vectors of $A$. Thus from Lemma 3 it follows that $z$ defined in (8) is a unit norm maximizer of $|\langle Az, z \rangle|$. Furthermore, by the same lemma, $v \in \mathcal{N}(A^*)$ and $Av = \|A\|\,u \in \mathcal{N}(A)$. Moreover, since $v$ and $u$ are right and left singular vectors, we also have $v \in \mathcal{R}(A^*)$, $Av \in \mathcal{R}(A)$, yielding (8).

On the other hand, if there is a unit norm right singular vector $v \in V_{\max}(A)$ such that the condition (8) is satisfied, then the vector $z = \frac{1}{\sqrt{2}}(v + u)$, where $u = Av/\|A\|$, is a unit norm maximizer of $|\langle Az, z \rangle|$. Moreover, $\frac{1}{\sqrt{2}} v$ is the component of $z$ in $\mathcal{R}(A^*)$ and $\langle Av, v \rangle = 0$. From (6), $z$ is an element of $\Theta_A$. Hence, $\Theta_A$ is non-empty and by Lemma 2 we conclude that $\|A\| = 2\,r(A)$.

The previous lemma implies several SVD properties of half-radial matrices, which we summarize below.

Lemma 5. Let $A \in \mathbb{C}^{n \times n}$ be half-radial, i.e., $\|A\| = 2\,r(A)$. Then:

1. $V_{\max}(A) \subseteq \mathcal{R}(A^*) \cap \mathcal{N}(A^*)$, $U_{\max}(A) \subseteq \mathcal{R}(A) \cap \mathcal{N}(A)$,
2. $V_{\max}(A) \perp U_{\max}(A)$,
3. the multiplicity $m$ of $\sigma_{\max}$ is not greater than $n/2$,
4. the multiplicity of the zero singular value is at least $m$.

Proof. The first property follows from the condition (8) in Lemma 4 and the fact that $V_{\max}(A) \subseteq \mathcal{R}(A^*)$ and $U_{\max}(A) \subseteq \mathcal{R}(A)$. Combining $V_{\max}(A) \subseteq \mathcal{R}(A^*)$ with $U_{\max}(A) \subseteq \mathcal{N}(A)$ yields the orthogonality of the maximum singular subspaces. Furthermore, since $V_{\max}(A), U_{\max}(A) \subseteq \mathbb{C}^n$ and $V_{\max}(A) \perp U_{\max}(A)$, we directly get the restriction on the multiplicity of $\sigma_{\max}$. Finally, the relation $V_{\max}(A) \subseteq \mathcal{N}(A^*)$ implies that $A$ is singular, with the multiplicity of the zero singular value being equal to or greater than the multiplicity of $\sigma_{\max}$.
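The subspace properties of Lemma 5 can be observed numerically by extracting the maximum singular subspaces from a computed SVD. A minimal sketch (the 3-by-3 test matrix is a hypothetical half-radial example with $\|A\| = 1$ and $r(A) = \frac{1}{2}$, chosen for this illustration):

```python
import numpy as np

# a small half-radial matrix: the shift on span{e1, e2} plus an eigenvalue 0.3
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.3]])

U, s, Vh = np.linalg.svd(A)      # singular values come sorted descending
mask = s >= s[0] - 1e-12         # select sigma_max (here: multiplicity 1)
Umax, Vmax = U[:, mask], Vh[mask].conj().T

# Lemma 5: Vmax in N(A*), Umax in N(A), Vmax orthogonal to Umax, A singular
```

The assertions below check exactly the four properties listed in the lemma.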

It is worth noting that neither condition 2 nor the stronger condition 1 in Lemma 5 is sufficient to ensure that a given matrix $A$ is half-radial. This can be illustrated by the following examples.

Example 1. Consider a half-radial matrix $A$ such that the multiplicity of $\sigma_{\max}$ (and thus also the dimension of the maximum singular subspaces) is the maximum possible, i.e., $n/2$. Then $V_{\max}(A) \oplus U_{\max}(A) = \mathbb{C}^n$, and $U_{\max}(A) = \mathcal{N}(A)$, $V_{\max}(A) = \mathcal{N}(A^*)$. Thus $A$ has only two singular values, $\sigma_{\max}$ and $0$, both with the multiplicity $n/2$. Consequently, an SVD of a half-radial $A$ has in this case the form

    $A = U \begin{bmatrix} \sigma_{\max} I & 0 \\ 0 & 0 \end{bmatrix} V^*, \qquad I \in \mathbb{R}^{n/2 \times n/2}.$    (9)

The unitary matrices above can be chosen as $U = [\,U_{\max}, V_{\max}\,]$, $V = [\,V_{\max}, U_{\max}\,]$, where the columns of $U_{\max}$ and $V_{\max}$ form an orthonormal basis of $U_{\max}(A)$ and $V_{\max}(A)$, respectively, and $U e_i$, $V e_i$ is for $i = 1, \ldots, n/2$ a pair of maximum singular vectors of $A$. However, assuming only that $A$ has orthogonal maximum right and left singular subspaces, both of the dimension $n/2$, we can just conclude that an SVD of $A$ has in general the form

    $A = U \begin{bmatrix} \sigma_{\max} I & 0 \\ 0 & \widetilde{\Sigma} \end{bmatrix} V^*, \qquad I \in \mathbb{R}^{n/2 \times n/2},$    (10)

with $\widetilde{\Sigma} \in \mathbb{R}^{n/2 \times n/2}$ having singular values smaller than $\sigma_{\max}$ on the diagonal.

The previous example also shows that if the multiplicity of $\sigma_{\max}$ is $n/2$, then the condition 1 from Lemma 5 alone already implies that the matrix $A$ is half-radial. However, this is not true in general, as we illustrate by the next example.

Example 2. Consider the matrix given by its SVD

    $A = \begin{bmatrix} e_3 & e_2 & e_1 \end{bmatrix} \begin{bmatrix} 1 & & \\ & 0.9 & \\ & & 0 \end{bmatrix} \begin{bmatrix} e_1 & e_2 & e_3 \end{bmatrix}^* = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0.9 & 0 \\ 1 & 0 & 0 \end{bmatrix}.$

Obviously, $U_{\max}(A) = \mathrm{span}\{e_3\}$, $V_{\max}(A) = \mathrm{span}\{e_1\}$. Since $A e_3 = 0$ and $A^* e_1 = 0$, we obtain $U_{\max}(A) \subseteq \mathcal{N}(A)$ and $V_{\max}(A) \subseteq \mathcal{N}(A^*)$. The vector $e_2$ is a (unit norm) eigenvector of $A$. Consequently,

    $r(A) \ge |\langle A e_2, e_2 \rangle| = 0.9\,|\langle e_2, e_2 \rangle| = 0.9.$

Since $\|A\| = 1$, we see that $A$ is not half-radial, even though the condition 1 from Lemma 5 holds.

3.2. Set of maximizers $\Theta_A$

Now we study the set of maximizers $\Theta_A$. Lemma 2 implies that for any $z \in \Theta_A$ its component $x$ in $\mathcal{R}(A^*)$ satisfies $x \in V_{\max}(A)$. On the other hand, Lemma 3 says that for half-radial matrices a maximizer from $\Theta_A$ can be constructed as a linear combination of the vectors $u$, $v$ representing a (necessarily orthogonal) pair of maximum singular vectors of $A$. We are ready to prove that the whole set $\Theta_A$ is formed by linear combinations of such vectors. Particularly, we define the set

    $\Omega_A \equiv \left\{ \frac{1}{\sqrt{2}}\left( e^{i\alpha} v + e^{i\beta} u \right) : v \in V_{\max}(A),\ \|v\| = 1,\ u = \frac{Av}{\|A\|},\ \alpha, \beta \in \mathbb{R} \right\}$

and get the following lemma.

Lemma 6. $\Theta_A$ is either empty or $\Theta_A = \Omega_A$.

Proof. Let $\Theta_A$ be non-empty. Consider for simplicity a unit norm vector $z \in \Theta_A$ in the scaled form (5), $z = \frac{1}{\sqrt{2}}(x + y)$, i.e., $x \perp y$, $\|x\| = \|y\| = 1$, $\langle Ax, x \rangle = 0$. We first show that $z \in \Omega_A$. From Lemma 2 we know that $x$ satisfies $\|Ax\| = \|A\|$, and $Ax$ and $y$ are linearly dependent. Hence,

    $Ax = \gamma y, \qquad \gamma = \langle Ax, y \rangle,$

with

    $|\gamma| = |\langle Ax, y \rangle| = \|Ax\|\,\|y\| = \|A\|\,\|x\|\,\|y\| = \|A\| = \sigma_{\max}.$

Now we can rotate the vector $y$ in order to get a pair of maximum singular vectors. Define the unit norm vector $\tilde{y}$ by

    $\tilde{y} \equiv e^{i\beta} y, \qquad e^{i\beta} \equiv \frac{\langle Ax, y \rangle}{\|A\|};$

then

    $Ax = \gamma y = \left( \|A\|\, e^{i\beta} \right) \left( e^{-i\beta} \tilde{y} \right) = \|A\|\,\tilde{y}.$

Thus $x$, $\tilde{y}$ is a pair of maximum singular vectors of $A$. Hence, it holds that

    $z = \frac{1}{\sqrt{2}}(x + y) = \frac{1}{\sqrt{2}}\left( x + e^{-i\beta} \tilde{y} \right), \qquad x \in V_{\max}(A), \quad \tilde{y} = \frac{Ax}{\|A\|},$

giving $z \in \Omega_A$. Therefore, $\Theta_A \subseteq \Omega_A$.

Now we prove the opposite inclusion $\Omega_A \subseteq \Theta_A$. Since $\Theta_A \neq \emptyset$, it holds that $\|A\| = 2\,r(A)$ and any vector $z$ of the form

    $z = \frac{1}{\sqrt{2}}(v + u), \qquad v \in V_{\max}(A), \quad \|v\| = 1, \quad u = \frac{Av}{\|A\|},$

is an element of $\Theta_A$; see Lemma 3. Consider now a unit norm vector $z \in \Omega_A$ of the form

    $z = \frac{1}{\sqrt{2}}\left( e^{i\alpha} v + e^{i\beta} u \right).$

We know that $\|u\| = \|v\| = 1$, $u \perp v$, and $e^{i\alpha} v \in \mathcal{R}(A^*)$, $e^{i\beta} u \in \mathcal{N}(A)$. Moreover,

    $|\langle Az, z \rangle| = \frac{1}{2} \left| \left\langle A\left( e^{i\alpha} v \right), \left( e^{i\beta} u \right) \right\rangle \right| = \frac{1}{2} |\langle Av, u \rangle| = \frac{\|A\|}{2} = r(A),$

so that $z$ is a maximizer. Since also $\langle A e^{i\alpha} v, e^{i\alpha} v \rangle = \langle Av, v \rangle = 0$, we get $z \in \Theta_A$.

Note that in general, the set of maximizers $\Theta_A$ need not represent the set of all maximizers of $|\langle Az, z \rangle|$. We illustrate that by an example.

Example 3. Consider the matrix given by its SVD

    $A = \begin{bmatrix} e_1 & e_3 & e_2 \end{bmatrix} \begin{bmatrix} 1 & & \\ & \frac{1}{2} & \\ & & 0 \end{bmatrix} \begin{bmatrix} e_2 & e_3 & e_1 \end{bmatrix}^* = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \frac{1}{2} \end{bmatrix}.$

11 It holds that A = 1 and r(a) = 1. From Lemma 6, we see that Θ A is generated by the maximum right singular vector e and the corresponding maximum left singular vector e 1. But obviously, the right singular vector e 3 corresponding to the singular value 1 is not in Θ A and also represents a maximizer, since e T 3 Ae 3 = 1 = 1 A. We have seen that for half-radial matrices the vectors of the form ( e iα v + e iβ u ) e iα v + e iβ u, with v V max(a), v = 1, u = Av, α, β R, A are maximizers of Az, z. This is also true for Hermitian, and, more generally, for normal and radial matrices. It remains an open question, whether it is possible to find larger classes of matrices such that the maximizers are of the form above Algebraic structure of half-radial matrices Denote first [ ] 0 1 J = R (11) 0 0 the two-dimensional Jordan block corresponding to the zero eigenvalue (the shift). It is well known that W (J) is the disk with the center at zero and the radius 1. Since J = 1, the matrix J is half-radial. Inspired partly by [5, p.11, Theorem 1.3 5] showing that a matrix satisfying A = r(a) has a two-dimensional reducing subspace on which it is the shift J, we now give a full characterization of half-radial matrices from the point of view of their algebraic structure. Lemma 7. Let A C n n be a nonzero matrix such that dim V max (A) = m. It holds that A = r(a) if and only if A is unitarily similar to the matrix ( A I m J) B, where B is a matrix satisfying B < A and r(b) 1 A. Proof. Assume that A = r(a) and let u i, v i, i = 1,..., m be pairs of maximum singular vectors of A such that {v 1,..., v m } is an orthonormal basis of V max (A) and {u 1,..., u m } is an orthonormal basis of U max (A). From 11

Lemma 5 it follows that all the vectors $v_1, \ldots, v_m, u_1, \ldots, u_m$ are mutually orthogonal. Let the couple $u$, $v$ be any of the couples $u_i$, $v_i$ defined above. Denote $Q \equiv [\,u, v, P\,]$, where $P$ is any matrix such that $Q$ is unitary, and consider the unitary similarity transformation $Q^* A Q$. Since also $v \in \mathcal{N}(A^*)$, $u \in \mathcal{N}(A)$ (see Lemma 4), it holds that $v \perp \mathcal{R}(A)$, $u \perp \mathcal{R}(A^*)$, and therefore

    $v^* A Q = 0, \qquad Q^* A u = 0,$

i.e., the second row and the first column of $Q^* A Q$ are the zero vectors. Next, using $Av = \|A\|\,u$, $A^* u = \|A\|\,v$, $v^* A v = 0$, $u^* A u = 0$, we obtain

    $u^* A Q = \left[\, 0,\ \|A\|,\ u^* A P \,\right], \qquad Q^* A v = \left[\, \|A\|,\ 0,\ P^* A v \,\right]^T,$

where

    $u^* A P = \|A\|\, v^* P = 0, \qquad P^* A v = \|A\|\, P^* u = 0.$

Therefore,

    $Q^* A Q = \begin{bmatrix} \|A\|\, J & 0 \\ 0 & B \end{bmatrix} = (\|A\|\, J) \oplus B, \qquad B = P^* A P.$

Let us apply the same approach using all the vectors $u_i$ and $v_i$, i.e., define $Q \equiv [\,u_1, v_1, u_2, v_2, \ldots, u_m, v_m, P\,]$, where $P$ is any matrix such that $Q$ is unitary. Then

    $Q^* A Q = \left( \|A\|\,(I_m \otimes J) \right) \oplus B, \qquad B = P^* A P.$

Trivially, it holds that $r(B) \le r(A) = \frac{1}{2}\|A\|$, and $\|B\| < \|A\|$.

On the other hand, let $A$ be unitarily similar to the matrix $(\|A\|\,(I_m \otimes J)) \oplus B$ with $r(B) \le \frac{1}{2}\|A\|$. Recall that for any square matrices $C$ and $D$ it holds that $W(C \oplus D) = \mathrm{cvx}(W(C) \cup W(D))$; see, e.g., [1, p. 12]. Since $W(\|A\|\, J)$ is the disk with the center at zero and the radius $\frac{1}{2}\|A\|$, and $r(B) \le \frac{1}{2}\|A\|$, it holds that

    $r(A) = \max\left\{ r(\|A\|\, J),\ r(B) \right\} = \tfrac{1}{2}\|A\|.$

Hence, $A$ is half-radial.
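The block structure of Lemma 7 can be checked numerically (a sketch; the block $B$ below is an arbitrary choice satisfying $r(B) \le \frac{1}{2}\|A\|$ and $\|B\| < \|A\|$, and the angle-sampling computation of $r(A)$ is an illustrative shortcut):

```python
import numpy as np

def numerical_radius(A, n_angles=720):
    """r(A) = max over theta of lambda_max((e^{i theta} A + e^{-i theta} A^*)/2)."""
    A = np.asarray(A, dtype=complex)
    return max(
        np.linalg.eigvalsh((np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2)[-1]
        for t in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    )

J = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.diag([0.4, -0.3])     # Hermitian block with r(B) = ||B|| = 0.4 <= 1/2
A = np.zeros((6, 6))
A[0:2, 0:2] = J              # assemble (||A|| (I_2 kron J)) + B with ||A|| = 1
A[2:4, 2:4] = J
A[4:6, 4:6] = B
```

The resulting matrix has norm 1 and numerical radius 1/2, i.e., it is half-radial, in agreement with the "if" direction of the lemma.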

In words, Lemma 7 shows that a matrix $A$ is half-radial if and only if it is unitarily similar to a block diagonal matrix with one block being a half-radial matrix with the maximum multiplicity of $\sigma_{\max}$ (see (9)) and the other block having a smaller norm and a numerical radius not exceeding $\frac{1}{2}\|A\|$.

We have discussed previously that the orthogonality of the maximum singular subspaces does not ensure that a given matrix $A$ is half-radial (see Example 1). Note that assuming only $U_{\max}(A) \perp V_{\max}(A)$, it is possible to follow the same construction as in the proof of Lemma 7, yielding the block structure of $A$ unitarily similar to $(\|A\|\,(I_m \otimes J)) \oplus B$ with $\|B\| < \|A\|$. However, if $W(B)$ is not contained in $W(\|A\|\, J)$, then $r(A) = r(B) > \frac{1}{2}\|A\|$, and $A$ is not half-radial. This can be illustrated on Example 2, where clearly $B = 0.9$, and thus $\|B\| = r(B) = 0.9$ while $\|A\| = 1$.

Based on the previous lemma, half-radial matrices can be characterized using properties of their field of values $W(A)$.

Lemma 8. It holds that $\|A\| = 2\,r(A)$ if and only if $W(A)$ is the disk with the center at zero and the radius $\frac{1}{2}\|A\|$.

Proof. If $\|A\| = 2\,r(A)$, then by Lemma 7,

    $W(A) = W\left( \left( \|A\|\,(I_m \otimes J) \right) \oplus B \right) = \mathrm{cvx}\left( W(\|A\|\, J) \cup W(B) \right).$

Since $r(B) \le \frac{1}{2}\|A\|$, and $W(\|A\|\, J)$ is the disk with the center at zero and the radius $\frac{1}{2}\|A\|$, we obtain $W(A) = \|A\|\, W(J)$. Therefore, $W(A)$ is the disk with the radius $\frac{1}{2}\|A\|$. Trivially, if $W(A)$ is the disk of the given properties, then $\|A\| = 2\,r(A)$.

Note that since $W\left( \frac{A + A^*}{2} \right) = \mathrm{Re}\,(W(A))$, the previous lemma implies that for a half-radial $A$ we always have $r\left( \frac{A + A^*}{2} \right) = r(A)$, i.e., $\|A + A^*\| = \|A\|$.

To summarize the main results, we conclude this section by the following theorem giving several necessary and sufficient conditions characterizing half-radial matrices.

Theorem 9. Let $A \in \mathbb{C}^{n \times n}$ be a nonzero matrix. Then the following conditions are equivalent:

1. $\|A\| = 2\,r(A)$,
2. $\Theta_A$ is non-empty,
3. $\Theta_A = \Omega_A$,
4. for every $v \in V_{\max}(A)$, $\|v\| = 1$, it holds that

    $v \in \mathcal{R}(A^*) \cap \mathcal{N}(A^*), \qquad \frac{Av}{\|A\|} \in \mathcal{R}(A) \cap \mathcal{N}(A), \qquad \text{and} \quad z = \frac{1}{\sqrt{2}}\left( v + \frac{Av}{\|A\|} \right) \ \text{maximizes} \ |\langle Az, z \rangle|,$    (12)

5. there exists $v \in V_{\max}(A)$, $\|v\| = 1$, such that (12) holds,
6. $A$ is unitarily similar to the matrix $(\|A\|\,(I_m \otimes J)) \oplus B$, where $m$ is the dimension of $V_{\max}(A)$, $\|B\| < \|A\|$ and $r(B) \le \frac{1}{2}\|A\|$,
7. $W(A)$ is the disk with the center at zero and the radius $\frac{1}{2}\|A\|$.

4. Consequences on Crouzeix's conjecture

In [6], Crouzeix proved that for any square matrix $A$ and any polynomial $p$,

    $\|p(A)\| \le c \max_{z \in W(A)} |p(z)|,$    (13)

where $c = 11.08$. Earlier, in [7], he conjectured that the constant could be replaced by $c = 2$. Recently, it has been proven in [8] that the inequality (13) holds with the constant $c = 1 + \sqrt{2}$. Still, it remains an open question whether Crouzeix's inequality

    $\|p(A)\| \le 2 \max_{z \in W(A)} |p(z)|$    (14)

is satisfied for any square matrix $A$ and any polynomial $p$. Based on the results in [9], it has been shown in [10] that if the field of values $W(A)$ is a disk, then (14) holds. Thus, for half-radial matrices we conclude the following.

Lemma 10. Half-radial matrices satisfy Crouzeix's inequality (14). Moreover, the bound with the constant $2$ is attained for the polynomial $p(z) = z$.

Proof. Let $A$ be half-radial. By Lemma 8, $W(A)$ is a disk, giving the inequality (14). Furthermore, for $p(z) = z$ we have

    $\|p(A)\| = \|A\| = 2\,r(A) = 2 \max_{z \in W(A)} |z| = 2 \max_{z \in W(A)} |p(z)|.$

There are other matrices for which the inequality (14) holds and the bound is attainable for some polynomial $p$. In particular, for the Choi-Crouzeix matrix of the form

    $C = \begin{bmatrix} 0 & \sqrt{2} & & & \\ & 0 & 1 & & \\ & & \ddots & \ddots & \\ & & & 0 & \sqrt{2} \\ & & & & 0 \end{bmatrix} \in \mathbb{R}^{n \times n}, \qquad n > 2,$    (15)

the polynomial is $p(z) = z^{n-1}$; see [11] and [12]. Note that $C^{n-1} = 2\, e_1 e_n^T$, so that $C^{n-1}$ is unitarily similar (via a permutation matrix) to the matrix $2\,(J \oplus 0)$ for $J$ defined in (11). Hence

    $\|C^{n-1}\| = 2\, r(C^{n-1}),$

and thus $C^{n-1}$ is half-radial. More generally, we can prove the following result.

Lemma 11. It holds that

    $\|p(A)\| = 2 \max_{z \in W(A)} |p(z)|$    (16)

for $p(z) = z^k$ and some $k \ge 1$, if and only if $A^k$ is half-radial. Moreover, if $A^k$ is half-radial, then $r(A^k) = r(A)^k$.

Proof. If (16) holds for $p(z) = z^k$, then using $r(A^k) \le r(A)^k$ we obtain

    $\|A^k\| \le 2\, r(A^k) \le 2\, r(A)^k = 2 \max_{z \in W(A)} |z^k| = \|A^k\|.$

Hence $2\, r(A^k) = \|A^k\|$ and $A^k$ is half-radial. The opposite implication follows from Lemma 10. As a consequence, if $A^k$ is half-radial, then (16) holds and thus also $r(A^k) = r(A)^k$.
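The claim about the Choi-Crouzeix matrix can be checked numerically. The sketch below assumes the superdiagonal form $(\sqrt{2}, 1, \ldots, 1, \sqrt{2})$ for the matrix (15); the power $C^{n-1}$ then collapses to $2\, e_1 e_n^T$, which is half-radial:

```python
import numpy as np

def numerical_radius(A, n_angles=720):
    """r(A) = max over theta of lambda_max((e^{i theta} A + e^{-i theta} A^*)/2)."""
    A = np.asarray(A, dtype=complex)
    return max(
        np.linalg.eigvalsh((np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2)[-1]
        for t in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    )

n = 5
superdiag = np.concatenate(([np.sqrt(2.0)], np.ones(n - 3), [np.sqrt(2.0)]))
C = np.diag(superdiag, k=1)            # superdiagonal (sqrt(2), 1, ..., 1, sqrt(2))
M = np.linalg.matrix_power(C, n - 1)   # product of the superdiagonal: 2 e1 en^T
```

The only surviving entry of $M$ is $(1, n)$ with value $\sqrt{2} \cdot 1 \cdots 1 \cdot \sqrt{2} = 2$, so $\|M\| = 2$ while $r(M) = 1$.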

The previous lemma can be applied to nilpotent matrices having the strictly upper triangular structure

    $A = \begin{bmatrix} 0 & * & \cdots & * \\ & \ddots & \ddots & \vdots \\ & & 0 & * \\ & & & 0 \end{bmatrix}.$    (17)

In particular, if $A \in \mathbb{C}^{n \times n}$ of the form (17) is nilpotent with the index $n$, i.e., $A^n = 0$ while $A^{n-1} \neq 0$, then $A^{n-1} = \alpha\, e_1 e_n^T$ for some $\alpha \neq 0$. Obviously, $A^{n-1}$ is half-radial and (16) holds for $p(z) = z^{n-1}$. We can conclude that if Crouzeix's inequality (14) holds for matrices having the structure (17), then the upper bound in (14) is attained for the polynomial $p(z) = z^{n-1}$. Note that this result is not in agreement with the conjecture by Greenbaum and Overton [12, p. 239] predicting that if the bound (16) is attained for the polynomial $p(z) = z^{n-1}$, then $\alpha A$ (for some scalar $\alpha$) is unitarily similar to the Choi-Crouzeix matrix (15).

5. Conclusions

In this paper, we have investigated half-radial matrices, and provided several equivalent characterizations based on their properties. Our results reveal that half-radial matrices represent a very special class of matrices. This leads us to the question whether the constant $2$ in the inequality $\|A\| \le 2\,r(A)$ can be improved for some larger classes of nonnormal matrices. For example, since the constant $2$ corresponds to matrices with $0 \in W(A)$, one can think about an improvement of the bound if $0 \notin W(A)$.

Finally, our results suggest that half-radial matrices are related to the case when the upper bound in Crouzeix's inequality can be attained for some polynomial. This is supported by one of the conjectures of Greenbaum and Overton [12, p. 242] predicting that the upper bound in Crouzeix's inequality can be attained only if $p$ is a monomial; see also our Lemma 11. A deeper analysis of this phenomenon needs further research which is, however, beyond the scope of this paper.

6. Acknowledgment

This research was supported by the Grant Agency of the Czech Republic under the grant no. J. We thank Marie Kubínová for careful reading of an earlier version of this manuscript and for pointing us to Example 2.

References

[1] R. A. Horn, C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1994. Corrected reprint of the 1991 original.

[2] R. A. Horn, C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1990. Corrected reprint of the 1985 original.

[3] R. Winther, Zur Theorie der beschränkten Bilinearformen, 17 (1) (1929).

[4] M. Goldberg, G. Zwas, On matrices having equal spectral radius and spectral norm, Linear Algebra and its Applications 8 (5) (1974).

[5] K. E. Gustafson, D. K. M. Rao, Numerical Range: The Field of Values of Linear Operators and Matrices, Universitext, Springer-Verlag, New York, 1997.

[6] M. Crouzeix, Numerical range and functional calculus in Hilbert space, J. Funct. Anal. 244 (2) (2007).

[7] M. Crouzeix, Bounds for analytical functions of matrices, Integral Equations Operator Theory 48 (4) (2004).

[8] M. Crouzeix, C. Palencia, The numerical range is a $(1+\sqrt{2})$-spectral set, SIAM J. Matrix Anal. Appl. 38 (2) (2017).

[9] K. Okubo, T. Ando, Constants related to operators of class $C_\rho$, Manuscripta Math. 16 (4) (1975).

[10] C. Badea, M. Crouzeix, B. Delyon, Convex domains and K-spectral sets, Math. Z. 252 (2) (2006).

[11] D. Choi, A proof of Crouzeix's conjecture for a class of matrices, Linear Algebra Appl. 438 (8) (2013).

[12] A. Greenbaum, M. L. Overton, Numerical investigation of Crouzeix's conjecture, Linear Algebra and its Applications 542 (2018).


More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Frame Diagonalization of Matrices

Frame Diagonalization of Matrices Frame Diagonalization of Matrices Fumiko Futamura Mathematics and Computer Science Department Southwestern University 00 E University Ave Georgetown, Texas 78626 U.S.A. Phone: + (52) 863-98 Fax: + (52)

More information

Linear Algebra in Actuarial Science: Slides to the lecture

Linear Algebra in Actuarial Science: Slides to the lecture Linear Algebra in Actuarial Science: Slides to the lecture Fall Semester 2010/2011 Linear Algebra is a Tool-Box Linear Equation Systems Discretization of differential equations: solving linear equations

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

Basic Calculus Review

Basic Calculus Review Basic Calculus Review Lorenzo Rosasco ISML Mod. 2 - Machine Learning Vector Spaces Functionals and Operators (Matrices) Vector Space A vector space is a set V with binary operations +: V V V and : R V

More information

Spectral inequalities and equalities involving products of matrices

Spectral inequalities and equalities involving products of matrices Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department

More information

Fall TMA4145 Linear Methods. Exercise set Given the matrix 1 2

Fall TMA4145 Linear Methods. Exercise set Given the matrix 1 2 Norwegian University of Science and Technology Department of Mathematical Sciences TMA445 Linear Methods Fall 07 Exercise set Please justify your answers! The most important part is how you arrive at an

More information

Stat 159/259: Linear Algebra Notes

Stat 159/259: Linear Algebra Notes Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) SVD Fall 2015 1 / 13 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

2. Review of Linear Algebra

2. Review of Linear Algebra 2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

More information

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions

YORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For

More information

Convexity of the Joint Numerical Range

Convexity of the Joint Numerical Range Convexity of the Joint Numerical Range Chi-Kwong Li and Yiu-Tung Poon October 26, 2004 Dedicated to Professor Yik-Hoi Au-Yeung on the occasion of his retirement. Abstract Let A = (A 1,..., A m ) be an

More information

Tutorials in Optimization. Richard Socher

Tutorials in Optimization. Richard Socher Tutorials in Optimization Richard Socher July 20, 2008 CONTENTS 1 Contents 1 Linear Algebra: Bilinear Form - A Simple Optimization Problem 2 1.1 Definitions........................................ 2 1.2

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors /88 Chia-Ping Chen Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Eigenvalue Problem /88 Eigenvalue Equation By definition, the eigenvalue equation for matrix

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

Math 408 Advanced Linear Algebra

Math 408 Advanced Linear Algebra Math 408 Advanced Linear Algebra Chi-Kwong Li Chapter 4 Hermitian and symmetric matrices Basic properties Theorem Let A M n. The following are equivalent. Remark (a) A is Hermitian, i.e., A = A. (b) x

More information

Interpolating the arithmetic geometric mean inequality and its operator version

Interpolating the arithmetic geometric mean inequality and its operator version Linear Algebra and its Applications 413 (006) 355 363 www.elsevier.com/locate/laa Interpolating the arithmetic geometric mean inequality and its operator version Rajendra Bhatia Indian Statistical Institute,

More information

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013.

The University of Texas at Austin Department of Electrical and Computer Engineering. EE381V: Large Scale Learning Spring 2013. The University of Texas at Austin Department of Electrical and Computer Engineering EE381V: Large Scale Learning Spring 2013 Assignment Two Caramanis/Sanghavi Due: Tuesday, Feb. 19, 2013. Computational

More information

Linear algebra 2. Yoav Zemel. March 1, 2012

Linear algebra 2. Yoav Zemel. March 1, 2012 Linear algebra 2 Yoav Zemel March 1, 2012 These notes were written by Yoav Zemel. The lecturer, Shmuel Berger, should not be held responsible for any mistake. Any comments are welcome at zamsh7@gmail.com.

More information

Means of unitaries, conjugations, and the Friedrichs operator

Means of unitaries, conjugations, and the Friedrichs operator J. Math. Anal. Appl. 335 (2007) 941 947 www.elsevier.com/locate/jmaa Means of unitaries, conjugations, and the Friedrichs operator Stephan Ramon Garcia Department of Mathematics, Pomona College, Claremont,

More information

Chap 3. Linear Algebra

Chap 3. Linear Algebra Chap 3. Linear Algebra Outlines 1. Introduction 2. Basis, Representation, and Orthonormalization 3. Linear Algebraic Equations 4. Similarity Transformation 5. Diagonal Form and Jordan Form 6. Functions

More information

LinGloss. A glossary of linear algebra

LinGloss. A glossary of linear algebra LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal

More information

Inner products and Norms. Inner product of 2 vectors. Inner product of 2 vectors x and y in R n : x 1 y 1 + x 2 y x n y n in R n

Inner products and Norms. Inner product of 2 vectors. Inner product of 2 vectors x and y in R n : x 1 y 1 + x 2 y x n y n in R n Inner products and Norms Inner product of 2 vectors Inner product of 2 vectors x and y in R n : x 1 y 1 + x 2 y 2 + + x n y n in R n Notation: (x, y) or y T x For complex vectors (x, y) = x 1 ȳ 1 + x 2

More information

5 Compact linear operators

5 Compact linear operators 5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.

More information

Notes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T.

Notes on singular value decomposition for Math 54. Recall that if A is a symmetric n n matrix, then A has real eigenvalues A = P DP 1 A = P DP T. Notes on singular value decomposition for Math 54 Recall that if A is a symmetric n n matrix, then A has real eigenvalues λ 1,, λ n (possibly repeated), and R n has an orthonormal basis v 1,, v n, where

More information

MATH Linear Algebra

MATH Linear Algebra MATH 304 - Linear Algebra In the previous note we learned an important algorithm to produce orthogonal sequences of vectors called the Gramm-Schmidt orthogonalization process. Gramm-Schmidt orthogonalization

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

The Jordan Canonical Form

The Jordan Canonical Form The Jordan Canonical Form The Jordan canonical form describes the structure of an arbitrary linear transformation on a finite-dimensional vector space over an algebraically closed field. Here we develop

More information

MATH 5524 MATRIX THEORY Problem Set 5

MATH 5524 MATRIX THEORY Problem Set 5 MATH 554 MATRIX THEORY Problem Set 5 Posted Tuesday April 07. Due Tuesday 8 April 07. [Late work is due on Wednesday 9 April 07.] Complete any four problems, 5 points each. Recall the definitions of the

More information

Chapter 7: Symmetric Matrices and Quadratic Forms

Chapter 7: Symmetric Matrices and Quadratic Forms Chapter 7: Symmetric Matrices and Quadratic Forms (Last Updated: December, 06) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved

More information

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax = . (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Notes on Eigenvalues, Singular Values and QR

Notes on Eigenvalues, Singular Values and QR Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square

More information

Introduction to Numerical Linear Algebra II

Introduction to Numerical Linear Algebra II Introduction to Numerical Linear Algebra II Petros Drineas These slides were prepared by Ilse Ipsen for the 2015 Gene Golub SIAM Summer School on RandNLA 1 / 49 Overview We will cover this material in

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

b jσ(j), Keywords: Decomposable numerical range, principal character AMS Subject Classification: 15A60

b jσ(j), Keywords: Decomposable numerical range, principal character AMS Subject Classification: 15A60 On the Hu-Hurley-Tam Conjecture Concerning The Generalized Numerical Range Che-Man Cheng Faculty of Science and Technology, University of Macau, Macau. E-mail: fstcmc@umac.mo and Chi-Kwong Li Department

More information

1 Linear Algebra Problems

1 Linear Algebra Problems Linear Algebra Problems. Let A be the conjugate transpose of the complex matrix A; i.e., A = A t : A is said to be Hermitian if A = A; real symmetric if A is real and A t = A; skew-hermitian if A = A and

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

Singular Value Decomposition

Singular Value Decomposition Chapter 5 Singular Value Decomposition We now reach an important Chapter in this course concerned with the Singular Value Decomposition of a matrix A. SVD, as it is commonly referred to, is one of the

More information

Optimization Theory. A Concise Introduction. Jiongmin Yong

Optimization Theory. A Concise Introduction. Jiongmin Yong October 11, 017 16:5 ws-book9x6 Book Title Optimization Theory 017-08-Lecture Notes page 1 1 Optimization Theory A Concise Introduction Jiongmin Yong Optimization Theory 017-08-Lecture Notes page Optimization

More information

Maximizing the numerical radii of matrices by permuting their entries

Maximizing the numerical radii of matrices by permuting their entries Maximizing the numerical radii of matrices by permuting their entries Wai-Shun Cheung and Chi-Kwong Li Dedicated to Professor Pei Yuan Wu. Abstract Let A be an n n complex matrix such that every row and

More information

Key words. conjugate gradients, normwise backward error, incremental norm estimation.

Key words. conjugate gradients, normwise backward error, incremental norm estimation. Proceedings of ALGORITMY 2016 pp. 323 332 ON ERROR ESTIMATION IN THE CONJUGATE GRADIENT METHOD: NORMWISE BACKWARD ERROR PETR TICHÝ Abstract. Using an idea of Duff and Vömel [BIT, 42 (2002), pp. 300 322

More information

Problem # Max points possible Actual score Total 120

Problem # Max points possible Actual score Total 120 FINAL EXAMINATION - MATH 2121, FALL 2017. Name: ID#: Email: Lecture & Tutorial: Problem # Max points possible Actual score 1 15 2 15 3 10 4 15 5 15 6 15 7 10 8 10 9 15 Total 120 You have 180 minutes to

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 16: Eigenvalue Problems; Similarity Transformations Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 18 Eigenvalue

More information

Part 1a: Inner product, Orthogonality, Vector/Matrix norm

Part 1a: Inner product, Orthogonality, Vector/Matrix norm Part 1a: Inner product, Orthogonality, Vector/Matrix norm September 19, 2018 Numerical Linear Algebra Part 1a September 19, 2018 1 / 16 1. Inner product on a linear space V over the number field F A map,

More information

Mathematical foundations - linear algebra

Mathematical foundations - linear algebra Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar

More information

The Singular Value Decomposition and Least Squares Problems

The Singular Value Decomposition and Least Squares Problems The Singular Value Decomposition and Least Squares Problems Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 27, 2009 Applications of SVD solving

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

A Review of Linear Algebra

A Review of Linear Algebra A Review of Linear Algebra Mohammad Emtiyaz Khan CS,UBC A Review of Linear Algebra p.1/13 Basics Column vector x R n, Row vector x T, Matrix A R m n. Matrix Multiplication, (m n)(n k) m k, AB BA. Transpose

More information

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det

1. What is the determinant of the following matrix? a 1 a 2 4a 3 2a 2 b 1 b 2 4b 3 2b c 1. = 4, then det What is the determinant of the following matrix? 3 4 3 4 3 4 4 3 A 0 B 8 C 55 D 0 E 60 If det a a a 3 b b b 3 c c c 3 = 4, then det a a 4a 3 a b b 4b 3 b c c c 3 c = A 8 B 6 C 4 D E 3 Let A be an n n matrix

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information

Lecture: Linear algebra. 4. Solutions of linear equation systems The fundamental theorem of linear algebra

Lecture: Linear algebra. 4. Solutions of linear equation systems The fundamental theorem of linear algebra Lecture: Linear algebra. 1. Subspaces. 2. Orthogonal complement. 3. The four fundamental subspaces 4. Solutions of linear equation systems The fundamental theorem of linear algebra 5. Determining the fundamental

More information

Determining Unitary Equivalence to a 3 3 Complex Symmetric Matrix from the Upper Triangular Form. Jay Daigle Advised by Stephan Garcia

Determining Unitary Equivalence to a 3 3 Complex Symmetric Matrix from the Upper Triangular Form. Jay Daigle Advised by Stephan Garcia Determining Unitary Equivalence to a 3 3 Complex Symmetric Matrix from the Upper Triangular Form Jay Daigle Advised by Stephan Garcia April 4, 2008 2 Contents Introduction 5 2 Technical Background 9 3

More information

Lecture 3: Review of Linear Algebra

Lecture 3: Review of Linear Algebra ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters, transforms,

More information

There are six more problems on the next two pages

There are six more problems on the next two pages Math 435 bg & bu: Topics in linear algebra Summer 25 Final exam Wed., 8/3/5. Justify all your work to receive full credit. Name:. Let A 3 2 5 Find a permutation matrix P, a lower triangular matrix L with

More information

Algebra C Numerical Linear Algebra Sample Exam Problems

Algebra C Numerical Linear Algebra Sample Exam Problems Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for

Bindel, Fall 2016 Matrix Computations (CS 6210) Notes for 1 Logistics Notes for 2016-08-29 General announcement: we are switching from weekly to bi-weekly homeworks (mostly because the course is much bigger than planned). If you want to do HW but are not formally

More information