Iyad T. Abu-Jeib and Thomas S. Shores


NEW ZEALAND JOURNAL OF MATHEMATICS
Volume 32 (2003), 9

ON PROPERTIES OF MATRIX I^(-1) OF SINC METHODS

Iyad T. Abu-Jeib and Thomas S. Shores

(Received February 2002)

Abstract. In this paper, we study the determinant and eigen properties of I^(-1), an important Toeplitz matrix used in Sinc methods. Some Sinc method applications depend on the non-singularity of this matrix and on the location of its eigenvalues. Among the theorems we prove is that I^(-1) is nonsingular. We also show that if λ is a pure imaginary eigenvalue of I^(-1), then |λ| > 1/π.

1. Introduction

Our goal in this paper is to study properties of an important Toeplitz matrix in the theory of Sinc indefinite integration and Sinc convolution. This matrix is denoted I^(-1). Actually, this is an infinite family of Toeplitz matrices, generated by an infinite sequence of diagonal coefficients; definitions are given below. Although much is known about the global and asymptotic behavior of the eigenvalues of such families in general, as noted in [3, p. 87], much less is known about the behavior of individual eigenvalues. We are particularly interested in a conjecture of Frank Stenger: I^(-1) is diagonalizable and its eigenvalues are located in the open right half plane (see [6]). Both properties are needed for the convergence of certain convolution algorithms and differential equation solvers based on Sinc theory developed in [7]. As noted in [7], it has been determined numerically that these properties hold for all such matrices of order at most 512. Although this is sufficient for most practical purposes, the general problem remains open. It follows rather easily from the definition of I^(-1) that its eigenvalues lie in the closed right half-plane. One of the results we will prove in this paper is that if λ is a pure imaginary eigenvalue of I^(-1), then |λ| > 1/π. It follows that I^(-1) is always an invertible matrix.
Moreover, we will show that I^(-1) is at least close to diagonalizable in the sense that it is a rank one update of a highly structured skew-symmetric matrix with simple eigenvalues.

We use the following notation:
(1) The square constant matrix whose elements are all equal to ω is denoted by [ω].
(2) If (λ, x) is an eigenpair of the matrix A, then we write λ ∈ evals(A) and x ∈ evecs(A). Also, we use the notation λ(A) to mean that λ is an eigenvalue of A.

1991 Mathematics Subject Classification 65F40, 65F15, 15A18, 15A42, 15A57.
Key words and phrases: Sinc, eigenvalue, eigenvector, singular value, determinant, skew symmetric, Toeplitz.

(3) The transpose (Hermitian transpose) of the matrix A is denoted by A^t (A^*), and the tuple notation x = (x_1, x_2, ..., x_n) represents the column vector [x_1, x_2, ..., x_n]^t in C^n.

Next, we introduce the sinc function, Sinc methods and I^(-1).

Definition 1.1. The sinc function is defined as follows:

sinc(x) = sin(πx)/(πx) for x ≠ 0,  and  sinc(0) = 1.

Sinc methods are a family of formulas based on the sinc function which provide accurate approximation of derivatives, definite and indefinite integrals, and convolutions. These methods were developed extensively by Frank Stenger and his students (see [8]). Sinc methods can be used to numerically solve differential equations with boundary layers, integrals over infinite intervals or with singular integrands, and ordinary or partial differential equations that have coefficients with singularities. All of these are numerical methods which depend on the sinc function defined above. For more information, see [4], [7], and [6]. For convolutions and integration, the following family of matrices is especially important.

Definition 1.2. I^(-1) is the n x n matrix defined as follows:

I^(-1) = [η_ij]_{i,j=1}^n,

where η_ij = e_{i-j}, e_k = 1/2 + s_k, and

s_k = ∫_0^k sinc(x) dx.

Thus, I^(-1) can be expressed in the form

I^(-1) = [1/2] + S,

where S is the following skew-symmetric Toeplitz matrix of dimension n:

S = [ 0        -s_1     -s_2     ...  -s_{n-1} ]
    [ s_1       0       -s_1     ...  -s_{n-2} ]
    [ s_2      s_1       0       ...  -s_{n-3} ]
    [ ...      ...      ...      ...   ...     ]
    [ s_{n-1}  s_{n-2}  s_{n-3}  ...   0       ]

Throughout this paper, S refers to the matrix defined above. Observe that S is a real skew-symmetric Toeplitz matrix. If we want to emphasize the dimension of S, we write S_n in place of S. Also note that the matrix [1/2] = (1/2) u u^t, with u = (1, 1, ..., 1), is a rank one matrix. This decomposition of I^(-1) is key to our analysis of its behavior.

2. Localization Results for I^(-1)

We have verified computationally (for dimensions up to 1000) that the eigenvalues of I^(-1) lie in the open right half plane.
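That computation is straightforward to reproduce. The following sketch (an editorial illustration assuming NumPy/SciPy, not code from the paper) assembles I^(-1) from Definition 1.2, evaluating s_k through the sine integral as s_k = Si(πk)/π, and examines the real parts of the eigenvalues.

```python
# Editorial illustration (not from the paper), assuming NumPy/SciPy:
# build I^(-1) = [1/2] + S of Definition 1.2 and inspect its spectrum.
import numpy as np
from scipy.linalg import toeplitz
from scipy.special import sici  # sici(t)[0] = Si(t) = integral_0^t sin(u)/u du

def sinc_integrals(n):
    # s_k = integral_0^k sinc(x) dx = Si(pi k)/pi for k = 0, ..., n-1
    return sici(np.pi * np.arange(n))[0] / np.pi

def I_inverse(n):
    s = sinc_integrals(n)
    S = toeplitz(s, -s)   # skew-symmetric Toeplitz with S[i, j] = s_{i-j}
    return 0.5 + S, S     # adding 0.5 entrywise realizes [1/2] + S

n = 64
A, S = I_inverse(n)
assert np.allclose(S, -S.T)              # S is skew-symmetric, as claimed
eigs = np.linalg.eigvals(A)
print(eigs.real.min(), eigs.real.max())  # real parts; the paper locates them in [0, n/2]
```

For dimensions of this size the computed real parts stay strictly positive, consistent with the computational verification mentioned above.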
The fact that in all cases they lie in the closed right half plane follows easily from the following elementary, but key result.

Lemma 2.1. Let (λ, x) be an eigenpair of A = [ω] + M, with ω real and M skew-Hermitian, x = (x_1, x_2, ..., x_n) and ||x|| = 1. Then

Re(λ) = ω |∑_{j=1}^n x_j|^2  and  i Im(λ) = x^* M x.

Proof. With the given notation we have from Ax = λx that λ = x^*[ω]x + x^*Mx. However, x^*[ω]x = ω |∑_{j=1}^n x_j|^2 is real and x^*Mx is pure imaginary since M is skew-Hermitian. The result follows.

In particular, it follows from this lemma that the real part of an eigenvalue λ of I^(-1) is nonnegative. The lemma can also be used to further localize the eigenvalues of I^(-1) as follows: the real part of each eigenvalue of I^(-1) is between 0 and n/2. To see this, note that the eigenvalues of the rank one matrix [1/2] are n/2 and 0 (with multiplicity n - 1). Since ||x|| = 1, it follows that x^*[1/2]x ≤ n/2. Thus the issue of eigenvalues of I^(-1) lying in the open right half plane is reduced to showing that no eigenvalues of I^(-1) lie on the imaginary axis. The main result of this section shows that if there is a purely imaginary eigenvalue λ = bi of I^(-1), then |b| > 1/π.

Definition 2.2. If v = [v_0, v_1, ..., v_{n-1}]^t is an eigenvector of A, then the corresponding eigenpolynomial is V(z) = v_0 + v_1 z + ... + v_{n-1} z^{n-1}. A function f(x) is said to be a generator for a matrix M = [m_ij] of dimension n if m_ij = c_{i-j} for all 1 ≤ i, j ≤ n, where

c_k = (1/2π) ∫_{-π}^{π} f(x) e^{-ikx} dx,  k = 0, ±1, ..., ±(n-1).

In the following lemma, we prove that S is the Toeplitz matrix generated by the complex function i/x. In what follows, the integral sign denotes the Cauchy principal value and not the Lebesgue integral, which has been used by other authors in dealing with generators.

Lemma 2.3. i/x is a generator of S.

Proof. We want f(x) = i/x to satisfy

(1/2π) ∫_{-π}^{π} f(x) e^{-irx} dx = s_r = ∫_0^r sin(πx)/(πx) dx,  r = 0, 1, ..., n - 1.

Now we claim that

(i/2π) ∫_{-π}^{π} (cos(rx) - i sin(rx))/x dx = ∫_0^r sin(πx)/(πx) dx.   (1)
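The claim just displayed is easy to sanity-check numerically (an editorial aside, not part of the proof; it assumes SciPy). The odd cosine term contributes nothing to the principal value, so the left hand side reduces to an ordinary integral that can be compared against s_r = Si(πr)/π:

```python
# Editorial sanity check of the displayed claim, assuming SciPy.
# After the odd cos(rx)/x term drops out of the principal value, the
# left hand side equals (1/pi) * integral_0^pi sin(r x)/x dx, and the
# right hand side is s_r = Si(pi r)/pi.
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

def lhs(r):
    # sin(r x)/x extends continuously to x = 0 with value r
    integrand = lambda x: np.sin(r * x) / x if x != 0.0 else float(r)
    return quad(integrand, 0.0, np.pi)[0] / np.pi

def s(r):
    return sici(np.pi * r)[0] / np.pi

for r in range(8):
    assert abs(lhs(r) - s(r)) < 1e-10
print("claim verified for r = 0, 1, ..., 7")
```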

First notice that if r = 0, then the left hand side of (1) is (i/2π) ∫_{-π}^{π} (1/x) dx, which is equal to zero. And if r ≠ 0, then cos(rx)/x is odd, so its principal value integral vanishes as well. The left hand side of (1) is now

(1/2π) ∫_{-π}^{π} sin(rx)/x dx,   (2)

which is an ordinary integral, since the integrand is continuous. In fact, the integrand is even. Thus, (2) becomes

(1/π) ∫_0^π sin(rx)/x dx.   (3)

Now substitute x = (π/r) y in (3) to get

∫_0^r sin(πy)/(πy) dy,

which proves the lemma.

To study the determinant and eigen properties of S, it suffices to study the matrix T = -iS instead. T is generated by 1/x. It is Hermitian and has the advantage of being generated by a real-valued function. These facts are helpful in the following theorem, the proof of which is modeled along the lines of the corresponding theorem of [9].

Theorem 2.4. If µ is a pure imaginary eigenvalue of I^(-1), then |µ| > 1/π.

Proof. Let (µ, u) be an eigenpair of I^(-1), where µ = ib, b is real, and, without loss of generality, b is nonnegative. It follows from the identity I^(-1) u = ibu that

u^*[1/2]u + u^* S u = ib ||u||^2.

Since the right hand side and u^* S u are purely imaginary, it follows that 0 = u^*[1/2]u = (1/2) |∑_{j=1}^n u_j|^2. Therefore [1/2]u = 0, from which it follows that (µ, u) is also an eigenpair of S and (0, u) is an eigenpair of [1/2]. This means that (b, u) is an eigenpair of T. Now if U(z) is the eigenpolynomial associated with the eigenvector u, then it follows that 1 is a zero of U(z). For the fact that (0, u) is an eigenpair of [1/2] implies that the sum of the coordinates of u is zero, which means that the sum of the coefficients of U(z) is zero. Thus, we can write U(z) = (z - 1) Û(z), for some polynomial Û(z). Now let V(z) be a polynomial corresponding to a vector v and which has -1 as a zero. Then V(z) can be factored as V(z) = (z + 1) V̂(z). Thus,

U(z) conj(V(z)) = (z - 1)(conj(z) + 1) Û(z) conj(V̂(z)).

Now notice that if z = e^{ix}, then (z - 1)(conj(z) + 1) = 2i sin(x). Since f(x) = 1/x is a generator of T, it follows that

⟨Tu, v⟩ = (i/π) ∫_{-π}^{π} (sin(x)/x) Û(e^{ix}) conj(V̂(e^{ix})) dx.

On the other hand,

⟨Tu, v⟩ = b ⟨u, v⟩ = (ib/π) ∫_{-π}^{π} sin(x) Û(e^{ix}) conj(V̂(e^{ix})) dx.

Thus,

∫_{-π}^{π} (sin(x)/x - b sin(x)) Û(e^{ix}) conj(V̂(e^{ix})) dx = 0.

Observe that V̂(z) can be chosen to be any polynomial of degree at most n - 2. Make the choice V̂(z) = Û(z) to obtain

∫_{-π}^{π} (sin(x)/x - b sin(x)) |Û(e^{ix})|^2 dx = 0.

Since u is an eigenvector, the polynomial Û(z) is nonzero. It follows that the integrand above is positive on the open interval (-π, 0). Moreover, the factor sin(x) |Û(e^{ix})|^2 is also positive on the interval (0, π). It follows that 1/x - b < 0 for some x in the interval (0, π). This implies that b > 1/π and completes the proof.

It follows from Theorem 2.4 that zero is excluded as a possible eigenvalue of I^(-1).

Corollary 2.5. For all dimensions n, the matrix I^(-1) is nonsingular.

3. Diagonalizability of S_n

The principal theorem of this section is an analogue of the corresponding theorem of [9]. Trench's theorem is valid only for real-valued Lebesgue-integrable functions on (-π, π) which are not constant on a set of measure 2π, so it cannot be applied to f(x) = 1/x and hence to the matrix generated by i f(x). In the following theorem, we shall use the notation S_n instead of S to emphasize the dimension (which is n), and use T_n to represent -iS_n.

Theorem 3.1. The eigenvalues of -iS_n = T_n = (t_{r-s})_{r,s=1}^n are simple, where

t_k = (1/2π) ∫_{-π}^{π} (1/x) e^{-ikx} dx,  k = 0, ±1, ..., ±(n-1).   (4)

Proof. Let f(x) = 1/x. First, note that f(x) is a generator of T_n. We will show that if λ_r is an eigenvalue of T_n of multiplicity greater than or equal to 2, then f(x) - λ_r must change sign at least 3 times in (-π, π). Clearly f(x) - λ_r does not change sign more than twice in (-π, π) for any choice of λ_r. This will imply that all eigenvalues of T_n are simple. Note that all of the following integrals are finite, because the Cauchy principal value of any integral whose integrand is a product of a polynomial in e^{ix} and 1/x is finite. Now for each vector v = [v_1, v_2, ..., v_n]^t in C^n, associate the polynomial

V(z) = [1, z, z^2, ..., z^{n-1}] v = ∑_{j=1}^n v_j z^{j-1}.

If u and v are in C^n, then the standard inner product on C^n satisfies

⟨u, v⟩ = (1/2π) ∫_{-π}^{π} U(z) conj(V(z)) dx,   (5)

where z = e^{ix}. Also from (4), we obtain

⟨T_n u, v⟩ = (1/2π) ∫_{-π}^{π} f(x) U(z) conj(V(z)) dx.   (6)
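Identities (5) and (6) can be confirmed numerically for f(x) = 1/x and small n (an editorial sketch, not from the paper; it assumes SciPy, whose quad routine computes Cauchy principal values through the weight='cauchy' option):

```python
# Editorial numerical check of (5) and (6) for the generator f(x) = 1/x,
# assuming SciPy. quad(..., weight='cauchy', wvar=0) returns the Cauchy
# principal value of integrand(x)/(x - 0), which is exactly what (6) needs.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import toeplitz
from scipy.special import sici

n = 4
s = sici(np.pi * np.arange(n))[0] / np.pi
T = -1j * toeplitz(s, -s)        # T_n = -i S_n, a Hermitian Toeplitz matrix

rng = np.random.default_rng(0)
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def poly(c, x):
    # associated polynomial: sum_j c_j z^(j-1) with z = e^{ix}
    z = np.exp(1j * x)
    return sum(cj * z**j for j, cj in enumerate(c))

g = lambda x: poly(u, x) * np.conj(poly(v, x))

# (5): <u, v> = (1/2 pi) * integral of U(z) conj(V(z))
lhs5 = np.vdot(v, u)             # sum_j u_j conj(v_j)
rhs5 = (quad(lambda x: g(x).real, -np.pi, np.pi)[0] +
        1j * quad(lambda x: g(x).imag, -np.pi, np.pi)[0]) / (2 * np.pi)
assert abs(lhs5 - rhs5) < 1e-8

# (6): <T_n u, v> = (1/2 pi) * PV integral of (1/x) U(z) conj(V(z))
lhs6 = np.vdot(v, T @ u)
rhs6 = (quad(lambda x: g(x).real, -np.pi, np.pi, weight='cauchy', wvar=0.0)[0] +
        1j * quad(lambda x: g(x).imag, -np.pi, np.pi, weight='cauchy', wvar=0.0)[0]) / (2 * np.pi)
assert abs(lhs6 - rhs6) < 1e-6
print("identities (5) and (6) hold numerically for n =", n)
```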

Now let λ_1 ≤ λ_2 ≤ ... ≤ λ_n be the eigenvalues of the Hermitian matrix T_n, with corresponding eigenvectors x_1, x_2, ..., x_n, and corresponding eigenpolynomials X_i(z) = [1, z, ..., z^{n-1}] x_i, 1 ≤ i ≤ n. From (5) we have

(1/2π) ∫_{-π}^{π} X_i(z) conj(X_j(z)) dx = δ_ij,  1 ≤ i, j ≤ n.   (7)

From (6) we also have

(1/2π) ∫_{-π}^{π} f(x) X_i(z) conj(X_j(z)) dx = δ_ij λ_i,  1 ≤ i, j ≤ n,   (8)

where δ_ij is the Kronecker delta. Thus, if λ_r is an eigenvalue of multiplicity m ≥ 1, then we claim that f(x) - λ_r must change sign at some point. This point must be 1/λ_r (if λ_r ≠ 0) or 0 (if λ_r = 0). Notice that multiplying (7) by λ_r, and taking i = j = r, yields

(1/2π) ∫_{-π}^{π} λ_r |X_r(z)|^2 dx = λ_r.   (9)

And by taking i = j = r in (8), we get

(1/2π) ∫_{-π}^{π} (1/x) |X_r(z)|^2 dx = λ_r.   (10)

Now subtract (9) from (10) to obtain

(1/2π) ∫_{-π}^{π} (1/x - λ_r) |X_r(z)|^2 dx = 0.   (11)

Now if f(x) - λ_r is of constant sign in (-π, π), say it is positive, then the integral (11) should be positive. This is impossible, which proves our assertion about the sign change of f(x) - λ_r.

Next suppose that m ≥ 2. The expression (1/x - λ_r) changes sign only at x_1 and x_2, where x_1 = 0 and x_2 = 1/λ_r. We will show that this assumption (that m ≥ 2) leads to a contradiction. Notice that it suffices to handle the case when λ_r ≠ 0, since nonzero eigenvalues of S_n are of the form ia, a ≠ 0, a ∈ R, and they occur in ± pairs. Therefore, if a (assume a > 0) is an eigenvalue of T_n of multiplicity greater than or equal to 2, then -a is an eigenvalue of multiplicity greater than or equal to 2. So assume that λ_r ≠ 0, and define

g(x) = (1/2π)(f(x) - λ_r).   (12)

We assert that the function h(x) defined by

h(x) = g(x) sin((x - x_1)/2) sin((x - x_2)/2)

does not change sign in (-π, π). To see why this is true, notice first that since x and x_i, i = 1, 2, are all in (-π, π), then (x - x_i)/2 is also in (-π, π). Thus, sin((x - x_i)/2) defined on (-π, π) can change sign

only when (x - x_i)/2 = 0, i.e., when x = x_i, for i = 1, 2. But g(x) changes sign only at these points. Consequently, h(x) can change sign only at these points also. To show that h does not change sign at these two points, it suffices to prove that the sign of h(x) does not change in a small neighborhood of x_i, for i = 1, 2. Notice that for each i, we can choose this neighborhood not to include x_j, for j ≠ i. Now write h as

h(x) = (1/2π) h_1(x) h_2(x),

where h_1(x) = sin(x/2)/x and h_2(x) = (1 - λ_r x) sin((x - x_2)/2). Recall that x_1 = 0 and x_2 = 1/λ_r. First, let us check the sign of h about x_1. But h_1 does not change sign at x_1, nor does h_2. Thus, h does not change sign at x_1. It remains to check the sign of h about x_2. Since h_1 does not change sign at x_2, we need only check the sign of h_2 about x_2. But when x < x_2, we have 1 - λ_r x > 0 and sin((x - x_2)/2) < 0. Thus, when x < x_2, h_2(x) is negative. On the other hand, when x > x_2, 1 - λ_r x < 0 and sin((x - x_2)/2) > 0. Thus, when x > x_2, h_2(x) is negative. Therefore, h_2 does not change sign at x_2, and hence h does not change sign in (-π, π).

Now suppose that λ_r = λ_{r+1} is of multiplicity greater than or equal to 2. For i = r, r + 1 and 1 ≤ j ≤ n, we have that ∫_{-π}^{π} g(x) X_i(z) conj(X_j(z)) dx = 0, because

∫_{-π}^{π} g(x) X_i(z) conj(X_j(z)) dx = (1/2π) ∫_{-π}^{π} f(x) X_i(z) conj(X_j(z)) dx - λ_r (1/2π) ∫_{-π}^{π} X_i(z) conj(X_j(z)) dx = λ_i δ_ij - λ_r δ_ij = (λ_i - λ_r) δ_ij.

Now if i = r, then the above integral is zero. It is also zero if i = r + 1, because λ_{r+1} = λ_r. Therefore,

∫_{-π}^{π} g(x) p(z) conj(X_j(z)) dx = 0,  1 ≤ j ≤ n,

where

p(z) = c_0 X_r(z) + c_1 X_{r+1}(z)

and c_0 and c_1 are constants, which are not both zero, and which satisfy c_0 X_r(1) + c_1 X_{r+1}(1) = 0. This means that p(z) is a nonzero polynomial with p(1) = 0. Now set Q(z) = p(z)(z - e^{ix_2})/(z - 1), which is a polynomial of degree less than or equal to n - 1, and obtain

∫_{-π}^{π} g(x) p(z) conj(Q(z)) dx = 0.   (13)

Thus,

∫_{-π}^{π} g(x) |p(z)/(z - 1)|^2 (z - 1)(conj(z) - e^{-ix_2}) dx = 0.

If we let

g_1(x) = g(x) |p(z)/(z - 1)|^2 = g(x) |∑_{l=0}^{1} c_l X_{r+l}(z)/(z - 1)|^2,

then the above equation becomes

∫_{-π}^{π} g_1(x) (z - 1)(conj(z) - e^{-ix_2}) dx = 0.   (14)

Now if z = e^{ix}, then

(z - 1)(conj(z) - e^{-ix_2}) = 4 e^{-ix_2/2} sin(x/2) sin((x - x_2)/2).

Notice that

e^{iθ} - e^{iφ} = e^{i(θ+φ)/2} (e^{i(θ-φ)/2} - e^{-i(θ-φ)/2}) = 2i e^{i(θ+φ)/2} sin((θ - φ)/2).

Therefore, (14) becomes

4 ∫_{-π}^{π} g_1(x) e^{-ix_2/2} sin(x/2) sin((x - x_2)/2) dx = 0.   (15)

Thus,

∫_{-π}^{π} g_1(x) sin(x/2) sin((x - x_2)/2) dx = 0.   (16)

This implies that

∫_{-π}^{π} α(z) g(x) sin(x/2) sin((x - x_2)/2) dx = 0,

where

α(z) = |p(z)/(z - 1)|^2.

Therefore,

∫_{-π}^{π} α(z) h(x) dx = 0.

Since h does not change sign in (-π, π) and α(z) ≥ 0, it must be that p(z)/(z - 1) ≡ 0. Therefore, p ≡ 0, a contradiction, proving that the multiplicity of any eigenvalue is 1.

The equalities S = iT and I^(-1) = [1/2] + S show that I^(-1) is at most a rank one update away from a matrix with simple eigenvalues, as stated in the introduction.

Corollary 3.2. The matrix I^(-1) can be expressed as the sum of a rank one matrix and a skew-symmetric matrix with simple eigenvalues.

We conclude by relating the determinants of S and I^(-1). The result that we need can be phrased in a somewhat more general context.

Theorem 3.3. Let M be a non-singular real skew-Hermitian matrix and A = [ω] + M, where ω is a nonzero real number. Then det(A) = det(M).
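Both the simplicity result of Theorem 3.1 and the determinant identity just stated invite a quick numerical check (an editorial sketch assuming SciPy, not code from the paper). A random real skew-symmetric matrix of even dimension is nonsingular with probability one, so it serves as a test case alongside S_n itself:

```python
# Editorial check, assuming SciPy: eigenvalues of T_n = -i S_n are simple,
# and det([w] + M) = det(M) for nonsingular real skew-symmetric M.
import numpy as np
from scipy.linalg import toeplitz
from scipy.special import sici

n = 12                                  # even, so S_n is nonsingular
s = sici(np.pi * np.arange(n))[0] / np.pi
S = toeplitz(s, -s)                     # skew-symmetric Toeplitz S_n
T = -1j * S                             # Hermitian T_n

lam = np.linalg.eigvalsh(T)             # real, returned in ascending order
assert np.min(np.diff(lam)) > 1e-8      # all eigenvalues simple

# Determinant identity with M = S_n and w = 1/2:
assert np.isclose(np.linalg.det(0.5 + S), np.linalg.det(S), rtol=1e-6)

# ... and with a random skew-symmetric M and a different nonzero w:
rng = np.random.default_rng(1)
B = rng.standard_normal((n, n))
M = B - B.T
assert np.isclose(np.linalg.det(2.0 + M), np.linalg.det(M), rtol=1e-6)
print("simplicity and determinant identity confirmed for n =", n)
```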

Proof. The rank one matrix M^{-1}[ω] has at most one nonzero eigenvalue, which is real. Hence all eigenvalues of M^{-1}[ω] are real. Let (λ, x) be an eigenpair for this matrix. From M^{-1}[ω]x = λx we see that x^*[ω]x = λ x^* M x. The right hand side of this identity is pure imaginary and the left hand side real, so each is zero. Thus x^*[ω]x = ω |∑_j x_j|^2 = 0, so that 0 = [ω]x = λ M x. Since M is non-singular, it follows that λ = 0. Therefore all eigenvalues of M^{-1}[ω] are zero. Thus the only eigenvalue of I + M^{-1}[ω] is 1, and

det(A) = det(M (I + M^{-1}[ω])) = det(M).

Corollary 3.4. Let n be even. Then det(I^(-1)_n) = det(S_n).

Proof. All eigenvalues of S_n occur in signed pairs ±ib, where b is real. If zero were an eigenvalue of S_n, it would be repeated (since n is even), which does not occur by Theorem 3.1. Therefore, S_n is nonsingular, and Theorem 3.3 applied to the equality I^(-1) = [1/2] + S gives the desired result.

References

1. Fabio Di Benedetto, Generalized updating problems and computation of the eigenvalues of rational Toeplitz matrices, Linear Algebra and Its Applications 267 (1997).
2. Douglas N. Clark, Perturbation and similarity of Toeplitz operators, Operator Theory: Advances and Applications 48 (1990).
3. T. Kailath and A. Sayed, Fast Reliable Algorithms for Matrices with Structure, SIAM, Philadelphia (1999).
4. J. Lund and K. Bowers, Sinc Methods for Quadrature and Differential Equations, SIAM, Philadelphia (1992).
5. John Makhoul, On the eigenvectors of symmetric Toeplitz matrices, IEEE Transactions on Acoustics, Speech and Signal Processing 29 (1981).
6. Frank Stenger, Matrices of Sinc methods, Journal of Computational and Applied Mathematics 86 (1997).
7. Frank Stenger, Collocating convolutions, Mathematics of Computation 64 (1995).
8. Frank Stenger, Numerical Methods Based on Sinc and Analytic Functions, Springer, New York (1993).
9. William F. Trench, Some special properties of Hermitian Toeplitz matrices, SIAM J. Matrix Anal. Appl. 15 (1994).

Iyad T. Abu-Jeib
Department of Mathematics and Computer Science
SUNY College at Fredonia
Fredonia, NY 14063
U.S.A.
abu-jeib@cs.fredonia.edu

Thomas S. Shores
Department of Mathematics and Statistics
University of Nebraska-Lincoln
Lincoln, NE
U.S.A.
tshores@math.unl.edu


More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition Philippe B. Laval KSU Fall 2015 Philippe B. Laval (KSU) SVD Fall 2015 1 / 13 Review of Key Concepts We review some key definitions and results about matrices that will

More information

Chapter 3 Transformations

Chapter 3 Transformations Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases

More information

1. General Vector Spaces

1. General Vector Spaces 1.1. Vector space axioms. 1. General Vector Spaces Definition 1.1. Let V be a nonempty set of objects on which the operations of addition and scalar multiplication are defined. By addition we mean a rule

More information

88 CHAPTER 3. SYMMETRIES

88 CHAPTER 3. SYMMETRIES 88 CHAPTER 3 SYMMETRIES 31 Linear Algebra Start with a field F (this will be the field of scalars) Definition: A vector space over F is a set V with a vector addition and scalar multiplication ( scalars

More information

Cayley-Hamilton Theorem

Cayley-Hamilton Theorem Cayley-Hamilton Theorem Massoud Malek In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n Let A be an n n matrix Although det (λ I n A

More information

Linear Algebra: Lecture notes from Kolman and Hill 9th edition.

Linear Algebra: Lecture notes from Kolman and Hill 9th edition. Linear Algebra: Lecture notes from Kolman and Hill 9th edition Taylan Şengül March 20, 2019 Please let me know of any mistakes in these notes Contents Week 1 1 11 Systems of Linear Equations 1 12 Matrices

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

GENERAL ARTICLE Realm of Matrices

GENERAL ARTICLE Realm of Matrices Realm of Matrices Exponential and Logarithm Functions Debapriya Biswas Debapriya Biswas is an Assistant Professor at the Department of Mathematics, IIT- Kharagpur, West Bengal, India. Her areas of interest

More information

Multiple eigenvalues

Multiple eigenvalues Multiple eigenvalues arxiv:0711.3948v1 [math.na] 6 Nov 007 Joseph B. Keller Departments of Mathematics and Mechanical Engineering Stanford University Stanford, CA 94305-15 June 4, 007 Abstract The dimensions

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra

1. Foundations of Numerics from Advanced Mathematics. Linear Algebra Foundations of Numerics from Advanced Mathematics Linear Algebra Linear Algebra, October 23, 22 Linear Algebra Mathematical Structures a mathematical structure consists of one or several sets and one or

More information

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and

A matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.

More information

CHAPTER 10 Shape Preserving Properties of B-splines

CHAPTER 10 Shape Preserving Properties of B-splines CHAPTER 10 Shape Preserving Properties of B-splines In earlier chapters we have seen a number of examples of the close relationship between a spline function and its B-spline coefficients This is especially

More information

EE5120 Linear Algebra: Tutorial 6, July-Dec Covers sec 4.2, 5.1, 5.2 of GS

EE5120 Linear Algebra: Tutorial 6, July-Dec Covers sec 4.2, 5.1, 5.2 of GS EE0 Linear Algebra: Tutorial 6, July-Dec 07-8 Covers sec 4.,.,. of GS. State True or False with proper explanation: (a) All vectors are eigenvectors of the Identity matrix. (b) Any matrix can be diagonalized.

More information

Convexity of the Joint Numerical Range

Convexity of the Joint Numerical Range Convexity of the Joint Numerical Range Chi-Kwong Li and Yiu-Tung Poon October 26, 2004 Dedicated to Professor Yik-Hoi Au-Yeung on the occasion of his retirement. Abstract Let A = (A 1,..., A m ) be an

More information

MATH 235. Final ANSWERS May 5, 2015

MATH 235. Final ANSWERS May 5, 2015 MATH 235 Final ANSWERS May 5, 25. ( points) Fix positive integers m, n and consider the vector space V of all m n matrices with entries in the real numbers R. (a) Find the dimension of V and prove your

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis (Numerical Linear Algebra for Computational and Data Sciences) Lecture 14: Eigenvalue Problems; Eigenvalue Revealing Factorizations Xiangmin Jiao Stony Brook University Xiangmin

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

Econ 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms. 1 Diagonalization and Change of Basis

Econ 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms. 1 Diagonalization and Change of Basis Econ 204 Supplement to Section 3.6 Diagonalization and Quadratic Forms De La Fuente notes that, if an n n matrix has n distinct eigenvalues, it can be diagonalized. In this supplement, we will provide

More information

Spectral inequalities and equalities involving products of matrices

Spectral inequalities and equalities involving products of matrices Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department

More information

Linear Algebra Primer

Linear Algebra Primer Introduction Linear Algebra Primer Daniel S. Stutts, Ph.D. Original Edition: 2/99 Current Edition: 4//4 This primer was written to provide a brief overview of the main concepts and methods in elementary

More information

MATH 369 Linear Algebra

MATH 369 Linear Algebra Assignment # Problem # A father and his two sons are together 00 years old. The father is twice as old as his older son and 30 years older than his younger son. How old is each person? Problem # 2 Determine

More information

Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes.

Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes. Vectors and matrices: matrices (Version 2) This is a very brief summary of my lecture notes Matrices and linear equations A matrix is an m-by-n array of numbers A = a 11 a 12 a 13 a 1n a 21 a 22 a 23 a

More information

LU Factorization. A m x n matrix A admits an LU factorization if it can be written in the form of A = LU

LU Factorization. A m x n matrix A admits an LU factorization if it can be written in the form of A = LU LU Factorization A m n matri A admits an LU factorization if it can be written in the form of Where, A = LU L : is a m m lower triangular matri with s on the diagonal. The matri L is invertible and is

More information

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit V: Eigenvalue Problems. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit V: Eigenvalue Problems Lecturer: Dr. David Knezevic Unit V: Eigenvalue Problems Chapter V.2: Fundamentals 2 / 31 Eigenvalues and Eigenvectors Eigenvalues and eigenvectors of

More information

Repeated Eigenvalues and Symmetric Matrices

Repeated Eigenvalues and Symmetric Matrices Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2016 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

A = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is,

A = 3 1. We conclude that the algebraic multiplicity of the eigenvalues are both one, that is, 65 Diagonalizable Matrices It is useful to introduce few more concepts, that are common in the literature Definition 65 The characteristic polynomial of an n n matrix A is the function p(λ) det(a λi) Example

More information

CENTROSYMMETRIC MATRICES: PROPERTIES AND AN ALTERNATIVE APPROACH

CENTROSYMMETRIC MATRICES: PROPERTIES AND AN ALTERNATIVE APPROACH CANADIAN APPLIED MATHEMATICS QUARTERLY Volume 10 Number 4 Winter 2002 CENTROSYMMETRIC MATRICES: PROPERTIES AND AN ALTERNATIVE APPROACH IYAD T. ABU-JEIB ABSTRACT. We present a simple approach to deriving

More information

Control Systems. Linear Algebra topics. L. Lanari

Control Systems. Linear Algebra topics. L. Lanari Control Systems Linear Algebra topics L Lanari outline basic facts about matrices eigenvalues - eigenvectors - characteristic polynomial - algebraic multiplicity eigenvalues invariance under similarity

More information

Eigenpairs and Diagonalizability Math 401, Spring 2010, Professor David Levermore

Eigenpairs and Diagonalizability Math 401, Spring 2010, Professor David Levermore Eigenpairs and Diagonalizability Math 40, Spring 200, Professor David Levermore Eigenpairs Let A be an n n matrix A number λ possibly complex even when A is real is an eigenvalue of A if there exists a

More information

Introduction to Matrices

Introduction to Matrices POLS 704 Introduction to Matrices Introduction to Matrices. The Cast of Characters A matrix is a rectangular array (i.e., a table) of numbers. For example, 2 3 X 4 5 6 (4 3) 7 8 9 0 0 0 Thismatrix,with4rowsand3columns,isoforder

More information

Orthogonal similarity of a real matrix and its transpose

Orthogonal similarity of a real matrix and its transpose Available online at www.sciencedirect.com Linear Algebra and its Applications 428 (2008) 382 392 www.elsevier.com/locate/laa Orthogonal similarity of a real matrix and its transpose J. Vermeer Delft University

More information

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12

a 11 a 12 a 11 a 12 a 13 a 21 a 22 a 23 . a 31 a 32 a 33 a 12 a 21 a 23 a 31 a = = = = 12 24 8 Matrices Determinant of 2 2 matrix Given a 2 2 matrix [ ] a a A = 2 a 2 a 22 the real number a a 22 a 2 a 2 is determinant and denoted by det(a) = a a 2 a 2 a 22 Example 8 Find determinant of 2 2

More information

Basics of Calculus and Algebra

Basics of Calculus and Algebra Monika Department of Economics ISCTE-IUL September 2012 Basics of linear algebra Real valued Functions Differential Calculus Integral Calculus Optimization Introduction I A matrix is a rectangular array

More information

Question: Given an n x n matrix A, how do we find its eigenvalues? Idea: Suppose c is an eigenvalue of A, then what is the determinant of A-cI?

Question: Given an n x n matrix A, how do we find its eigenvalues? Idea: Suppose c is an eigenvalue of A, then what is the determinant of A-cI? Section 5. The Characteristic Polynomial Question: Given an n x n matrix A, how do we find its eigenvalues? Idea: Suppose c is an eigenvalue of A, then what is the determinant of A-cI? Property The eigenvalues

More information

EXERCISES AND SOLUTIONS IN LINEAR ALGEBRA

EXERCISES AND SOLUTIONS IN LINEAR ALGEBRA EXERCISES AND SOLUTIONS IN LINEAR ALGEBRA Mahmut Kuzucuoğlu Middle East Technical University matmah@metu.edu.tr Ankara, TURKEY March 14, 015 ii TABLE OF CONTENTS CHAPTERS 0. PREFACE..................................................

More information

HW2 Solutions

HW2 Solutions 8.024 HW2 Solutions February 4, 200 Apostol.3 4,7,8 4. Show that for all x and y in real Euclidean space: (x, y) 0 x + y 2 x 2 + y 2 Proof. By definition of the norm and the linearity of the inner product,

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

w T 1 w T 2. w T n 0 if i j 1 if i = j

w T 1 w T 2. w T n 0 if i j 1 if i = j Lyapunov Operator Let A F n n be given, and define a linear operator L A : C n n C n n as L A (X) := A X + XA Suppose A is diagonalizable (what follows can be generalized even if this is not possible -

More information

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form Chapter 5. Linear Algebra A linear (algebraic) equation in n unknowns, x 1, x 2,..., x n, is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b where a 1, a 2,..., a n and b are real numbers. 1

More information

On families of anticommuting matrices

On families of anticommuting matrices On families of anticommuting matrices Pavel Hrubeš December 18, 214 Abstract Let e 1,..., e k be complex n n matrices such that e ie j = e je i whenever i j. We conjecture that rk(e 2 1) + rk(e 2 2) +

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Study Notes on Matrices & Determinants for GATE 2017

Study Notes on Matrices & Determinants for GATE 2017 Study Notes on Matrices & Determinants for GATE 2017 Matrices and Determinates are undoubtedly one of the most scoring and high yielding topics in GATE. At least 3-4 questions are always anticipated from

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors week -2 Fall 26 Eigenvalues and eigenvectors The most simple linear transformation from R n to R n may be the transformation of the form: T (x,,, x n ) (λ x, λ 2,, λ n x n

More information

k is a product of elementary matrices.

k is a product of elementary matrices. Mathematics, Spring Lecture (Wilson) Final Eam May, ANSWERS Problem (5 points) (a) There are three kinds of elementary row operations and associated elementary matrices. Describe what each kind of operation

More information

Symmetric and self-adjoint matrices

Symmetric and self-adjoint matrices Symmetric and self-adjoint matrices A matrix A in M n (F) is called symmetric if A T = A, ie A ij = A ji for each i, j; and self-adjoint if A = A, ie A ij = A ji or each i, j Note for A in M n (R) that

More information