Non-Commutative Subharmonic and Harmonic Polynomials

by Joshua A. Hernandez
Dr. J. William Helton, Honors Advisor
UCSD Department of Mathematics
June 9,

Contents

1 Introduction
  1.1 Non-Commutative Algebra in Engineering
2 NC Polynomials
  2.1 NC-Symmetric Polynomials
  2.2 Matrix Positivity
  2.3 Non-Commutative Differentiation
  2.4 Non-Commutative Convexity
  2.5 Non-Commutative Laplacian and Subharmonicity
3 Main Theorem (Two Variables)
4 Existence Proofs
  4.1 Product Rules for Derivatives
    4.1.1 Product Rule for DirD
    4.1.2 The Laplacian of a Product
  4.2 Formulas Involving Derivatives of γ^d
  4.3 Harmonics of degree > 2: Proof of Theorem 1, part (1)
  4.4 Subharmonics of degree > 4: Proof of Theorem 1, parts (2) and (3)
  4.5 Subharmonics of Odd Degree
5 Classification when Degree is Four or Less
  5.1 The Matrix Representation
    5.1.1 Commutative Coefficients
    5.1.2 Non-Commutative Coefficients
    5.1.3 Convexity of p vs. Positivity of its Middle Matrix
  5.2 The Zeroes Lemma
  5.3 The Spanning Polynomials for Subharmonics
    5.3.1 The Laplacian Matrix Decomposition
    5.3.2 Conditions for Positivity
6 Future Work
7 Acknowledgments

1 Introduction

1.1 Non-Commutative Algebra in Engineering

Inequalities involving polynomials in matrices and their inverses, and the associated optimization problems, have become very important in engineering. When the polynomials in question are matrix convex, interior point methods apply directly. In the last few years, approaches proposed in optimization and control theory based on linear matrix inequalities and semidefinite programming have become very important and promising, since the same framework can be used for a large set of problems. Matrix inequalities provide a nice setup for many engineering and related problems, and if they are convex the optimization problem is well behaved and interior point methods provide efficient algorithms which are effective on moderate sized problems. Unfortunately, the class of convex polynomials in matrix variables is very small: it has been proven, in fact, that they are all of degree two or less [HS04]. It is in our interest, then, to discover conditions similar to convexity, but not as restrictive, and to characterize these classes thoroughly.

2 NC Polynomials

Definition 2.1 (Non-Commutative Monomials) A non-commutative monomial m of degree d on the indexed free variables x_1, ..., x_g is a product x_{a_1} x_{a_2} ... x_{a_d} of these variables, corresponding to a unique sequence of indices a_i. A convenient convention is to write m = x^w, where w is the d-tuple (a_1, ..., a_d). We treat w as a word on the alphabet [g] = {1, ..., g} and m, consequently, as a word on x = {x_1, ..., x_g}. Some word conventions:

  |w| = d, the length of w;
  (w)_i = a_i, the i-th letter of w;
  w^T = (a_d, ..., a_1), the transpose of w;
  φ = the empty word (the word of length zero).

Finally, for any two words w_1 = (a_1, ..., a_{d_1}) and w_2 = (b_1, ..., b_{d_2}), we have w_1 w_2 = (a_1, ..., a_{d_1}, b_1, ..., b_{d_2}), called w_1 concatenated with w_2.
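These word conventions are concrete enough to model in code. The following Python sketch (an illustrative encoding assumed here, not the thesis's notation) represents a word as a tuple of letters:

```python
# A word on the alphabet [g] = {1, ..., g} is modeled as a tuple of letters.

def concat(w1, w2):
    """The concatenation w1 w2 of two words."""
    return w1 + w2

def transpose(w):
    """The transpose w^T reverses the spelling of the word."""
    return w[::-1]

# The monomial x1 x2 x1 x1 corresponds to the word (1, 2, 1, 1):
w = (1, 2, 1, 1)
assert len(w) == 4                      # |w| = 4
assert w[0] == 1                        # the first letter (Python is 0-indexed)
assert transpose(w) == (1, 1, 2, 1)

# Transposition reverses concatenation: (w1 w2)^T = w2^T w1^T
w1, w2 = (1, 2), (2, 2, 1)
assert transpose(concat(w1, w2)) == concat(transpose(w2), transpose(w1))
```

The last assertion mirrors property (4) of the polynomial transpose introduced in the next section.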

Definition 2.2 (Non-Commutative Polynomials) A non-commutative polynomial p(x) = p(x_1, ..., x_g) is a sum of non-commutative monomials m (on g variables), to which we associate scalar coefficients A_m in R; m is a word ranging over a given set M_p. The degree of p is equal to the length of the longest word in M_p:

  p(x) = Σ_{m in M_p} A_m m.   (1)

Example 2.1 (A non-commutative polynomial)

  p(x) = p(x_1, x_2) = x_1^2 x_2 x_1 + x_1 x_2 x_1^2 + x_1 x_2 + x_2 x_1 + 7

(on commutative variables, this would be equivalent to 2 x_1^3 x_2 + 2 x_1 x_2 + 7). In this example,

  M_p = { (1, 1, 2, 1), (1, 2, 1, 1), (1, 2), (2, 1), φ }.

2.1 NC-Symmetric Polynomials

The transpose of a polynomial p, denoted p^T, has the following properties:

  (1) (p^T)^T = p
  (2) (p_1 + p_2)^T = p_1^T + p_2^T
  (3) (αp)^T = α p^T (α in R)
  (4) (p_1 p_2)^T = p_2^T p_1^T.

In this paper, we shall consider only polynomials in symmetric variables. That is, we consider variables x_i where x_i^T = x_i. Transposition of a polynomial in such variables amounts to reversing the spelling of each of its terms. We know that x_{a_1}^T = x_{a_1}. Assuming that (x_{a_1} ... x_{a_{d-1}})^T = x_{a_{d-1}} ... x_{a_1}, we have

  (x_{a_1} ... x_{a_d})^T = x_{a_d}^T (x_{a_1} ... x_{a_{d-1}})^T = x_{a_d} (x_{a_{d-1}} ... x_{a_1}) = x_{a_d} ... x_{a_1}.
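The transpose properties above can be spot-checked mechanically. The sketch below (an illustrative encoding, not the thesis's notation) represents an NC polynomial as a dictionary mapping words to coefficients:

```python
from collections import defaultdict

def p_transpose(p):
    """Transpose an NC polynomial: reverse the spelling of each term."""
    return {w[::-1]: c for w, c in p.items()}

def p_mult(p1, p2):
    """Product of NC polynomials: concatenate words, multiply coefficients."""
    out = defaultdict(int)
    for w1, c1 in p1.items():
        for w2, c2 in p2.items():
            out[w1 + w2] += c1 * c2
    return dict(out)

# p = x1^2 x2 x1 + 7 (the constant term is carried by the empty word)
p = {(1, 1, 2, 1): 1, (): 7}
assert p_transpose(p_transpose(p)) == p                  # property (1)

# Property (4): (p1 p2)^T = p2^T p1^T
p1 = {(1, 2): 2}
p2 = {(2, 1, 1): 3, (1,): -1}
assert p_transpose(p_mult(p1, p2)) == p_mult(p_transpose(p2), p_transpose(p1))
```

Note that reversing each word is exactly "reversing the spelling", which is all the transpose does once the variables are symmetric.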

Returning to our indexed-word notation, we find the identity

  (x^w)^T = x^{w^T}.   (3)

Symmetric (or self-adjoint) polynomials are those that are equal to their transposes. We will consider this class of functions in our investigation.

2.2 Matrix Positivity

Definition 2.3 (Matrix-Positive Symmetric Polynomials) A symmetric polynomial p(x) = p(x_1, ..., x_g) is considered matrix-positive over a domain D of tuples of square symmetric matrices X = (X_1, ..., X_g) iff, for any X in D, p(X) is a positive-semidefinite matrix. Note that if p(x) = 1, then p(X) = I for every X in D. For instance:

Example 2.2 p(x) = x_1^2 + ... + x_g^2 is matrix-positive. Substituting symmetric matrices X_i for the x_i, we have, for any vector v,

  v^T (X_1^2 + ... + X_g^2) v = v^T X_1^2 v + ... + v^T X_g^2 v
    = (X_1 v)^T (X_1 v) + ... + (X_g v)^T (X_g v)
    = ||X_1 v||^2 + ... + ||X_g v||^2 >= 0;

that is, p(X) is positive-semidefinite for all X. By definition, p(x) is matrix-positive.

2.3 Non-Commutative Differentiation

For our non-commutative purposes, we take directional derivatives in x_i with regard to an indeterminate direction parameter h [CHSY03]:

  DirD[p(x_1, ..., x_g), x_i, h] := d/dt [p(x_1, ..., (x_i + th), ..., x_g)]_{t=0}.   (4)

We say that this is the directional derivative of p(x) = p(x_1, ..., x_g) on x_i in the direction h.

Example 2.3 The directional derivative

  DirD[x_1^2 x_2, x_1, h] = d/dt [(x_1 + th)^2 x_2]_{t=0}
    = d/dt [x_1^2 x_2 + t h x_1 x_2 + t x_1 h x_2 + t^2 h^2 x_2]_{t=0}
    = [h x_1 x_2 + x_1 h x_2 + 2t h^2 x_2]_{t=0}
    = h x_1 x_2 + x_1 h x_2.

As this example shows, the directional derivative of p on x_i in the direction h is the sum of the terms produced by replacing one instance of x_i with h.

Lemma 2.1 (Linearity of the directional derivative of NC polynomials)

  DirD[a p(x) + b q(x), x_i, h] = a DirD[p(x), x_i, h] + b DirD[q(x), x_i, h].

Proof:

  DirD[a p(x) + b q(x), x_i, h]
    = d/dt [(a p + b q)(x_1, ..., (x_i + th), ..., x_g)]_{t=0}
    = d/dt [a p(x_1, ..., (x_i + th), ..., x_g) + b q(x_1, ..., (x_i + th), ..., x_g)]_{t=0}.

By linearity of the operator d/dt, this equals

  a (d/dt) [p(x_1, ..., (x_i + th), ..., x_g)]_{t=0} + b (d/dt) [q(x_1, ..., (x_i + th), ..., x_g)]_{t=0}
    = a DirD[p(x), x_i, h] + b DirD[q(x), x_i, h].

2.4 Non-Commutative Convexity

The non-commutative Hessian is defined as

  NCHessian[p(x_1, ..., x_g), {x_1, η_1}, ..., {x_g, η_g}] := d^2/dt^2 [p(x_1 + t η_1, ..., x_g + t η_g)]_{t=0}.

Note that this is composed of several independent direction parameters η_i, and that if p is a polynomial, then its Hessian is a polynomial in x and η which is homogeneous of degree 2 in η. Such notation is unique to this subsection. A non-commutative polynomial is considered convex wherever its Hessian is matrix-positive. Equivalently,

Definition 2.4 (Geometric NC Symmetric Convexity) A polynomial p(x) = p(x_1, ..., x_g) is convex over a convex domain D of ordered g-tuples X = (X_1, ..., X_g) of square symmetric matrices if and only if, for every X and Y in D,

  (1/2)(p(X) + p(Y)) - p((X + Y)/2)

is positive-semidefinite. It is proved in [HM98] that convexity is equivalent to geometric convexity. A crucial fact regarding these polynomials (proven by Helton et al. [HS04]) is that they are all of degree two or less.

The commutative analog of this directional Hessian is the quadratic function

  (η_1, ..., η_g) H(p(x)) (η_1, ..., η_g)^T   (7)

where H(p(x)) is the Hessian matrix

  H(p(x)) = [ ∂_{x_1 x_1} p(x)  ...  ∂_{x_1 x_g} p(x) ]
            [       ...         ...        ...        ]
            [ ∂_{x_g x_1} p(x)  ...  ∂_{x_g x_g} p(x) ].   (8)

If this Hessian is positive- (resp. negative-) semidefinite at some critical point (x_1, ..., x_g), then the function has a local minimum (resp. maximum) at that point, and is said to be convex (resp. concave) there. This corresponds, somewhat, to our intuitive physical notions of convexity.

2.5 Non-Commutative Laplacian and Subharmonicity

The condition of subharmonicity is much less restrictive than that of convexity. It is determined by a polynomial's non-commutative Laplacian, which we define as

  Lap[p(x_1, ..., x_g), h] := Σ_{i=1}^{g} DirD[DirD[p(x), x_i, h], x_i, h]   (9)

    = Σ_{i=1}^{g} d^2/dt^2 [p(x_1, ..., (x_i + th), ..., x_g)]_{t=0}.   (10)
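Definitions (4) and (9) can be implemented directly on the word representation: DirD replaces one occurrence of x_i at a time by h, and Lap sums the double derivative over all variables. A small Python sketch (illustrative, with variables encoded as strings):

```python
from collections import defaultdict

def dird(p, xi, h="h"):
    """DirD[p, xi, h]: the sum of terms got by replacing one instance of xi by h."""
    out = defaultdict(int)
    for word, coeff in p.items():
        for k, letter in enumerate(word):
            if letter == xi:
                out[word[:k] + (h,) + word[k + 1:]] += coeff
    return dict(out)

def lap(p, variables=("x1", "x2")):
    """Lap[p, h] = sum_i DirD[DirD[p, xi, h], xi, h], dropping zero terms."""
    out = defaultdict(int)
    for xi in variables:
        for word, coeff in dird(dird(p, xi), xi).items():
            out[word] += coeff
    return {w: c for w, c in out.items() if c != 0}

# Example 2.3: DirD[x1^2 x2, x1, h] = h x1 x2 + x1 h x2
p = {("x1", "x1", "x2"): 1}
assert dird(p, "x1") == {("h", "x1", "x2"): 1, ("x1", "h", "x2"): 1}

# Lap[x1^2, h] = 2 h^2, while x1 x2 is harmonic
assert lap({("x1", "x1"): 1}) == {("h", "h"): 2}
assert lap({("x1", "x2"): 1}) == {}
```

The same two helpers reappear, self-contained, in later sketches so each can be run on its own.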

Note that (9) defines Lap as a finite sum of compositions of linear functions (DirD); thus it too must be linear. Analogous to the directional Laplacian, in commutative variables, is

  Δ(p(x)) h^2   (11)

where Δ(p(x)) is the standard Laplacian, namely

  Δ(p(x)) := Σ_{i=1}^{g} ∂_{x_i x_i} p(x).   (12)

A polynomial is considered harmonic if its Laplacian is zero, and subharmonic if its Laplacian is matrix-positive ("properly subharmonic" is used to describe a polynomial which is subharmonic but not harmonic, that is, having a nonzero, matrix-positive Laplacian).

3 Main Theorem (Two Variables)

For our special homogeneous polynomials in two variables, define

  γ := x_1 + i x_2   (13)

where i is the imaginary unit.

Theorem 1 For d > 2, the homogeneous NC symmetric polynomials in two symmetric variables which are

(1) harmonic of degree d include the linear combinations of Re(γ^d) and Im(γ^d);

(2) subharmonic of degree 2d include the linear combinations

  c_0 [Re(γ^d)]^2 + c_1 Re(γ^{2d}) + c_2 Im(γ^{2d}) = c_0 [Im(γ^d)]^2 + (c_0 + c_1) Re(γ^{2d}) + c_2 Im(γ^{2d})   (15)

where c_0 >= 0;

(3) subharmonic of odd degree do not exist.

All NC symmetric harmonic polynomials of degree 2, and the subharmonic polynomials of degree 4 or less, are listed below.

Degree 2:

  Spanning polynomial: A_1 x_1^2 + A_2 x_2^2 + A_3 (x_1 x_2 + x_2 x_1)
  Region of subharmonicity: A_1 + A_2 > 0
  Region of harmonicity: A_1 + A_2 = 0

Degree 3:

  Spanning polynomial: A_1 (x_1^3 - x_1 x_2^2 - x_2^2 x_1) + A_2 x_2 x_1 x_2 + A_3 x_1 x_2 x_1 + A_4 (x_2^3 - x_1^2 x_2 - x_2 x_1^2)
  Region of subharmonicity: (A_1 + A_2) x_1 + (A_3 + A_4) x_2 > 0 [1]
  Region of harmonicity: A_1 + A_2 = A_3 + A_4 = 0

Degree 4:

  Spanning polynomial: A_1 (x_1^4 - x_1^2 x_2^2 - x_2^2 x_1^2 + x_2^4) + A_2 x_1 x_2^2 x_1 + A_3 x_2 x_1^2 x_2
    + A_4 (x_1 x_2 x_1 x_2 + x_2 x_1 x_2 x_1) + A_5 (x_2^2 x_1 x_2 + x_2 x_1 x_2^2 - x_1^3 x_2 - x_2 x_1^3)
    + A_6 (x_1^2 x_2 x_1 + x_1 x_2 x_1^2 - x_1 x_2^3 - x_2^3 x_1)
  Region of subharmonicity: (A_1 + A_2)(A_1 + A_3) > (A_1 + A_4)^2 + (A_5 + A_6)^2, with A_1 + A_2 > 0 (or, equivalently, A_1 + A_3 > 0)
  Region of harmonicity: A_1 + A_2 = A_1 + A_3 = A_1 + A_4 = 0 and A_5 + A_6 = 0

Conjecture 3.1 The subharmonic and harmonic polynomials are none other than those described above.

[1] In the degree 3 case, note that the subharmonic region depends on x_1 and x_2. Therefore, there is no polynomial of degree three which is subharmonic over all values of x_1 and x_2.

This is a result we believe we have proven, but there was not time to write the proof out carefully in this thesis.
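The low-degree rows of the table can be verified mechanically. The sketch below recomputes the Laplacians of the degree-2 and degree-3 spanning polynomials with sample coefficients (a check in the illustrative Python encoding assumed earlier, not part of the thesis's development):

```python
from collections import defaultdict

def dird(p, xi, h="h"):
    """DirD on the word-dictionary encoding of an NC polynomial."""
    out = defaultdict(int)
    for word, coeff in p.items():
        for k, letter in enumerate(word):
            if letter == xi:
                out[word[:k] + (h,) + word[k + 1:]] += coeff
    return dict(out)

def lap(p, variables=("x1", "x2")):
    """Lap[p, h] = sum_i DirD[DirD[p, xi, h], xi, h], dropping zero terms."""
    out = defaultdict(int)
    for xi in variables:
        for word, coeff in dird(dird(p, xi), xi).items():
            out[word] += coeff
    return {w: c for w, c in out.items() if c != 0}

A1, A2, A3, A4 = 5, -2, 7, 3

# Degree 2 row: Lap = 2 (A1 + A2) h^2
deg2 = {("x1", "x1"): A1, ("x2", "x2"): A2,
        ("x1", "x2"): A3, ("x2", "x1"): A3}
assert lap(deg2) == {("h", "h"): 2 * (A1 + A2)}

# Degree 3 row: Lap = 2 h ((A1 + A2) x1 + (A3 + A4) x2) h, so subharmonicity
# genuinely depends on x1 and x2, as the footnote observes.
deg3 = {("x1", "x1", "x1"): A1, ("x1", "x2", "x2"): -A1, ("x2", "x2", "x1"): -A1,
        ("x2", "x1", "x2"): A2, ("x1", "x2", "x1"): A3,
        ("x2", "x2", "x2"): A4, ("x1", "x1", "x2"): -A4, ("x2", "x1", "x1"): -A4}
assert lap(deg3) == {("h", "x1", "h"): 2 * (A1 + A2),
                     ("h", "x2", "h"): 2 * (A3 + A4)}
```

The degree-3 Laplacian collapses to the single sandwiched form 2 h (...) h, which is exactly why its positivity condition involves the variables themselves.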

4 Existence Proofs

4.1 Product Rules for Derivatives

4.1.1 Product Rule for DirD

Lemma 4.1 The product rule for the directional derivative of NC polynomials is

  DirD[p_1 p_2, x_i, h] = DirD[p_1, x_i, h] p_2 + p_1 DirD[p_2, x_i, h].

Proof: The directional derivative DirD[m, x_i, h] of a product m = m_1 m_2 of non-commutative monomials m_1 and m_2 is the sum of the terms produced by replacing one instance of x_i in m by h. This sum can be divided into two parts:

  μ_1, the sum of the terms whose h lies within the first |m_1| letters, i.e. DirD[m_1, x_i, h] m_2;
  μ_2, the sum of the terms whose h lies within the last |m_2| letters, i.e. m_1 DirD[m_2, x_i, h].

Therefore DirD[m_1 m_2, x_i, h] = DirD[m_1, x_i, h] m_2 + m_1 DirD[m_2, x_i, h]. Noting the linearity of the directional derivative, we can extend this product rule to the product of any two non-commutative polynomials p_1 and p_2:

  DirD[p_1 p_2, x_i, h] = DirD[(Σ_{m_1 in M_{p_1}} A_{m_1} m_1)(Σ_{m_2 in M_{p_2}} A_{m_2} m_2), x_i, h]
    = Σ_{m_1 in M_{p_1}} Σ_{m_2 in M_{p_2}} A_{m_1} A_{m_2} DirD[m_1 m_2, x_i, h]
    = Σ_{m_1 in M_{p_1}} Σ_{m_2 in M_{p_2}} (A_{m_1} A_{m_2} DirD[m_1, x_i, h] m_2 + A_{m_1} A_{m_2} m_1 DirD[m_2, x_i, h])
    = DirD[Σ_{m_1} A_{m_1} m_1, x_i, h] (Σ_{m_2} A_{m_2} m_2) + (Σ_{m_1} A_{m_1} m_1) DirD[Σ_{m_2} A_{m_2} m_2, x_i, h]
    = DirD[p_1, x_i, h] p_2 + p_1 DirD[p_2, x_i, h].   (19)
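Lemma 4.1 is easy to spot-check on concrete polynomials (illustrative Python, in the same dictionary encoding as assumed earlier):

```python
from collections import defaultdict

def dird(p, xi, h="h"):
    """DirD: replace one occurrence of xi by h, summed over all occurrences."""
    out = defaultdict(int)
    for word, coeff in p.items():
        for k, letter in enumerate(word):
            if letter == xi:
                out[word[:k] + (h,) + word[k + 1:]] += coeff
    return dict(out)

def p_mult(p1, p2):
    """NC polynomial product: concatenate words, multiply coefficients."""
    out = defaultdict(int)
    for w1, c1 in p1.items():
        for w2, c2 in p2.items():
            out[w1 + w2] += c1 * c2
    return dict(out)

def p_add(p1, p2):
    out = defaultdict(int, p1)
    for w, c in p2.items():
        out[w] += c
    return dict(out)

# DirD[p1 p2, x1, h] = DirD[p1, x1, h] p2 + p1 DirD[p2, x1, h]
p1 = {("x1", "x2"): 1, ("x2",): 3}
p2 = {("x2", "x1", "x1"): 2}
lhs = dird(p_mult(p1, p2), "x1")
rhs = p_add(p_mult(dird(p1, "x1"), p2), p_mult(p1, dird(p2, "x1")))
assert lhs == rhs
```

The check exercises exactly the two-part split of the proof: the h either lands inside a word coming from p_1 or inside one coming from p_2.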

4.1.2 The Laplacian of a Product

Lemma 4.2 The product rule for the Laplacian of NC polynomials is

  Lap[p_1 p_2, h] = Lap[p_1, h] p_2 + p_1 Lap[p_2, h] + 2 Σ_{i=1}^{g} DirD[p_1, x_i, h] DirD[p_2, x_i, h].

Proof:

  Lap[p_1 p_2, h] = Σ_{i=1}^{g} DirD[DirD[p_1 p_2, x_i, h], x_i, h]
    = Σ_{i=1}^{g} DirD[p_1 DirD[p_2, x_i, h] + DirD[p_1, x_i, h] p_2, x_i, h]
    = Σ_{i=1}^{g} (p_1 DirD[DirD[p_2, x_i, h], x_i, h] + DirD[DirD[p_1, x_i, h], x_i, h] p_2 + 2 DirD[p_1, x_i, h] DirD[p_2, x_i, h])
    = Lap[p_1, h] p_2 + p_1 Lap[p_2, h] + 2 Σ_{i=1}^{g} DirD[p_1, x_i, h] DirD[p_2, x_i, h].

4.2 Formulas Involving Derivatives of γ^d

Recall γ := x_1 + i x_2.

Lemma 4.3 The derivatives of γ^d exhibit the following symmetries:

  DirD[Re(γ^d), x_1, h] = DirD[Im(γ^d), x_2, h]

and

  DirD[Re(γ^d), x_2, h] = -DirD[Im(γ^d), x_1, h].

Proof: It is easily seen that

  DirD[Re(γ), x_1, h] = DirD[Im(γ), x_2, h] = h,
  DirD[Im(γ), x_1, h] = DirD[Re(γ), x_2, h] = 0.

Assume that

  DirD[Re(γ^{d-1}), x_1, h] = DirD[Im(γ^{d-1}), x_2, h],
  DirD[Im(γ^{d-1}), x_1, h] = -DirD[Re(γ^{d-1}), x_2, h].

Then

  DirD[Re(γ^d), x_1, h] = DirD[x_1 Re(γ^{d-1}) - x_2 Im(γ^{d-1}), x_1, h]
    = x_1 DirD[Re(γ^{d-1}), x_1, h] + h Re(γ^{d-1}) - x_2 DirD[Im(γ^{d-1}), x_1, h],

  DirD[Im(γ^d), x_2, h] = DirD[x_1 Im(γ^{d-1}) + x_2 Re(γ^{d-1}), x_2, h]
    = x_1 DirD[Im(γ^{d-1}), x_2, h] + x_2 DirD[Re(γ^{d-1}), x_2, h] + h Re(γ^{d-1}),

so, by the inductive hypothesis,

  DirD[Re(γ^d), x_1, h] = DirD[Im(γ^d), x_2, h],   (23)

which establishes the first half of our inductive step. For the second half, compute

  DirD[Re(γ^d), x_2, h] = DirD[x_1 Re(γ^{d-1}) - x_2 Im(γ^{d-1}), x_2, h]
    = x_1 DirD[Re(γ^{d-1}), x_2, h] - x_2 DirD[Im(γ^{d-1}), x_2, h] - h Im(γ^{d-1}),

  DirD[Im(γ^d), x_1, h] = DirD[x_1 Im(γ^{d-1}) + x_2 Re(γ^{d-1}), x_1, h]
    = x_1 DirD[Im(γ^{d-1}), x_1, h] + h Im(γ^{d-1}) + x_2 DirD[Re(γ^{d-1}), x_1, h],

so

  DirD[Re(γ^d), x_2, h] = -DirD[Im(γ^d), x_1, h].   (24)

Also note that γ^T = γ, and therefore (γ^d)^T = γ^d. So

  (Re(γ^d))^T = Re((γ^d)^T) = Re(γ^d)  and  (Im(γ^d))^T = Im((γ^d)^T) = Im(γ^d).

4.3 Harmonics of degree > 2: Proof of Theorem 1, part (1)

Since no double substitution can be made on a word of length 1,

  Lap[Re(γ), h] = Lap[x_1, h] = 0 and Lap[Im(γ), h] = Lap[x_2, h] = 0.   (26)

Now assume that

  Lap[Re(γ^{d-1}), h] = Lap[Im(γ^{d-1}), h] = 0.   (27)

Pushing ahead,

  Re(γ^d) = Re((x_1 + i x_2) γ^{d-1}) = x_1 Re(γ^{d-1}) - x_2 Im(γ^{d-1}),   (28)
  Im(γ^d) = Im((x_1 + i x_2) γ^{d-1}) = x_1 Im(γ^{d-1}) + x_2 Re(γ^{d-1}).   (29)

Applying our product rule to (28):

  Lap[Re(γ^d), h] = Lap[x_1 Re(γ^{d-1}), h] - Lap[x_2 Im(γ^{d-1}), h]
    = x_1 Lap[Re(γ^{d-1}), h] + Lap[x_1, h] Re(γ^{d-1})
      + 2 DirD[x_1, x_1, h] DirD[Re(γ^{d-1}), x_1, h] + 2 DirD[x_1, x_2, h] DirD[Re(γ^{d-1}), x_2, h]
      - x_2 Lap[Im(γ^{d-1}), h] - Lap[x_2, h] Im(γ^{d-1})
      - 2 DirD[x_2, x_1, h] DirD[Im(γ^{d-1}), x_1, h] - 2 DirD[x_2, x_2, h] DirD[Im(γ^{d-1}), x_2, h].

Use (26) and (27) to see that the Lap[·] terms are 0, and that the cross partials DirD[x_1, x_2, h] and DirD[x_2, x_1, h] are 0, to get

  Lap[Re(γ^d), h] = 2h DirD[Re(γ^{d-1}), x_1, h] - 2h DirD[Im(γ^{d-1}), x_2, h].

By the symmetry Lemma 4.3, this means

  Re(Lap[γ^d, h]) = Lap[Re(γ^d), h] = 0.   (30)

By a similar argument, applying the product rule to (29),

  Im(Lap[γ^d, h]) = Lap[Im(γ^d), h] = 0.   (31)

Harmonic polynomials are further discussed in [McA04].

4.4 Subharmonics of degree > 4: Proof of Theorem 1, parts (2) and (3)

We first show that Lap[(Re(γ^d))^2, h] is matrix-positive for all d.
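Before carrying this out, both Lemma 4.3 and the harmonicity just established can be checked mechanically for small d, together with a numerical spot check of the matrix positivity about to be proven. The sketch below builds γ^d by the recursion (28)-(29) and substitutes random symmetric matrices (illustrative Python; the numeric part is a sanity test, not a proof):

```python
import numpy as np
from collections import defaultdict

def dird(p, xi, h="h"):
    out = defaultdict(complex)
    for word, coeff in p.items():
        for k, letter in enumerate(word):
            if letter == xi:
                out[word[:k] + (h,) + word[k + 1:]] += coeff
    return {w: c for w, c in out.items() if c != 0}

def lap(p, variables=("x1", "x2")):
    out = defaultdict(complex)
    for xi in variables:
        for word, coeff in dird(dird(p, xi), xi).items():
            out[word] += coeff
    return {w: c for w, c in out.items() if c != 0}

def gamma_power(d):
    """Expand gamma^d = (x1 + i x2)^d via the recursion (28)-(29)."""
    p = {(): 1 + 0j}
    for _ in range(d):
        q = defaultdict(complex)
        for w, c in p.items():
            q[("x1",) + w] += c
            q[("x2",) + w] += 1j * c
        p = dict(q)
    return p

def re(p):
    return {w: c.real for w, c in p.items() if c.real != 0}

def im(p):
    return {w: c.imag for w, c in p.items() if c.imag != 0}

def mult(p1, p2):
    out = defaultdict(complex)
    for w1, c1 in p1.items():
        for w2, c2 in p2.items():
            out[w1 + w2] += c1 * c2
    return dict(out)

for d in range(1, 6):
    g = gamma_power(d)
    # Lemma 4.3 symmetries
    assert dird(re(g), "x1") == dird(im(g), "x2")
    assert dird(re(g), "x2") == {w: -c for w, c in dird(im(g), "x1").items()}
    # Harmonicity: Lap[Re(gamma^d), h] = Lap[Im(gamma^d), h] = 0
    assert lap(re(g)) == {} and lap(im(g)) == {}

# Matrix positivity of Lap[(Re(gamma^2))^2, h] at a random substitution
L = lap(mult(re(gamma_power(2)), re(gamma_power(2))))
rng = np.random.default_rng(0)
def sym(n=4):
    a = rng.standard_normal((n, n))
    return (a + a.T) / 2
subs = {"x1": sym(), "x2": sym(), "h": sym()}
val = sum(c * np.linalg.multi_dot([subs[l] for l in w]) for w, c in L.items()).real
assert np.linalg.eigvalsh(val).min() > -1e-8
```

Here Re(γ^2) = x_1^2 - x_2^2, and the Laplacian of its square evaluates to a positive semidefinite matrix, in line with the sum-of-squares identity proved next.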

By the product rule for the Laplacian,

  Lap[(Re(γ^d))^2, h] = Lap[Re(γ^d), h] Re(γ^d) + Re(γ^d) Lap[Re(γ^d), h]
      + 2 ((DirD[Re(γ^d), x_1, h])^2 + (DirD[Re(γ^d), x_2, h])^2)
    = 2 ((DirD[Re(γ^d), x_1, h])^2 + (DirD[Re(γ^d), x_2, h])^2),

which is a sum of squares (as Re(γ^d) is symmetric, so are its directional derivatives). Thus we have shown that (Re(γ^d))^2 is subharmonic. The same argument applies to (Im(γ^d))^2.

By Helton's theorem [Hel02], we know that a matrix-positive polynomial must be expressible as a sum of squares, and therefore must have even degree. Since the directional Laplacian preserves degree, subharmonic polynomials, having matrix-positive Laplacians, must have even degree.

Now we prove the formula (15) relating subharmonics. We use the elementary identity

  γ^{2d} = (Re(γ^d) + i Im(γ^d))^2 = (Re(γ^d))^2 - (Im(γ^d))^2 + i (Re(γ^d) Im(γ^d) + Im(γ^d) Re(γ^d)).

Therefore Re(γ^{2d}) = (Re(γ^d))^2 - (Im(γ^d))^2, so

  c_0 [Re(γ^d)]^2 + c_1 Re(γ^{2d}) + c_2 Im(γ^{2d}) = c_0 [Im(γ^d)]^2 + (c_0 + c_1) Re(γ^{2d}) + c_2 Im(γ^{2d}).

4.5 Subharmonics of Odd Degree

To see that there are no subharmonics of odd degree, note that the Laplacian of an odd-degree polynomial is itself an odd-degree polynomial. However, by [Hel02], this polynomial being matrix-positive would make it a sum of squares, and any sum of squares has even degree, producing a contradiction.

5 Classification when Degree is Four or Less

5.1 The Matrix Representation

Each polynomial quadratic in the variables x_1, ..., x_g can be expressed as a product q^T P q, where P is independent of all x_i. We shall call P the middle

matrix and q the border vector for this representation.

5.1.1 Commutative Coefficients

Consider the generalized quadratic polynomial

  Q(x) = Σ_{i=1}^{g} Σ_{j=1}^{g} A_{ij} x_i x_j

with scalar coefficients A_{ij}. Define the vector q = q(x) and the matrix P such that

  P_{ij} = A_{ij} and q = (x_1, ..., x_g)^T;   (34)

then

  q^T P q = Σ_{i=1}^{g} x_i Σ_{j=1}^{g} P_{ij} x_j = Σ_{i=1}^{g} Σ_{j=1}^{g} x_i A_{ij} x_j = Σ_{i=1}^{g} Σ_{j=1}^{g} A_{ij} x_i x_j = Q(x).

It is easy to see that when Q(x) is symmetric (i.e. when the coefficient of x_i x_j equals that of x_j x_i), P must also be symmetric: P_{ij} = A_{ij} = A_{ji} = P_{ji}.

Example 5.1 (The matrix representation of the general quadratic, g = 2)

  (x_1  x_2) [ A  B ] (x_1)  =  A x_1^2 + B x_1 x_2 + C x_2 x_1 + D x_2^2.
             [ C  D ] (x_2)

5.1.2 Non-Commutative Coefficients

The generalized quadratic polynomial with non-commutative coefficients is a trifle more complicated. However, in this paper we will be concerned mostly with the case g = 1; that is, we will need to extract only one letter, h, from the middle matrix P. The general polynomial quadratic in h, with non-commutative coefficients, is written

  Q(h) = Σ_{i=1}^{N} Σ_{j=1}^{N} A_{ij} (h m_i)^T c_{ij} (h m_j);

here the m_i are the terms applied outside the target letters h, the c_{ij} are the inside terms, and the A_{ij} are scalars. These terms do not necessarily commute,

so they cannot be grouped together and simplified as above. Define P as the N-by-N matrix whose (i, j)-th element is A_{ij} c_{ij}, and define q as

  q = (h m_1, h m_2, ..., h m_N)^T.

Then we see that

  q^T P q = Σ_{i=1}^{N} Σ_{j=1}^{N} (q)_i^T P_{ij} (q)_j = Σ_{i=1}^{N} Σ_{j=1}^{N} A_{ij} (h m_i)^T c_{ij} (h m_j) = Q(h).

Example 5.2 (A matrix representation with non-commutative coefficients)

  h x_1 x_2 x_1 h + 3 x_1 h x_2^2 h x_1 + x_2 h x_2 x_1 h x_2^2 + x_2^2 h x_1 x_2 h x_2 = q^T P q,

where

  q = (h, h x_1, h x_2, h x_2^2)^T   and   P = [ x_1 x_2 x_1   0         0         0       ]
                                               [ 0             3 x_2^2   0         0       ]
                                               [ 0             0         0         x_2 x_1 ]
                                               [ 0             0         x_1 x_2   0       ].

It can be easily verified that Q is symmetric if and only if the matrix P is symmetric.

5.1.3 Convexity of p vs. Positivity of its Middle Matrix

Given a polynomial Q(x, h) which is quadratic in h, there may be several q^T P q representations of Q. However, once q is chosen, with entries which are monomials that do not repeat, the middle matrix P is uniquely determined (up to adding rows and columns of zeroes). Furthermore, the polynomial Q in x and h is matrix-positive if and only if P(x) is a positive-semidefinite matrix whenever symmetric matrices are substituted for x. Thus checking matrix positivity of Q is equivalent to checking matrix positivity of P. For discussion and proofs of all this, see [CHSY03].

5.2 The Zeroes Lemma

The following is useful in our analysis of subharmonics.

Lemma 5.1 Let A be any symmetric N-by-N matrix with entries in some domain R of polynomials on free variables with real coefficients. If there exists some diagonal entry a_ii = 0 and a corresponding off-diagonal entry a_ij = a_ji^T != 0, then A is not matrix-positive semidefinite.

Proof: Let e_i and e_j be standard basis vectors for R^N (so that e_i^T A e_j = a_ij), and define v := β_1 e_i + β_2 e_j, where β_1, β_2 in R. Then, since a_ii = 0,

  v^T A v = ((a_ij + a_ji) β_1 + a_jj β_2) β_2 = (2 a_ij β_1 + a_jj β_2) β_2.

Given β_2 > 0, we can choose β_1 so that

  2 β_1 a_ij β_2 + β_2 a_jj β_2

is either matrix-positive or matrix-negative; in particular, β_1 can be chosen so that v^T A v fails to be positive-semidefinite.

This lemma is very useful when applied to our matrix representation of the Laplacian of a symmetric polynomial on free variables. Note that we can express the Laplacian Lap[p, h] of any polynomial p(x) as

  Lap[p, h] = Σ_{i=1}^{n} Σ_{j=1}^{n} A_{ij} (h m_i)^T c_{ij} (h m_j),

and that Lap[p, h] = q^T P q, where P_{ij} = A_{ij} c_{ij} and

  q = (h m_1, ..., h m_n)^T.

For any i, if

  A_{ii} (h m_i)^T c_{ii} (h m_i) = 0,

then P_{ii} = A_{ii} c_{ii} = 0. By the Zeroes Lemma, either P is not positive-semidefinite (so that Lap[p, h] is not matrix-positive), or, in the latter case, P_{ij} = P_{ji} = 0 for 1 <= j <= n, i.e.

  A_{ij} (h m_i)^T c_{ij} (h m_j) = A_{ji} (h m_j)^T c_{ji} (h m_i) = 0 for 1 <= j <= n.
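Two numeric sanity checks of the machinery above (illustrative numpy code; the block construction via `np.kron` is an assumption of this sketch, not the thesis's own construction). The first realizes Example 5.1's middle-matrix identity with symmetric matrices substituted for x_1 and x_2; the second illustrates the Zeroes Lemma in its scalar shadow, where a zero diagonal entry with a nonzero partner off the diagonal forces indefiniteness:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def sym():
    a = rng.standard_normal((n, n))
    return (a + a.T) / 2

# Example 5.1: q^T P q = A x1^2 + B x1 x2 + C x2 x1 + D x2^2
A, B, C, D = rng.standard_normal(4)
X1, X2 = sym(), sym()
q = np.vstack([X1, X2])                               # border vector (block column)
P = np.kron(np.array([[A, B], [C, D]]), np.eye(n))    # middle matrix, blown up blockwise
assert np.allclose(q.T @ P @ q,
                   A * X1 @ X1 + B * X1 @ X2 + C * X2 @ X1 + D * X2 @ X2)

# Zeroes Lemma, scalar shadow: [[0, a], [a, c]] has determinant -a^2 < 0
# whenever a != 0, hence a negative eigenvalue no matter what c is.
for _ in range(100):
    a = rng.uniform(0.5, 2.0)
    c = rng.standard_normal()
    M = np.array([[0.0, a], [a, c]])
    assert np.linalg.eigvalsh(M).min() < 0
```

The 2-by-2 computation is exactly the v = β_1 e_i + β_2 e_j argument of the proof, restricted to the principal submatrix on rows and columns i, j.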

5.3 The Spanning Polynomials for Subharmonics

We begin with a basis for the set of degree-4 homogeneous NC symmetric polynomials in two symmetric free variables:

  p = A_1 x_1^4 + A_2 (x_1^3 x_2 + x_2 x_1^3) + A_3 (x_1^2 x_2 x_1 + x_1 x_2 x_1^2) + A_4 (x_1^2 x_2^2 + x_2^2 x_1^2)
    + A_5 (x_1 x_2 x_1 x_2 + x_2 x_1 x_2 x_1) + A_6 x_1 x_2^2 x_1 + A_7 (x_1 x_2^3 + x_2^3 x_1) + A_8 x_2 x_1^2 x_2
    + A_9 (x_2 x_1 x_2^2 + x_2^2 x_1 x_2) + A_10 x_2^4.

We calculate the Laplacian of p:

  Lap[p, h] = 2 A_1 (h^2 x_1^2 + h x_1 h x_1 + h x_1^2 h + x_1 h^2 x_1 + x_1 h x_1 h + x_1^2 h^2)
    + 2 A_2 (h^2 x_1 x_2 + h x_1 h x_2 + x_1 h^2 x_2 + x_2 h^2 x_1 + x_2 h x_1 h + x_2 x_1 h^2)
    + 2 A_3 (h^2 x_2 x_1 + h x_1 x_2 h + h x_2 h x_1 + h x_2 x_1 h + x_1 h x_2 h + x_1 x_2 h^2)
    + 2 A_4 (h^2 x_1^2 + h^2 x_2^2 + x_1^2 h^2 + x_2^2 h^2)
    + 2 A_5 (h x_1 h x_1 + h x_2 h x_2 + x_1 h x_1 h + x_2 h x_2 h)
    + 2 A_6 (h x_2^2 h + x_1 h^2 x_1)
    + 2 A_7 (h^2 x_2 x_1 + h x_2 h x_1 + x_1 h^2 x_2 + x_1 h x_2 h + x_1 x_2 h^2 + x_2 h^2 x_1)
    + 2 A_8 (h x_1^2 h + x_2 h^2 x_2)
    + 2 A_9 (h^2 x_1 x_2 + h x_1 h x_2 + h x_1 x_2 h + h x_2 x_1 h + x_2 h x_1 h + x_2 x_1 h^2)
    + 2 A_10 (h^2 x_2^2 + h x_2 h x_2 + h x_2^2 h + x_2 h^2 x_2 + x_2 h x_2 h + x_2^2 h^2).

5.3.1 The Laplacian Matrix Decomposition

The directional Laplacian is quadratic in h, and so can be represented by the matrix product q^T P q, where q and P/2 are, respectively,

  q = (h, h x_1, h x_2, h x_1^2, h x_2 x_1, h x_1 x_2, h x_2^2)^T

and

  Row 1: [(A_1+A_8) x_1^2 + (A_6+A_10) x_2^2 + (A_3+A_9)(x_1 x_2 + x_2 x_1),  (A_1+A_5) x_1 + (A_3+A_7) x_2,  (A_2+A_9) x_1 + (A_5+A_10) x_2,  A_1+A_4,  A_3+A_7,  A_2+A_9,  A_4+A_10]
  Row 2: [(A_1+A_5) x_1 + (A_3+A_7) x_2,  A_1+A_6,  A_2+A_7,  0, 0, 0, 0]
  Row 3: [(A_2+A_9) x_1 + (A_5+A_10) x_2,  A_2+A_7,  A_8+A_10,  0, 0, 0, 0]
  Rows 4-7: [A_1+A_4, 0, 0, 0, 0, 0, 0], [A_3+A_7, 0, 0, 0, 0, 0, 0], [A_2+A_9, 0, 0, 0, 0, 0, 0], [A_4+A_10, 0, 0, 0, 0, 0, 0],

respectively. We wish to find the conditions on {A_1, ..., A_10} that ensure that this matrix is positive-semidefinite. By the Zeroes Lemma, the zeroes on the last four diagonal entries force all entries in the last four rows and columns to be zero. This corresponds to the following assignments:

  A_4 = -A_1,  A_10 = A_1,  A_9 = -A_2,  A_7 = -A_3.   (44)

Applying these conditions to the matrix above, and ignoring the zeroed-out rows and columns, we have:

  [ (A_1+A_8) x_1^2 + (A_1+A_6) x_2^2 - (A_2-A_3)(x_1 x_2 + x_2 x_1)   (A_1+A_5) x_1   (A_1+A_5) x_2 ]
  [ (A_1+A_5) x_1                                                      A_1+A_6         A_2-A_3       ]
  [ (A_1+A_5) x_2                                                      A_2-A_3         A_1+A_8       ].

This matrix can be simplified by substituting single letters for the recurring pairs:

  G = A_1 + A_6,  H = A_1 + A_8,  J = A_2 - A_3,  K = A_1 + A_5.

Now we have a matrix small enough that we can find its LDL^T (Cholesky) decomposition:

  [ H x_1^2 - J(x_1 x_2 + x_2 x_1) + G x_2^2   K x_1   K x_2 ]
  [ K x_1                                      G       J     ]
  [ K x_2                                      J       H     ].
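A commutative sanity check of this matrix (scalars substituted for x_1 and x_2; illustrative code under the sign conventions used above): whenever G > 0 and GH > J^2 + K^2, every substitution leaves it positive semidefinite, matching the pivot conditions derived next.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(200):
    G = rng.uniform(0.5, 2.0)
    J, K = rng.uniform(-1.0, 1.0, size=2)
    H = (J * J + K * K) / G + rng.uniform(0.1, 2.0)   # forces G H > J^2 + K^2
    x1, x2 = rng.standard_normal(2)
    M = np.array([
        [H * x1 * x1 - 2 * J * x1 * x2 + G * x2 * x2, K * x1, K * x2],
        [K * x1, G, J],
        [K * x2, J, H],
    ])
    assert np.linalg.eigvalsh(M).min() >= -1e-9
```

For scalar substitutions the top-left entry is the binary quadratic H x_1^2 - 2 J x_1 x_2 + G x_2^2, whose discriminant 4J^2 - 4GH is negative under the same condition; the free-variable case is what Section 5.3.2 addresses.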

5.3.2 Conditions for Positivity

It is now possible to discover the exact conditions for matrix positivity: each diagonal entry of D must be greater than or equal to zero for the Laplacian to be matrix-positive. This will reveal the values of G, H, J and K such that the matrix above is positive-semidefinite. The three pivots are

  G,
  H - J^2/G,
  H x_1^2 - J(x_1 x_2 + x_2 x_1) + G x_2^2 - K^2 x_1^2/G - (K x_2 - J K x_1/G)(H - J^2/G)^{-1}(K x_2 - J K x_1/G),

and so we are left with three inequalities which must be satisfied for matrix positivity:

  G > 0,
  H - J^2/G > 0,
  H x_1^2 - J(x_1 x_2 + x_2 x_1) + G x_2^2 - K^2 x_1^2/G - (K x_2 - J K x_1/G)(H - J^2/G)^{-1}(K x_2 - J K x_1/G) > 0.

The last condition involves x_1 and x_2, and therefore must itself be converted into a matrix representation to reveal the conditions on G, H, J and K under which it is positive [CHSY03]. Its middle matrix, with respect to the border vector (x_1, x_2)^T, is

  [ H - K^2/G - J^2 K^2/(G(GH - J^2))   -J + J K^2/(GH - J^2)  ]
  [ -J + J K^2/(GH - J^2)               G - G K^2/(GH - J^2)   ].

Again we perform the LDL^T decomposition; positivity of its pivots requires, in particular, that

  (H - K^2/G - J^2 K^2/(G(GH - J^2)))(G - G K^2/(GH - J^2)) - (J - J K^2/(GH - J^2))^2 > 0.   (45)

Although the inequality (45) is quite complicated, we can simplify it some by multiplying it by expressions which are known to be positive, such as G, H - J^2/G, and G - K^2/(H - J^2/G), which we encountered earlier. This gives a polynomial inequality equivalent to (45) which, after some simplification, reduces to

  (GH - J^2 - K^2)^2 > 0,

which, considering only the case of all real coefficients, is rather vacuous, informing us only that GH - J^2 - K^2 != 0. Bringing all our inequalities together (simplifying each as we did above), we notice that two are redundant:

  (I) G > 0,  (II) GH > J^2,  (III) GH > J^2 + K^2,  (IV) GH >= J^2 + K^2,

as (III) implies (II) and (IV). Therefore, we conclude that the set of polynomials making the Laplacian matrix-positive is exactly those of the form

  f = A_1 (x_1^4 - x_1^2 x_2^2 - x_2^2 x_1^2 + x_2^4)   (46)
    + A_2 (x_1^3 x_2 + x_2 x_1^3 - x_2 x_1 x_2^2 - x_2^2 x_1 x_2)
    + A_3 (x_1^2 x_2 x_1 + x_1 x_2 x_1^2 - x_1 x_2^3 - x_2^3 x_1)
    + A_5 (x_1 x_2 x_1 x_2 + x_2 x_1 x_2 x_1)
    + A_6 x_1 x_2^2 x_1 + A_8 x_2 x_1^2 x_2,

with coefficients satisfying the inequalities

  (A_1 + A_8)(A_1 + A_6) > (A_3 - A_2)^2 + (A_1 + A_5)^2   (47)

and

  A_1 + A_8 > 0 (or, equivalently, A_1 + A_6 > 0).   (48)

6 Future Work

The subharmonic polynomials may serve as substitutes for convex polynomials in special cases. Currently, our group is looking at other weakenings

of convexity, including quasiconvexity, in which the number of negative eigenvalues is tolerably small.

The proofs in progress that Re(γ^d) and Im(γ^d) (resp. [Re(γ^d)]^2 and [Im(γ^d)]^2) are the only harmonic (resp. subharmonic) polynomials of degree d (resp. 2d, when d > 2) rely on an observation about the words generated by substituting pairs of identical letters (as with the Laplacian). These proofs should follow shortly.
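As a closing sanity check on the classification (46)-(48) of Section 5.3, the sketch below picks coefficients satisfying the inequalities, forms the Laplacian symbolically, and confirms matrix positivity on random symmetric substitutions (illustrative Python; a numerical spot check, not a proof):

```python
import numpy as np
from collections import defaultdict

def dird(p, xi, h="h"):
    out = defaultdict(int)
    for word, coeff in p.items():
        for k, letter in enumerate(word):
            if letter == xi:
                out[word[:k] + (h,) + word[k + 1:]] += coeff
    return dict(out)

def lap(p, variables=("x1", "x2")):
    out = defaultdict(int)
    for xi in variables:
        for word, coeff in dird(dird(p, xi), xi).items():
            out[word] += coeff
    return {w: c for w, c in out.items() if c != 0}

# f from (46) with A1 = A6 = A8 = 1 and A2 = A3 = A5 = 0:
# (A1+A8)(A1+A6) = 4 > (A3-A2)^2 + (A1+A5)^2 = 1, and A1 + A8 = 2 > 0.
x1, x2 = "x1", "x2"
f = {(x1, x1, x1, x1): 1, (x1, x1, x2, x2): -1, (x2, x2, x1, x1): -1,
     (x2, x2, x2, x2): 1, (x1, x2, x2, x1): 1, (x2, x1, x1, x2): 1}
L = lap(f)

rng = np.random.default_rng(0)
def sym(n=3):
    a = rng.standard_normal((n, n))
    return (a + a.T) / 2

for _ in range(25):
    subs = {"x1": sym(), "x2": sym(), "h": sym()}
    val = sum(c * np.linalg.multi_dot([subs[l] for l in w]) for w, c in L.items())
    assert np.linalg.eigvalsh(val).min() >= -1e-8
```

With this choice of coefficients the Laplacian collapses to an explicit sum of squares, so every substitution of symmetric matrices yields a positive semidefinite value, as the check confirms.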

7 Acknowledgments

Thanks to Jan Bitmead for her support and for organizing the Honors Program. Thanks to Danny MacAllaster for the recursion for the harmonics. Thanks to everybody in the lab: Nick Slinglend, John Shopple and Mark Stankus. Thanks especially to John Farina for looking over the manuscript a couple of times. Thanks to the Math Department for reviewing this paper. Very much thanks to Professor Helton, who was there to help from the very beginning right down to the end. Thanks to the McNair Foundation and to Professor Helton's NSF grant for financial support. Thanks go from Professor Helton to Professor Roger Howe for very helpful conversations about the classical commutative analog of the noncommutative results here.

References

[CHSY03] Juan F. Camino, J. W. Helton, R. E. Skelton, and Jieping Ye. Matrix inequalities: a symbolic procedure to determine convexity automatically. Integral Equations Operator Theory, 46(4), 2003.

[Hel02] J. W. Helton. "Positive" noncommutative polynomials are sums of squares. Ann. of Math. (2), 156(2), 2002.

[HM98] J. W. Helton and Orlando Merino. Classical Control Using H-infinity Methods. SIAM, 1998.

[HS04] J. W. Helton and Scott McCullough. Convex noncommutative polynomials have degree two or less. SIAM J. Matrix Anal. Appl., 25(4), 2004.

[McA04] Daniel P. McAllaster. Homogeneous harmonic noncommutative polynomials from degree 3 to 7 (includes a recurrence relation for higher degrees). dmcallas/papers.html, July


More information

OR MSc Maths Revision Course

OR MSc Maths Revision Course OR MSc Maths Revision Course Tom Byrne School of Mathematics University of Edinburgh t.m.byrne@sms.ed.ac.uk 15 September 2017 General Information Today JCMB Lecture Theatre A, 09:30-12:30 Mathematics revision

More information

ALGEBRAIC PROPERTIES OF OPERATOR ROOTS OF POLYNOMIALS

ALGEBRAIC PROPERTIES OF OPERATOR ROOTS OF POLYNOMIALS ALGEBRAIC PROPERTIES OF OPERATOR ROOTS OF POLYNOMIALS TRIEU LE Abstract. Properties of m-selfadjoint and m-isometric operators have been investigated by several researchers. Particularly interesting to

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 3: Positive-Definite Systems; Cholesky Factorization Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical Analysis I 1 / 11 Symmetric

More information

Math 1060 Linear Algebra Homework Exercises 1 1. Find the complete solutions (if any!) to each of the following systems of simultaneous equations:

Math 1060 Linear Algebra Homework Exercises 1 1. Find the complete solutions (if any!) to each of the following systems of simultaneous equations: Homework Exercises 1 1 Find the complete solutions (if any!) to each of the following systems of simultaneous equations: (i) x 4y + 3z = 2 3x 11y + 13z = 3 2x 9y + 2z = 7 x 2y + 6z = 2 (ii) x 4y + 3z =

More information

Spectrally arbitrary star sign patterns

Spectrally arbitrary star sign patterns Linear Algebra and its Applications 400 (2005) 99 119 wwwelseviercom/locate/laa Spectrally arbitrary star sign patterns G MacGillivray, RM Tifenbach, P van den Driessche Department of Mathematics and Statistics,

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

Math (P)refresher Lecture 8: Unconstrained Optimization

Math (P)refresher Lecture 8: Unconstrained Optimization Math (P)refresher Lecture 8: Unconstrained Optimization September 2006 Today s Topics : Quadratic Forms Definiteness of Quadratic Forms Maxima and Minima in R n First Order Conditions Second Order Conditions

More information

Notes on the Matrix-Tree theorem and Cayley s tree enumerator

Notes on the Matrix-Tree theorem and Cayley s tree enumerator Notes on the Matrix-Tree theorem and Cayley s tree enumerator 1 Cayley s tree enumerator Recall that the degree of a vertex in a tree (or in any graph) is the number of edges emanating from it We will

More information

and let s calculate the image of some vectors under the transformation T.

and let s calculate the image of some vectors under the transformation T. Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =

More information

Math113: Linear Algebra. Beifang Chen

Math113: Linear Algebra. Beifang Chen Math3: Linear Algebra Beifang Chen Spring 26 Contents Systems of Linear Equations 3 Systems of Linear Equations 3 Linear Systems 3 2 Geometric Interpretation 3 3 Matrices of Linear Systems 4 4 Elementary

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Math 321: Linear Algebra

Math 321: Linear Algebra Math 32: Linear Algebra T. Kapitula Department of Mathematics and Statistics University of New Mexico September 8, 24 Textbook: Linear Algebra,by J. Hefferon E-mail: kapitula@math.unm.edu Prof. Kapitula,

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

8. Prime Factorization and Primary Decompositions

8. Prime Factorization and Primary Decompositions 70 Andreas Gathmann 8. Prime Factorization and Primary Decompositions 13 When it comes to actual computations, Euclidean domains (or more generally principal ideal domains) are probably the nicest rings

More information

be any ring homomorphism and let s S be any element of S. Then there is a unique ring homomorphism

be any ring homomorphism and let s S be any element of S. Then there is a unique ring homomorphism 21. Polynomial rings Let us now turn out attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UFD. Therefore

More information

MATH 320, WEEK 7: Matrices, Matrix Operations

MATH 320, WEEK 7: Matrices, Matrix Operations MATH 320, WEEK 7: Matrices, Matrix Operations 1 Matrices We have introduced ourselves to the notion of the grid-like coefficient matrix as a short-hand coefficient place-keeper for performing Gaussian

More information

Polynomial Harmonic Decompositions

Polynomial Harmonic Decompositions DOI: 10.1515/auom-2017-0002 An. Şt. Univ. Ovidius Constanţa Vol. 25(1),2017, 25 32 Polynomial Harmonic Decompositions Nicolae Anghel Abstract For real polynomials in two indeterminates a classical polynomial

More information

8. Diagonalization.

8. Diagonalization. 8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard

More information

10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections )

10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections ) c Dr. Igor Zelenko, Fall 2017 1 10. Linear Systems of ODEs, Matrix multiplication, superposition principle (parts of sections 7.2-7.4) 1. When each of the functions F 1, F 2,..., F n in right-hand side

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

3. Coding theory 3.1. Basic concepts

3. Coding theory 3.1. Basic concepts 3. CODING THEORY 1 3. Coding theory 3.1. Basic concepts In this chapter we will discuss briefly some aspects of error correcting codes. The main problem is that if information is sent via a noisy channel,

More information

Legendre s Equation. PHYS Southern Illinois University. October 18, 2016

Legendre s Equation. PHYS Southern Illinois University. October 18, 2016 Legendre s Equation PHYS 500 - Southern Illinois University October 18, 2016 PHYS 500 - Southern Illinois University Legendre s Equation October 18, 2016 1 / 11 Legendre s Equation Recall We are trying

More information

Econ Slides from Lecture 7

Econ Slides from Lecture 7 Econ 205 Sobel Econ 205 - Slides from Lecture 7 Joel Sobel August 31, 2010 Linear Algebra: Main Theory A linear combination of a collection of vectors {x 1,..., x k } is a vector of the form k λ ix i for

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors LECTURE 3 Eigenvalues and Eigenvectors Definition 3.. Let A be an n n matrix. The eigenvalue-eigenvector problem for A is the problem of finding numbers λ and vectors v R 3 such that Av = λv. If λ, v are

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Quadrature for the Finite Free Convolution

Quadrature for the Finite Free Convolution Spectral Graph Theory Lecture 23 Quadrature for the Finite Free Convolution Daniel A. Spielman November 30, 205 Disclaimer These notes are not necessarily an accurate representation of what happened in

More information

March Algebra 2 Question 1. March Algebra 2 Question 1

March Algebra 2 Question 1. March Algebra 2 Question 1 March Algebra 2 Question 1 If the statement is always true for the domain, assign that part a 3. If it is sometimes true, assign it a 2. If it is never true, assign it a 1. Your answer for this question

More information

CANONICAL FORMS FOR LINEAR TRANSFORMATIONS AND MATRICES. D. Katz

CANONICAL FORMS FOR LINEAR TRANSFORMATIONS AND MATRICES. D. Katz CANONICAL FORMS FOR LINEAR TRANSFORMATIONS AND MATRICES D. Katz The purpose of this note is to present the rational canonical form and Jordan canonical form theorems for my M790 class. Throughout, we fix

More information

Chapter 1: Systems of Linear Equations

Chapter 1: Systems of Linear Equations Chapter : Systems of Linear Equations February, 9 Systems of linear equations Linear systems Lecture A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where

More information

1 Matrices and Systems of Linear Equations. a 1n a 2n

1 Matrices and Systems of Linear Equations. a 1n a 2n March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real

More information

Convex Optimization. (EE227A: UC Berkeley) Lecture 28. Suvrit Sra. (Algebra + Optimization) 02 May, 2013

Convex Optimization. (EE227A: UC Berkeley) Lecture 28. Suvrit Sra. (Algebra + Optimization) 02 May, 2013 Convex Optimization (EE227A: UC Berkeley) Lecture 28 (Algebra + Optimization) 02 May, 2013 Suvrit Sra Admin Poster presentation on 10th May mandatory HW, Midterm, Quiz to be reweighted Project final report

More information

Linear Algebra and Matrix Inversion

Linear Algebra and Matrix Inversion Jim Lambers MAT 46/56 Spring Semester 29- Lecture 2 Notes These notes correspond to Section 63 in the text Linear Algebra and Matrix Inversion Vector Spaces and Linear Transformations Matrices are much

More information

Math 240, 4.3 Linear Independence; Bases A. DeCelles. 1. definitions of linear independence, linear dependence, dependence relation, basis

Math 240, 4.3 Linear Independence; Bases A. DeCelles. 1. definitions of linear independence, linear dependence, dependence relation, basis Math 24 4.3 Linear Independence; Bases A. DeCelles Overview Main ideas:. definitions of linear independence linear dependence dependence relation basis 2. characterization of linearly dependent set using

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

Mathematics Course 111: Algebra I Part I: Algebraic Structures, Sets and Permutations

Mathematics Course 111: Algebra I Part I: Algebraic Structures, Sets and Permutations Mathematics Course 111: Algebra I Part I: Algebraic Structures, Sets and Permutations D. R. Wilkins Academic Year 1996-7 1 Number Systems and Matrix Algebra Integers The whole numbers 0, ±1, ±2, ±3, ±4,...

More information

1 Positive definiteness and semidefiniteness

1 Positive definiteness and semidefiniteness Positive definiteness and semidefiniteness Zdeněk Dvořák May 9, 205 For integers a, b, and c, let D(a, b, c) be the diagonal matrix with + for i =,..., a, D i,i = for i = a +,..., a + b,. 0 for i = a +

More information

MA106 Linear Algebra lecture notes

MA106 Linear Algebra lecture notes MA106 Linear Algebra lecture notes Lecturers: Diane Maclagan and Damiano Testa 2017-18 Term 2 Contents 1 Introduction 3 2 Matrix review 3 3 Gaussian Elimination 5 3.1 Linear equations and matrices.......................

More information

Here each term has degree 2 (the sum of exponents is 2 for all summands). A quadratic form of three variables looks as

Here each term has degree 2 (the sum of exponents is 2 for all summands). A quadratic form of three variables looks as Reading [SB], Ch. 16.1-16.3, p. 375-393 1 Quadratic Forms A quadratic function f : R R has the form f(x) = a x. Generalization of this notion to two variables is the quadratic form Q(x 1, x ) = a 11 x

More information

Linear Algebra, Summer 2011, pt. 2

Linear Algebra, Summer 2011, pt. 2 Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................

More information

Second Order Elliptic PDE

Second Order Elliptic PDE Second Order Elliptic PDE T. Muthukumar tmk@iitk.ac.in December 16, 2014 Contents 1 A Quick Introduction to PDE 1 2 Classification of Second Order PDE 3 3 Linear Second Order Elliptic Operators 4 4 Periodic

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

Recitation 8: Graphs and Adjacency Matrices

Recitation 8: Graphs and Adjacency Matrices Math 1b TA: Padraic Bartlett Recitation 8: Graphs and Adjacency Matrices Week 8 Caltech 2011 1 Random Question Suppose you take a large triangle XY Z, and divide it up with straight line segments into

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 3 Review of Matrix Algebra Vectors and matrices are essential for modern analysis of systems of equations algebrai, differential, functional, etc In this

More information

Regular and Positive noncommutative rational functions

Regular and Positive noncommutative rational functions Regular and Positive noncommutative rational functions J. E. Pascoe WashU pascoej@math.wustl.edu June 4, 2016 Joint work with Igor Klep and Jurij Volčič 1 / 23 Positive numbers: the start of real algebraic

More information

6.3 Partial Fractions

6.3 Partial Fractions 6.3 Partial Fractions Mark Woodard Furman U Fall 2009 Mark Woodard (Furman U) 6.3 Partial Fractions Fall 2009 1 / 11 Outline 1 The method illustrated 2 Terminology 3 Factoring Polynomials 4 Partial fraction

More information

CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA

CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA Andrew ID: ljelenak August 25, 2018 This assignment reviews basic mathematical tools you will use throughout

More information

Math 324 Summer 2012 Elementary Number Theory Notes on Mathematical Induction

Math 324 Summer 2012 Elementary Number Theory Notes on Mathematical Induction Math 4 Summer 01 Elementary Number Theory Notes on Mathematical Induction Principle of Mathematical Induction Recall the following axiom for the set of integers. Well-Ordering Axiom for the Integers If

More information

Homework For each of the following matrices, find the minimal polynomial and determine whether the matrix is diagonalizable.

Homework For each of the following matrices, find the minimal polynomial and determine whether the matrix is diagonalizable. Math 5327 Fall 2018 Homework 7 1. For each of the following matrices, find the minimal polynomial and determine whether the matrix is diagonalizable. 3 1 0 (a) A = 1 2 0 1 1 0 x 3 1 0 Solution: 1 x 2 0

More information

Key words. n-d systems, free directions, restriction to 1-D subspace, intersection ideal.

Key words. n-d systems, free directions, restriction to 1-D subspace, intersection ideal. ALGEBRAIC CHARACTERIZATION OF FREE DIRECTIONS OF SCALAR n-d AUTONOMOUS SYSTEMS DEBASATTAM PAL AND HARISH K PILLAI Abstract In this paper, restriction of scalar n-d systems to 1-D subspaces has been considered

More information

Chapter 6 Inner product spaces

Chapter 6 Inner product spaces Chapter 6 Inner product spaces 6.1 Inner products and norms Definition 1 Let V be a vector space over F. An inner product on V is a function, : V V F such that the following conditions hold. x+z,y = x,y

More information

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1) EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily

More information

Mathematics Department Stanford University Math 61CM/DM Inner products

Mathematics Department Stanford University Math 61CM/DM Inner products Mathematics Department Stanford University Math 61CM/DM Inner products Recall the definition of an inner product space; see Appendix A.8 of the textbook. Definition 1 An inner product space V is a vector

More information

A NICE PROOF OF FARKAS LEMMA

A NICE PROOF OF FARKAS LEMMA A NICE PROOF OF FARKAS LEMMA DANIEL VICTOR TAUSK Abstract. The goal of this short note is to present a nice proof of Farkas Lemma which states that if C is the convex cone spanned by a finite set and if

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

Extra Problems for Math 2050 Linear Algebra I

Extra Problems for Math 2050 Linear Algebra I Extra Problems for Math 5 Linear Algebra I Find the vector AB and illustrate with a picture if A = (,) and B = (,4) Find B, given A = (,4) and [ AB = A = (,4) and [ AB = 8 If possible, express x = 7 as

More information

September Math Course: First Order Derivative

September Math Course: First Order Derivative September Math Course: First Order Derivative Arina Nikandrova Functions Function y = f (x), where x is either be a scalar or a vector of several variables (x,..., x n ), can be thought of as a rule which

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

0.1 Rational Canonical Forms

0.1 Rational Canonical Forms We have already seen that it is useful and simpler to study linear systems using matrices. But matrices are themselves cumbersome, as they are stuffed with many entries, and it turns out that it s best

More information

A classification of sharp tridiagonal pairs. Tatsuro Ito, Kazumasa Nomura, Paul Terwilliger

A classification of sharp tridiagonal pairs. Tatsuro Ito, Kazumasa Nomura, Paul Terwilliger Tatsuro Ito Kazumasa Nomura Paul Terwilliger Overview This talk concerns a linear algebraic object called a tridiagonal pair. We will describe its features such as the eigenvalues, dual eigenvalues, shape,

More information

Serge Ballif January 18, 2008

Serge Ballif January 18, 2008 ballif@math.psu.edu The Pennsylvania State University January 18, 2008 Outline Rings Division Rings Noncommutative Rings s Roots of Rings Definition A ring R is a set toger with two binary operations +

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

Matrix Multiplication

Matrix Multiplication 228 hapter Three Maps etween Spaces IV2 Matrix Multiplication After representing addition and scalar multiplication of linear maps in the prior subsection, the natural next operation to consider is function

More information

Integral Similarity and Commutators of Integral Matrices Thomas J. Laffey and Robert Reams (University College Dublin, Ireland) Abstract

Integral Similarity and Commutators of Integral Matrices Thomas J. Laffey and Robert Reams (University College Dublin, Ireland) Abstract Integral Similarity and Commutators of Integral Matrices Thomas J. Laffey and Robert Reams (University College Dublin, Ireland) Abstract Let F be a field, M n (F ) the algebra of n n matrices over F and

More information

Lecture Notes 1: Vector spaces

Lecture Notes 1: Vector spaces Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector

More information

A linear algebra proof of the fundamental theorem of algebra

A linear algebra proof of the fundamental theorem of algebra A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional

More information

LS.1 Review of Linear Algebra

LS.1 Review of Linear Algebra LS. LINEAR SYSTEMS LS.1 Review of Linear Algebra In these notes, we will investigate a way of handling a linear system of ODE s directly, instead of using elimination to reduce it to a single higher-order

More information

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M.

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M. 5 Vector fields Last updated: March 12, 2012. 5.1 Definition and general properties We first need to define what a vector field is. Definition 5.1. A vector field v on a manifold M is map M T M such that

More information

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space

Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................

More information

MATH 324 Summer 2011 Elementary Number Theory. Notes on Mathematical Induction. Recall the following axiom for the set of integers.

MATH 324 Summer 2011 Elementary Number Theory. Notes on Mathematical Induction. Recall the following axiom for the set of integers. MATH 4 Summer 011 Elementary Number Theory Notes on Mathematical Induction Principle of Mathematical Induction Recall the following axiom for the set of integers. Well-Ordering Axiom for the Integers If

More information

Matrix Theory. A.Holst, V.Ufnarovski

Matrix Theory. A.Holst, V.Ufnarovski Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

First we introduce the sets that are going to serve as the generalizations of the scalars.

First we introduce the sets that are going to serve as the generalizations of the scalars. Contents 1 Fields...................................... 2 2 Vector spaces.................................. 4 3 Matrices..................................... 7 4 Linear systems and matrices..........................

More information

Lacunary Polynomials over Finite Fields Course notes

Lacunary Polynomials over Finite Fields Course notes Lacunary Polynomials over Finite Fields Course notes Javier Herranz Abstract This is a summary of the course Lacunary Polynomials over Finite Fields, given by Simeon Ball, from the University of London,

More information

Powers and Products of Monomial Ideals

Powers and Products of Monomial Ideals U.U.D.M. Project Report 6: Powers and Products of Monomial Ideals Melker Epstein Examensarbete i matematik, 5 hp Handledare: Veronica Crispin Quinonez Examinator: Jörgen Östensson Juni 6 Department of

More information

Matrix Algebra: Introduction

Matrix Algebra: Introduction Matrix Algebra: Introduction Matrices and Determinants were discovered and developed in the eighteenth and nineteenth centuries. Initially, their development dealt with transformation of geometric objects

More information

Real Symmetric Matrices and Semidefinite Programming

Real Symmetric Matrices and Semidefinite Programming Real Symmetric Matrices and Semidefinite Programming Tatsiana Maskalevich Abstract Symmetric real matrices attain an important property stating that all their eigenvalues are real. This gives rise to many

More information

Convex Noncommutative Polynomials Have Degree Two or Less

Convex Noncommutative Polynomials Have Degree Two or Less Convex Noncommutative Polynomials Have Degree Two or Less J William Helton Scott McCullough Abstract A polynomial p (with real coefficients) in noncommutative variables is matrix convex provided p(tx +(1

More information

Linear algebra. S. Richard

Linear algebra. S. Richard Linear algebra S. Richard Fall Semester 2014 and Spring Semester 2015 2 Contents Introduction 5 0.1 Motivation.................................. 5 1 Geometric setting 7 1.1 The Euclidean space R n..........................

More information

Vector Spaces. Addition : R n R n R n Scalar multiplication : R R n R n.

Vector Spaces. Addition : R n R n R n Scalar multiplication : R R n R n. Vector Spaces Definition: The usual addition and scalar multiplication of n-tuples x = (x 1,..., x n ) R n (also called vectors) are the addition and scalar multiplication operations defined component-wise:

More information