Generic degree structure of the minimal polynomial nullspace basis: a block Toeplitz matrix approach
Bhaskar Ramasubramanian, Swanand R. Khare and Madhu N. Belur

Abstract—This paper formulates the problem of determining the degrees of all polynomial entries in a minimal polynomial basis of a polynomial matrix by using only the degree structure of the given matrix. Using a block Toeplitz matrix structure corresponding to the given polynomial matrix, it is shown that the degrees of the elements in its minimal polynomial basis, in the generic case, depend only on the degree structure of the given matrix, and not explicitly on the polynomial coefficients. The genericity assumption ensures that the numerical issues which arise in nullspace determination for a specific matrix do not come up. When dealing with a specific polynomial matrix instead of the generic case, this method gives an upper bound on the degree structure of its minimal polynomial basis. The Toeplitz structure effectively reduces the problem of computing a right annihilator of a polynomial matrix to the relatively simpler, equivalent problem of computing an annihilator of a constant matrix.

Index Terms—minimal polynomial basis, right annihilator, genericity, degree structure.

I. INTRODUCTION

The problem of computing a minimal polynomial basis of a given polynomial matrix is of importance in systems theory (see, for instance, [Kai80], [Var91], [Kuc79]). In this paper, an approach to determine the degrees of the polynomial entries in each polynomial vector of a minimal polynomial basis of a given polynomial matrix is discussed. The work done in this field so far has involved prior knowledge of the entries of the polynomial matrix, after which algorithms have been developed; most notable among these are [AH09], [AVV05], [DD83], [BV87], [BD88]. In [DD83], [BV87], [BD88], the matrix pencil approach is used, which involves converting the given polynomial matrix into an equivalent matrix pencil. The problem of computing a nullspace basis is solved for this matrix pencil and the results are transformed back to the polynomial matrix case. In [AH09], [AVV05], using the relation between the nullspace of a given polynomial matrix and the kernels of associated Toeplitz matrices, an algorithm to compute a minimal polynomial basis for the nullspace of the given polynomial matrix is proposed. Here, the LQ factorization (see [GL07]) of these Toeplitz matrices is used to obtain their kernels and, in turn, a minimal polynomial basis. The primary concern in these references is numerically robust algorithms, and it is acknowledged there that numerically reliable/stable rank-revealing algorithms are central to these methods.

Bhaskar Ramasubramanian is with the Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA. Swanand R. Khare is with the Faculty of Engineering, University of Alberta, Canada. Madhu N. Belur is with the Department of Electrical Engineering, Indian Institute of Technology Bombay, India. E-mail: belur@ee.iitb.ac.in (corresponding author).

In this paper, we do not pursue the numerical aspects of degree structure determination. Instead, we restrict ourselves to the so-called generic case and propose an algorithm for determining the degree structure that works with only the degrees of the entries of the original polynomial matrix. The next section gives a brief summary of the topics used in the paper, after which the problem is formulated. The following sections discuss the construction of the block Toeplitz matrix and an algorithm for determining the degree structure of the minimal polynomial basis of a given polynomial matrix (see [KPB10b]), which is then modified for the generic case. The paper ends with a few examples which verify the proposed results.

II. PRELIMINARIES AND PROBLEM FORMULATION

This section provides a brief introduction to the well-known notions of a polynomial matrix, genericity of a property (Definition 4) and annihilators of polynomial matrices. The problem to be solved is stated at the end of the section.

A. Polynomial matrices

The commutative ring of polynomials in the indeterminate s with coefficients from the field of real numbers is denoted by R[s]. Then, R^{m×n}[s] denotes the set of matrices with m rows and n columns, each entry being an element of R[s]. In this paper, m < n is assumed unless stated otherwise. The set {q_1(s), …, q_k(s)} ⊂ R^n[s] is a polynomially independent set if, for polynomials α_1(s), …, α_k(s), the polynomial combination Σ_i α_i(s) q_i(s) = 0 implies that α_i(s) = 0 identically, for all i = 1, 2, …, k. The rank, also called the normal rank, of a polynomial matrix R(s) ∈ R^{m×n}[s] is defined as the number of polynomially independent rows of R(s).

B. Minimal polynomial basis

Definition 1: The degree of a polynomial vector v(s) ∈ R^n[s], denoted by d_v, is the maximum among the degrees of its polynomial components. The degree of a polynomial matrix R(s) ∈ R^{m×n}[s] is defined similarly. Given a polynomial matrix R(s) ∈ R^{m×n}[s] of degree d, we can write R(s) as a matrix polynomial:

R(s) = R_0 + R_1 s + R_2 s^2 + ⋯ + R_d s^d.
The (right) nullspace of R(s) is defined as

{q(s) ∈ R^n[s] : R(s) q(s) = 0}.   (1)

For a given polynomial matrix R(s) ∈ R^{m×n}[s] with rank m, any set of n − m polynomially independent vectors satisfying Equation (1) forms a basis for the nullspace.

Definition 2 (Minimal Polynomial Basis): Let R ∈ R^{m×n}[s] be a given polynomial matrix of degree d and rank m. Let q_1, q_2, …, q_{n−m} be polynomially independent vectors in the nullspace of R, arranged in non-decreasing order of their degrees a_1, a_2, …, a_{n−m}. Then the set {q_1, q_2, …, q_{n−m}} is said to be a minimal polynomial basis if, for any other set of n − m polynomially independent vectors from the nullspace with degrees b_1 ≤ b_2 ≤ ⋯ ≤ b_{n−m}, it turns out that a_i ≤ b_i for i = 1, 2, …, n − m. The minimal polynomial basis is unique as far as the degrees of the polynomial vectors are concerned. The degrees of the vectors in a minimal polynomial basis are called the right minimal indices or Forney indices of R(s) (see [For75], [MMMM06]).

C. Genericity of parameters

Genericity of parameters is a key assumption made in this paper.

Definition 3: An algebraic variety is the set E_q ⊆ R^n of solutions of a system of polynomial equations. For an algebraic variety S ⊆ R^n, the zero equation in the variables renders the variety trivial (all of R^n). The fact that a non-trivial algebraic variety in R^n (or C^n) is a set of measure zero is used to define genericity of a property.

Definition 4 ([Wil97, Page 344]): Consider a property P in terms of variables p_1, p_2, …, p_n ∈ R. Property P is said to be satisfied generically if the set of values p_1, p_2, …, p_n that do not satisfy P forms a non-trivial algebraic variety in R^n.

Example 5: A simple illustration of genericity is the fact that any two nonzero polynomials are generically coprime. Another instance: when all the entries of a square matrix are chosen generically from R, the matrix is generically non-singular. Left primeness of a wide polynomial matrix is also a generic property.

III. PROBLEM FORMULATION AND SUMMARY OF MAIN RESULTS

Given R ∈ R^{m×n}[s], define the matrix D ∈ Z^{m×n} such that each entry of D is the degree of the corresponding entry of R. Notice that all entries of D are nonnegative; the only exception is when an entry of R is the zero polynomial, in which case the corresponding entry of D is assigned the value −∞. We now define the following sets:

Z_+ := {z ∈ Z : z ≥ 0},   Z̄_+ := Z_+ ∪ {−∞}.

Thus, given R ∈ R^{m×n}[s], one can construct a unique D ∈ Z̄_+^{m×n}.

Definition 3.1: The matrix D constructed from the polynomial matrix R as described above is called the degree structure of the polynomial matrix R.

We now pose the following question.

Problem 3.2: Let R ∈ R^{m×n}[s] be a given polynomial matrix with degree structure D ∈ Z̄_+^{m×n}. Let Q ∈ R^{n×(n−m)}[s] be such that the columns of Q form a minimal polynomial basis for the nullspace of R, and let K ∈ Z̄_+^{n×(n−m)} denote the degree structure of Q. Then, can we determine K from D?

This problem is the same as computing the right minimal indices of a given polynomial matrix, and is addressed in the papers [AVV05], [AH09], [KPB10a]. Given R ∈ R^{m×n}[s], it clearly has a unique degree structure D ∈ Z̄_+^{m×n} associated to it. It is also clear that, given a degree structure D ∈ Z̄_+^{m×n}, there exist many R's of appropriate dimension whose associated degree structure is D. Two given polynomial matrices R_1, R_2 ∈ R^{m×n}[s] are said to be related, denoted R_1 ∼ R_2, if their degree structures are the same. It is easily checked that this is an equivalence relation. The equivalence class of R_1 ∈ R^{m×n}[s] is defined as

D(R_1) := {R ∈ R^{m×n}[s] : R_1 ∼ R}.

We now state another interesting problem, about inferring the degree structure of the minimal polynomial basis of any matrix in a given equivalence class D(R).

Problem 3.3: Let R ∈ R^{m×n}[s] be a given polynomial matrix with degree structure D ∈ Z̄_+^{m×n}. Let R_1, R_2 ∈ D(R), and let Q_1, Q_2 ∈ R^{n×(n−m)}[s] be such that the columns of Q_1 and Q_2 form minimal polynomial bases for R_1 and R_2 respectively. Let K_1 and K_2 denote the degree structures of Q_1 and Q_2 respectively. Then, is K_1 = K_2 generically?

Problem 3.3 is settled in the positive in the following theorem.

Theorem 3.4: Given degree structures K_1, K_2 of minimal polynomial bases corresponding to the same degree structure D, generically K_1 = K_2.

In the following section, we describe the construction of block Toeplitz matrices from a given polynomial matrix, and the relation of the kernels of these Toeplitz matrices to the nullspace of the given polynomial matrix.

IV. CONSTRUCTION OF TOEPLITZ MATRICES

A polynomial matrix R ∈ R^{m×n}[s] of degree d can be written as a matrix polynomial,

R = R_0 + R_1 s + ⋯ + R_d s^d,

where R_i ∈ R^{m×n} for i = 0, 1, …, d. We construct a sequence of real structured matrices from the given polynomial matrix as follows: A_0 is the stack of the coefficient matrices, and A_i appends to it i further copies of the same stack, each shifted down by one more block row:

A_0 = [ R_0 ]      A_i = [ R_0               ]
      [ R_1 ]            [ R_1  R_0          ]
      [  ⋮  ]            [  ⋮   R_1   ⋱      ]
      [ R_d ]            [ R_d   ⋮    ⋱  R_0 ]   (2)
                         [      R_d   ⋱  R_1 ]
                         [            ⋱   ⋮  ]
                         [                R_d]

where the blank entries are zero matrices of size m × n, and A_i ∈ R^{(d+i+1)m × (i+1)n} for i ≥ 0.
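The construction of A_i is mechanical and easy to sketch in code. The helper below is ours (the representation and names are assumptions, not the paper's), and numpy's matrix_rank stands in, purely for illustration, for the numerically reliable rank-revealing step the cited references emphasize. The test matrix R(s) = [s^2 + 1, s − 4, s^3 + s − 1] is a 1 × 3 matrix of degree 3.

```python
import numpy as np

def block_toeplitz(R, i):
    """A_i as in Equation (2): the j-th block column (j = 0, ..., i) holds
    the coefficient stack R_0, ..., R_d shifted down by j block rows."""
    d = len(R) - 1
    m, n = R[0].shape
    A = np.zeros(((d + i + 1) * m, (i + 1) * n))
    for j in range(i + 1):
        for k in range(d + 1):
            A[(j + k) * m:(j + k + 1) * m, j * n:(j + 1) * n] = R[k]
    return A

def nullity(A):
    return A.shape[1] - np.linalg.matrix_rank(A)

# R(s) = [s^2 + 1, s - 4, s^3 + s - 1]: coefficient matrices R_0, ..., R_3
R = [np.array([[1., -4., -1.]]),
     np.array([[0.,  1.,  1.]]),
     np.array([[1.,  0.,  0.]]),
     np.array([[0.,  0.,  1.]])]

print([block_toeplitz(R, i).shape for i in range(3)])  # [(4, 3), (5, 6), (6, 9)]
print([nullity(block_toeplitz(R, i)) for i in range(3)])  # [0, 1, 3]
```

A kernel vector of A_1 stacks the coefficients (v_0, v_1) of a degree-1 nullspace vector of R(s); the two shifted copies of it inside the kernel of A_2 correspond to v and sv, and the remaining kernel vector to a degree-2 basis vector.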
The kernels of these structured matrices are related to the nullspace of the given polynomial matrix. Consider a polynomial vector v of degree i in the nullspace of R. Writing v as a vector polynomial, v = v_0 + v_1 s + ⋯ + v_i s^i, we have

R v = (Σ_{j=0}^{d} R_j s^j) (Σ_{j=0}^{i} v_j s^j) = 0.   (3)

Comparing coefficients of each power of s on both sides of Equation (3), we get A_i v_c = 0, where v_c = [v_0^T v_1^T ⋯ v_i^T]^T ∈ R^{(i+1)n}. Thus, any vector in the kernel of A_i corresponds to a vector of degree at most i in the nullspace of R, and vice versa. Further, given a vector v of degree i in the nullspace of R, the vectors sv, s^2 v, … also belong to the nullspace of R. This fact is reflected in the following observation: for any i ≥ 0, if v_c ∈ R^{(i+1)n} is in the kernel of A_i, then from the structure of these matrices it is clear that [v_c^T 0]^T and [0 v_c^T]^T belong to the kernel of A_{i+1}; these constant vectors are related to v and sv in the nullspace of R, with degrees i and i + 1 respectively. Thus, the problem of computing the nullspace of a polynomial matrix is equivalent to computing kernels of constant matrices (refer to [AH09], [KPB10a] for a detailed analysis).

We now elaborate on the computation of a minimal polynomial nullspace basis for the case of polynomial matrices of size 1 × 3. Let R ∈ R^{1×3}[s] be a given polynomial matrix of degree d. The structured matrices A_i as in Equation (2) are constructed; for each i, A_i ∈ R^{(d+i+1) × 3(i+1)}. The dimensions of the matrices A_i for different i are listed in the following table:

  i    size of A_i
  0    (d + 1) × 3
  1    (d + 2) × 6
  2    (d + 3) × 9
  ⋮          ⋮
  i    (d + i + 1) × 3(i + 1)

We now compute a stage i at which the matrix A_i becomes a wide matrix. In such a case, the following relation is satisfied:

3(i + 1) > d + i + 1,  i.e.,  2i > d − 2.   (4)

For instance, when d = 3, from Equation (4) the matrix A_i is wide for i ≥ 1. In fact, for i = 1, A_1 is of size 5 × 6, and A_2 is of size 6 × 9. Thus, if the matrix A_0 is full rank, then at stage 1 the dimension of the nullspace of A_1 is 1; this vector corresponds to a vector of degree 1 in the nullspace of the polynomial matrix R. Further, the nullspace dimension of A_2 is 3. Then, from the above discussion, it is clear that there is a degree-2 polynomial vector in a minimal polynomial nullspace basis of R. In general, at stage i + 1, where i satisfies the condition in Equation (4), one obtains both the vectors of a minimal polynomial nullspace basis of R; therefore, i + 1 is an upper bound on the highest right minimal index. For more general cases and the algorithm to compute a minimal polynomial nullspace basis, the reader is referred to [AH09], [KPB10a]. These facts are illustrated in the following example.

Example 4.1: Let R = [s^2 + 1   s − 4   s^3 + s − 1] be the given polynomial matrix. Construct A_0 ∈ R^{4×3} as in Equation (2). Note that A_0 is full column rank, while A_1 ∈ R^{5×6} has a 1-dimensional nullspace, spanned by [−1 4 −17 17 1 0]^T. The corresponding degree-1 polynomial vector in a minimal nullspace basis is [−1 + 17s   4 + s   −17]^T. Further, A_2 ∈ R^{6×9} has a 3-dimensional nullspace spanned by vectors v_1, v_2, v_3, where v_1 and v_2 are the above kernel vector padded with three zeros at the bottom and at the top respectively. Clearly, the first two vectors v_1 and v_2 correspond to the degree-1 vector of the minimal polynomial basis; the vector v_3 corresponds to a degree-2 vector in the minimal polynomial nullspace basis of R.

V. DEGREE STRUCTURE OF A MINIMAL POLYNOMIAL NULLSPACE BASIS FOR A MATRIX OF SIZE 1 × 3

In this section, the block Toeplitz approach is used to find the degree structure of the minimal polynomial basis when D ∈ Z̄_+^{1×3}. This leads to a closed form solution for the degree structure of the minimal polynomial basis for a D of the given dimensions, which is presented at the end of the section. Let R ∈ R^{1×3}[s] be a given polynomial matrix with degree structure D = [a b c]. Without loss of generality, we assume that a, b, c are in non-decreasing order; further, it is assumed that none of a, b, c corresponds to the zero polynomial. Let Q ∈ R^{3×2}[s] be such that the columns of Q constitute a minimal polynomial basis for the nullspace of R, and let K be the degree structure of Q; clearly, K is a 3 × 2 matrix. We then want to infer the degree structure K from the degree structure D. In the following subsections,
we discuss the cases when c is even and when c is odd separately. While investigating these cases, it is assumed that b ≥ c/2; the case b < c/2 is dealt with separately.

A. Case 1: when c is even

The matrix A_0 is constructed as outlined in Equation (2). Its structure has certain features, listed below:
(a) A_0 has dimensions (c + 1) × 3.
(b) The first (a + 1) rows of A_0 have all elements nonzero.
(c) The next (b − a) rows correspond to terms of degree greater than a; they have a zero in the first column, while the remaining two columns are nonzero.
(d) The last (c − b) rows have zeros in the first two columns, and only the last column nonzero.

Every subsequent A_i is a matrix of dimension (c + i + 1) × 3(i + 1). The construction is continued up to a stage i at which A_i is a wide matrix; at this juncture, c + i + 1 < 3(i + 1). In fact, c being even ensures the following:

3(i + 1) − (c + i + 1) = 2   (5)
⟹ 2i = c   (6)
⟹ i = c/2.   (7)

It is already known that at this stage the degree of the minimal polynomial basis vectors is i.

Theorem 5.1: Let R ∈ R^{1×3}[s] be a given polynomial matrix with degree structure D = [a b c], where a ≤ b ≤ c and c is even. Let Q ∈ R^{3×2}[s], with degree structure K, be such that the columns of Q constitute a minimal polynomial nullspace basis for R. Then

K = [   c/2       c/2
        c/2       c/2
     b − c/2   b − c/2 ].   (8)

Proof: We construct the matrix A_i as in Equation (2) for the i satisfying Equation (7). As mentioned earlier, A_i is a matrix of size (c + i + 1) × 3(i + 1), consisting of i + 1 shifted copies of the columns of A_0. Consider the last row of A_i. The only nonzero element in this row is in the (c + i + 1, 3(i + 1)) position. This means that any annihilator of the constant matrix A_i necessarily has a zero in the corresponding position(s). This effectively eliminates the last column of A_i, as it gets multiplied by zero. Now consider the second-last row of the original A_i. After the last column has been eliminated, the only nonzero element in this row is in the (c + i, 3i) position; the annihilator has a zero in the corresponding position, which effectively eliminates column number 3i. Continuing in a similar fashion, the modified A_i continues to have only one nonzero element in some row for (c − b) steps, thus effectively eliminating (c − b) columns, at the indices 3(i + 1), 3i, 3(i − 1), and so on. The annihilator in this case has dimensions 3(i + 1) × 2. The rows of this matrix can be partitioned into (i + 1) blocks of 3 rows each, corresponding to the coefficients of the various degree terms of the nullspace polynomials: the first block corresponds to the degree-zero terms, the second block to the coefficients of s, and so on. The positions corresponding to the eliminated columns carry a zero, while the others are nonzero. Now, (c − b) columns have been eliminated from A_i, and on closer inspection the eliminated columns are exactly those that get multiplied by the highest-degree entry c of the specified degree structure D. Thus the degree of the term(s) of K that multiply the entry of degree c equals i − (c − b), while the other terms have degree i. It has already been shown in Equation (7) that i = c/2, completing the proof.

B. Case 2: when c is odd

We construct the matrices A_i as in Equation (2). The construction is continued up to a stage at which A_i is a wide matrix; at this stage,

c + i + 1 < 3(i + 1).   (10)

In fact, c being odd leads to the following. At the i-th stage,

3(i + 1) − (c + i + 1) = 1   (11)
⟹ 2i = c − 1   (12)
⟹ i = (c − 1)/2.   (13)

At this stage, however, only one vector of the kernel has been obtained. To obtain the other vector, A_{i+1} is constructed. At this stage (writing i for the new stage index),

3(i + 1) − (c + i + 1) = 3   (14)
⟹ 2i = c + 1   (15)
⟹ i = (c + 1)/2.   (16)

Theorem 5.2: The degree structure of the minimal polynomial basis is given by

K = [ (c − 1)/2             (c + 1)/2
      (c − 1)/2             (c + 1)/2
      (c − 1)/2 − (c − b)   (c + 1)/2 − (c − b) ].   (17)

Proof: The proof is similar to that of Theorem 5.1.

C. Case 3: when b < c/2

Let c = 2b + k, where k is a positive integer. When c is even, i = c/2 (Equation (7)), and when c is odd, i = (c − 1)/2 (Equation (13)). In each case, i − (c − b) < 0. This means that the minimal polynomial basis has one of its entries equal to the zero polynomial, and the corresponding entry of the degree structure of the minimal polynomial basis is −∞. Once the presence of the zero polynomial in the minimal polynomial basis is confirmed, the problem reduces to finding the degree structure of the minimal polynomial basis of a 1 × 2 matrix. The examples in the following section illustrate the various cases outlined in V-A, V-B and V-C.

D. Examples

Example 5.3: Given D = [0 1 2], we want to find the degree structure K of the right kernel.
1) A_0 is constructed as in Equation (2).
2) A_1 is constructed next; it is a 4 × 6 matrix.
3) A_1 is a wide matrix, so we stop the iteration here. We now have (D_0 + D_1 s + D_2 s^2) K = 0, where D_i corresponds to the coefficients of the degree-i terms of the given polynomial matrix.
4) We need a constant matrix P such that A_1 P = 0. Observing the last row of A_1: for this row to be annihilated, the corresponding element(s) of P must be zero. This effectively eliminates the last column of A_1, as it gets multiplied by zero. The resulting 3 × 5 matrix, obtained after eliminating the last row and last column of A_1, has a 2-dimensional kernel with nonzero elements.
5) Thus P ∈ R^{6×2} is obtained, with zeros only in its sixth row, yielding the degrees of the elements of the right kernel of D as

K = [1 1
     1 1
     0 0].

Example 5.4: Consider D = [0 2 4].
1) In this case, the first wide stage gives A_2, a 7 × 9 matrix.
2) From the structure of the last row of A_2, the last entry of P must be zero. After removing the last row and last column of A_2 and observing the resulting 6 × 8 matrix, it is seen that its last row has only one nonzero element, in the sixth
column, thus forcing the sixth element of P to be zero as well. The residual 5 × 7 matrix has a right annihilator with all elements nonzero.
3) Thus P ∈ R^{9×2} is obtained, yielding

K = [2 2
     2 2
     0 0].

Example 5.5: Consider D = [1 2 3].
1) A_1 is a 5 × 6 matrix. The element of P corresponding to the last column of A_1 must be zero.
2) We get P_1 ∈ R^6, with its sixth entry zero; hence the first column of K is [1 1 0]^T. However, from the given dimensions of D we need two vectors in the right kernel, and A_1 gives us only one. Thus we proceed and construct A_2.
3) A_2 is a 6 × 9 matrix. There are three vectors in the kernel of A_2; two of them are, however, exactly P_1, appropriately padded with zeros. After elimination of the last row of A_2 (which forces the last entry of the kernel vectors to zero), we get the third vector P_2, giving the second column of K as [2 2 1]^T, so that

K = [1 2
     1 2
     0 1].

VI. SATURATION

Given a degree structure D, one can obtain the degree structure K of its minimal polynomial basis as described above. One can use the same algorithm to determine the minimal left indices of K and the corresponding degree structure, say D_1. While the highest value in the row vectors (i.e., the degree of the polynomial vectors) of D and D_1 remains the same, D and D_1 are not necessarily identical. We introduce the notion of saturation in this context.

Definition 6.1: Let D be a given degree structure and K the corresponding degree structure of its minimal polynomial nullspace basis. The degree structure D is said to be saturated if any degree structure D_1, such that K is the corresponding degree structure of its minimal nullspace basis, satisfies D_1 ≤ D, where ≤ is understood component-wise.

Remark 6.2: If the degree structure of the minimal polynomial basis of the polynomial matrix corresponding to D is given by K, then the degree structure of the minimal polynomial basis of the polynomial matrix corresponding to K^T is given by D_1^T (D ↦ K, K^T ↦ D_1^T). In this setting, an unsaturated D implies that there is some freedom to change one or more coefficients from zero to a nonzero value; that is, the degree of certain entries can be raised up to a certain limit while maintaining the same degree structure of the minimal polynomial basis. The case when the given D is a 1 × 3 matrix is discussed in some detail below. The following theorem [KPB10b] is a useful result.

Theorem 6.3: Let D ∈ Z̄_+^{(n−1)×n} be the degree structure of a polynomial matrix R ∈ R^{(n−1)×n}[s]. The degree structure of its minimal polynomial basis is

K = [deg(det(R_1))  deg(det(R_2))  ⋯  deg(det(R_n))]^T,   (18)

where R_i is the minor obtained from R by removing the i-th column.

Proof: The reader is referred to [KPB10b] for the detailed proof. Only Case 1 discussed there is relevant to this paper, as genericity ensures coprimeness of the elements of R and Q.

As mentioned earlier, saturation can be interpreted as the freedom offered to replace zeros by nonzeros in the degree structure D while maintaining that of K.

Proposition 6.4: When D = [a b c] ∈ Z̄_+^{1×3} and c ≤ 2b, the saturated D is given by D_sat = [b b c].

Proof: From V-A and V-B, the degree structure K does not depend on a. Further, the degrees in the first two rows of K are the same. Now, using Theorem 6.3, it can be seen that the degree structure of D_sat is as stated. An alternate method of proving the same result is by constructing the block Toeplitz matrices for K and finding the degree structure of its minimal polynomial basis.
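Theorem 6.3 also gives a direct way to compute generic degree structures in code: for generic coefficients no cancellation occurs, so the degree of the determinant of a minor is the maximum, over permutations, of the sums of the entry degrees. The sketch below rests on that genericity assumption (function names are ours; −∞ is encoded as float('-inf')).

```python
from itertools import permutations

NEG_INF = float("-inf")  # degree of the zero polynomial

def generic_det_degree(D):
    """Generic degree of det of a square degree structure D (list of rows):
    the max over permutations of the summed entry degrees."""
    n = len(D)
    return max(sum(D[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def minor_degrees(D):
    """K of Theorem 6.3 for an (n-1) x n degree structure D: the generic
    degrees deg(det(R_j)) of the minors obtained by deleting column j."""
    cols = len(D[0])
    return [generic_det_degree([[row[c] for c in range(cols) if c != j]
                                for row in D])
            for j in range(cols)]

# Applying this to K^T for the structure K = [[1, 1], [1, 1], [0, 0]]
# (a 3 x 2 degree structure, so K^T is 2 x 3) recovers the saturated
# structure [b b c] = [1, 1, 2] of Proposition 6.4, not an unsaturated
# original such as [0, 1, 2]:
print(minor_degrees([[1, 1, 0], [1, 1, 0]]))  # [1, 1, 2]
```

The max-over-permutations step is exactly where genericity enters: for specific coefficients, cancellations can only lower these degrees, which is the upper-bound statement of the concluding remarks.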
Remark 6.5: It is to be noted that the idea of saturation is based on the premise that the degree structures of the minimal polynomial bases of D and D_sat are identical. The above fact can be used to check the veracity of Proposition 6.4 by raising as many entries of the degree structure D as possible, while ensuring that no change is effected in K. In fact, if the degree structure is a 1 × 3 matrix, we can conclude that it is saturated if its first two entries are equal.

A. Illustrations of saturation

Example 6.6: For D = [0 1 2], D_sat = [1 1 2]. We have not been able to devise a method of determining the degree structure of D_sat for D of higher dimensions by mere inspection.

Example 6.7: Two cases for D ∈ Z̄_+^{1×4} were worked out similarly. Forming the block Toeplitz matrices corresponding to the given D's confirms the correctness of the D_sat matrices so obtained. The presence of the zero polynomial in the minimal polynomial basis could be a possible influence on the degree structure of D_sat.

VII. CONCLUDING REMARKS

It has been shown that the degree structure of the minimal polynomial basis of a given polynomial matrix generically depends only on the degree structure of the given polynomial matrix, and not on the coefficients. While in the generic case the degree structure obtained by the block Toeplitz algorithm discussed here is exact, the same degree structure serves as an upper bound for non-generic (i.e., specific) cases with the same degree structure. The upper bound on the degree structure is particularly helpful in that it limits the dimension of the space in which the solution (the minimal polynomial basis) is sought. The case when the specified degree structure is a 1 × 3 matrix was studied in detail, and a closed form solution was given for the degree structure of its minimal polynomial basis. Saturation of a degree structure was also examined for this case, and this can subsequently be extended to other cases. The key assumption of genericity of parameters ensured that, at all stages, the matrices under consideration had full rank; the results therefore did not have to be modified for cases where the matrices are not full rank or lose rank at some point(s) in the solution space. The methods adopted in this work depend only on the zero-nonzero nature of the coefficients and do not involve extensive numerical computation at any stage. The absence of numerical investigation makes our results generic rather than specific; nevertheless, degree structures obtained using our approach are valid for a dense set of polynomial matrices with the given polynomial degree structure.

ACKNOWLEDGEMENTS

We thank Dr. C. Praagman for making available the Scilab codes (see [Pra11]) to compute the minimal polynomial basis of a given polynomial matrix.

REFERENCES

[AH09] J. C. Zúñiga Anaya and D. Henrion. An improved Toeplitz algorithm for polynomial matrix null-space computation. Applied Mathematics and Computation, 207(1), 2009.
[AVV05] E. N. Antoniou, A. I. G. Vardulakis, and S. Vologiannidis. Numerical computation of minimal polynomial bases: a generalized resultant approach. Linear Algebra and its Applications, 405:264–278, 2005.
[BD88] T. Beelen and P. Van Dooren. A pencil approach for embedding a polynomial matrix into a unimodular matrix. SIAM Journal on Matrix Analysis and Applications, 9(1):77–89, 1988.
[BV87] T. Beelen and G. W. Veltkamp. Numerical computation of a coprime factorization of a transfer function matrix. Systems & Control Letters, 9(4):281–288, 1987.
[DD83] P. Van Dooren and P. Dewilde. The eigenstructure of an arbitrary polynomial matrix: computational aspects. Linear Algebra and its Applications, 50:545–579, 1983.
[For75] G. D. Forney. Minimal bases of rational vector spaces with applications to multivariable linear systems. SIAM Journal on Control, 13:493–520, 1975.
[GL07] G. H. Golub and C. F. Van Loan. Matrix Computations. TRIM, Hindustan Book Agency, 3rd edition, 2007.
[Kai80] T. Kailath. Linear Systems. Prentice-Hall Inc., Englewood Cliffs, NJ, 1980.
[KPB10a] S. R. Khare, H. K. Pillai, and M. N. Belur. Algorithm to compute minimal nullspace basis of a polynomial matrix. In Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems (MTNS), July 2010.
[KPB10b] S. R. Khare, H. K. Pillai, and M. N. Belur. Real radius of controllability for systems described by polynomial matrices: SIMO case. In Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems (MTNS), July 2010.
[Kuc79] V. Kucera. Discrete Linear Control: The Polynomial Equation Approach. Wiley, Chichester, 1979.
[MMMM06] D. S. Mackey, N. Mackey, C. Mehl, and V. Mehrmann. Vector spaces of linearizations for matrix polynomials. SIAM Journal on Matrix Analysis and Applications, 28:971–1004, 2006.
[Pra11] C. Praagman. Scilab code for minimal polynomial basis computation (available upon request), 2011.
[Var91] A. I. G. Vardulakis. Linear Multivariable Control: Algebraic Analysis and Synthesis Methods. John Wiley & Sons, Chichester, 1991.
[Wil97] J. C. Willems. Generic eigenvalue assignability by real memoryless output feedback made simple. In Communications, Computation, Control and Signal Processing, Springer, 1997.
More informationMATRICES. knowledge on matrices Knowledge on matrix operations. Matrix as a tool of solving linear equations with two or three unknowns.
MATRICES After studying this chapter you will acquire the skills in knowledge on matrices Knowledge on matrix operations. Matrix as a tool of solving linear equations with two or three unknowns. List of
More informationNumerical computation of minimal polynomial bases: A generalized resultant approach
Linear Algebra and its Applications 405 (2005) 264 278 wwwelseviercom/locate/laa Numerical computation of minimal polynomial bases: A generalized resultant approach EN Antoniou,1, AIG Vardulakis, S Vologiannidis
More informationCHAPTER 0 PRELIMINARY MATERIAL. Paul Vojta. University of California, Berkeley. 18 February 1998
CHAPTER 0 PRELIMINARY MATERIAL Paul Vojta University of California, Berkeley 18 February 1998 This chapter gives some preliminary material on number theory and algebraic geometry. Section 1 gives basic
More informationChapter 7. Linear Algebra: Matrices, Vectors,
Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.
More informationStationary trajectories, singular Hamiltonian systems and ill-posed Interconnection
Stationary trajectories, singular Hamiltonian systems and ill-posed Interconnection S.C. Jugade, Debasattam Pal, Rachel K. Kalaimani and Madhu N. Belur Department of Electrical Engineering Indian Institute
More informationCa Foscari University of Venice - Department of Management - A.A Luciano Battaia. December 14, 2017
Ca Foscari University of Venice - Department of Management - A.A.27-28 Mathematics Luciano Battaia December 4, 27 Brief summary for second partial - Sample Exercises Two variables functions Here and in
More informationLinearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition
Linearizing Symmetric Matrix Polynomials via Fiedler pencils with Repetition Kyle Curlett Maribel Bueno Cachadina, Advisor March, 2012 Department of Mathematics Abstract Strong linearizations of a matrix
More informationOn the computation of the Jordan canonical form of regular matrix polynomials
On the computation of the Jordan canonical form of regular matrix polynomials G Kalogeropoulos, P Psarrakos 2 and N Karcanias 3 Dedicated to Professor Peter Lancaster on the occasion of his 75th birthday
More informationMATHEMATICS COMPREHENSIVE EXAM: IN-CLASS COMPONENT
MATHEMATICS COMPREHENSIVE EXAM: IN-CLASS COMPONENT The following is the list of questions for the oral exam. At the same time, these questions represent all topics for the written exam. The procedure for
More informationPaul Van Dooren s Index Sum Theorem: To Infinity and Beyond
Paul Van Dooren s Index Sum Theorem: To Infinity and Beyond Froilán M. Dopico Departamento de Matemáticas Universidad Carlos III de Madrid, Spain Colloquium in honor of Paul Van Dooren on becoming Emeritus
More informationGRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.
GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,
More informationCANONICAL LOSSLESS STATE-SPACE SYSTEMS: STAIRCASE FORMS AND THE SCHUR ALGORITHM
CANONICAL LOSSLESS STATE-SPACE SYSTEMS: STAIRCASE FORMS AND THE SCHUR ALGORITHM Ralf L.M. Peeters Bernard Hanzon Martine Olivi Dept. Mathematics, Universiteit Maastricht, P.O. Box 616, 6200 MD Maastricht,
More informationAlgebraic Equations. 2.0 Introduction. Nonsingular versus Singular Sets of Equations. A set of linear algebraic equations looks like this:
Chapter 2. 2.0 Introduction Solution of Linear Algebraic Equations A set of linear algebraic equations looks like this: a 11 x 1 + a 12 x 2 + a 13 x 3 + +a 1N x N =b 1 a 21 x 1 + a 22 x 2 + a 23 x 3 +
More information1. Introduction. Throughout this work we consider n n matrix polynomials with degree k of the form
LINEARIZATIONS OF SINGULAR MATRIX POLYNOMIALS AND THE RECOVERY OF MINIMAL INDICES FERNANDO DE TERÁN, FROILÁN M. DOPICO, AND D. STEVEN MACKEY Abstract. A standard way of dealing with a regular matrix polynomial
More informationPALINDROMIC LINEARIZATIONS OF A MATRIX POLYNOMIAL OF ODD DEGREE OBTAINED FROM FIEDLER PENCILS WITH REPETITION.
PALINDROMIC LINEARIZATIONS OF A MATRIX POLYNOMIAL OF ODD DEGREE OBTAINED FROM FIEDLER PENCILS WITH REPETITION. M.I. BUENO AND S. FURTADO Abstract. Many applications give rise to structured, in particular
More informationModern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur
Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Lecture 02 Groups: Subgroups and homomorphism (Refer Slide Time: 00:13) We looked
More informationA Note on Eigenvalues of Perturbed Hermitian Matrices
A Note on Eigenvalues of Perturbed Hermitian Matrices Chi-Kwong Li Ren-Cang Li July 2004 Let ( H1 E A = E H 2 Abstract and à = ( H1 H 2 be Hermitian matrices with eigenvalues λ 1 λ k and λ 1 λ k, respectively.
More informationPositive systems in the behavioral approach: main issues and recent results
Positive systems in the behavioral approach: main issues and recent results Maria Elena Valcher Dipartimento di Elettronica ed Informatica Università dipadova via Gradenigo 6A 35131 Padova Italy Abstract
More informationOn the adjacency matrix of a block graph
On the adjacency matrix of a block graph R. B. Bapat Stat-Math Unit Indian Statistical Institute, Delhi 7-SJSS Marg, New Delhi 110 016, India. email: rbb@isid.ac.in Souvik Roy Economics and Planning Unit
More informationVector Space Concepts
Vector Space Concepts ECE 174 Introduction to Linear & Nonlinear Optimization Ken Kreutz-Delgado ECE Department, UC San Diego Ken Kreutz-Delgado (UC San Diego) ECE 174 Fall 2016 1 / 25 Vector Space Theory
More informationa s 1.3 Matrix Multiplication. Know how to multiply two matrices and be able to write down the formula
Syllabus for Math 308, Paul Smith Book: Kolman-Hill Chapter 1. Linear Equations and Matrices 1.1 Systems of Linear Equations Definition of a linear equation and a solution to a linear equations. Meaning
More informationKey words. n-d systems, free directions, restriction to 1-D subspace, intersection ideal.
ALGEBRAIC CHARACTERIZATION OF FREE DIRECTIONS OF SCALAR n-d AUTONOMOUS SYSTEMS DEBASATTAM PAL AND HARISH K PILLAI Abstract In this paper, restriction of scalar n-d systems to 1-D subspaces has been considered
More informationCOMMUTING PAIRS AND TRIPLES OF MATRICES AND RELATED VARIETIES
COMMUTING PAIRS AND TRIPLES OF MATRICES AND RELATED VARIETIES ROBERT M. GURALNICK AND B.A. SETHURAMAN Abstract. In this note, we show that the set of all commuting d-tuples of commuting n n matrices that
More informationTHE STABLE EMBEDDING PROBLEM
THE STABLE EMBEDDING PROBLEM R. Zavala Yoé C. Praagman H.L. Trentelman Department of Econometrics, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands Research Institute for Mathematics
More informationIntroduction to Matrices
POLS 704 Introduction to Matrices Introduction to Matrices. The Cast of Characters A matrix is a rectangular array (i.e., a table) of numbers. For example, 2 3 X 4 5 6 (4 3) 7 8 9 0 0 0 Thismatrix,with4rowsand3columns,isoforder
More informationA NOTE ON THE JORDAN CANONICAL FORM
A NOTE ON THE JORDAN CANONICAL FORM H. Azad Department of Mathematics and Statistics King Fahd University of Petroleum & Minerals Dhahran, Saudi Arabia hassanaz@kfupm.edu.sa Abstract A proof of the Jordan
More informationTues Feb Vector spaces and subspaces. Announcements: Warm-up Exercise:
Math 2270-004 Week 7 notes We will not necessarily finish the material from a given day's notes on that day. We may also add or subtract some material as the week progresses, but these notes represent
More informationStat 159/259: Linear Algebra Notes
Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the
More informationMATH 2331 Linear Algebra. Section 2.1 Matrix Operations. Definition: A : m n, B : n p. Example: Compute AB, if possible.
MATH 2331 Linear Algebra Section 2.1 Matrix Operations Definition: A : m n, B : n p ( 1 2 p ) ( 1 2 p ) AB = A b b b = Ab Ab Ab Example: Compute AB, if possible. 1 Row-column rule: i-j-th entry of AB:
More informationRecitation 8: Graphs and Adjacency Matrices
Math 1b TA: Padraic Bartlett Recitation 8: Graphs and Adjacency Matrices Week 8 Caltech 2011 1 Random Question Suppose you take a large triangle XY Z, and divide it up with straight line segments into
More informationChapter 3 Transformations
Chapter 3 Transformations An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Linear Transformations A function is called a linear transformation if 1. for every and 2. for every If we fix the bases
More informationResearch Article Minor Prime Factorization for n-d Polynomial Matrices over Arbitrary Coefficient Field
Complexity, Article ID 6235649, 9 pages https://doi.org/10.1155/2018/6235649 Research Article Minor Prime Factorization for n-d Polynomial Matrices over Arbitrary Coefficient Field Jinwang Liu, Dongmei
More informationMODULE 8 Topics: Null space, range, column space, row space and rank of a matrix
MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix Definition: Let L : V 1 V 2 be a linear operator. The null space N (L) of L is the subspace of V 1 defined by N (L) = {x
More informationComputational Approaches to Finding Irreducible Representations
Computational Approaches to Finding Irreducible Representations Joseph Thomas Research Advisor: Klaus Lux May 16, 2008 Introduction Among the various branches of algebra, linear algebra has the distinctions
More informationSpectra of Semidirect Products of Cyclic Groups
Spectra of Semidirect Products of Cyclic Groups Nathan Fox 1 University of Minnesota-Twin Cities Abstract The spectrum of a graph is the set of eigenvalues of its adjacency matrix A group, together with
More informationA linear algebraic view of partition regular matrices
A linear algebraic view of partition regular matrices Leslie Hogben Jillian McLeod June 7, 3 4 5 6 7 8 9 Abstract Rado showed that a rational matrix is partition regular over N if and only if it satisfies
More informationQueens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.
Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 8 Lecture 8 8.1 Matrices July 22, 2018 We shall study
More informationOn The Belonging Of A Perturbed Vector To A Subspace From A Numerical View Point
Applied Mathematics E-Notes, 7(007), 65-70 c ISSN 1607-510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ On The Belonging Of A Perturbed Vector To A Subspace From A Numerical View
More informationConceptual Questions for Review
Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.
More informationProduct distance matrix of a tree with matrix weights
Product distance matrix of a tree with matrix weights R B Bapat Stat-Math Unit Indian Statistical Institute, Delhi 7-SJSS Marg, New Delhi 110 016, India email: rbb@isidacin Sivaramakrishnan Sivasubramanian
More informationUniversity of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm
University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions
More informationMODEL ANSWERS TO THE FIRST HOMEWORK
MODEL ANSWERS TO THE FIRST HOMEWORK 1. Chapter 4, 1: 2. Suppose that F is a field and that a and b are in F. Suppose that a b = 0, and that b 0. Let c be the inverse of b. Multiplying the equation above
More informationPolynomials, Ideals, and Gröbner Bases
Polynomials, Ideals, and Gröbner Bases Notes by Bernd Sturmfels for the lecture on April 10, 2018, in the IMPRS Ringvorlesung Introduction to Nonlinear Algebra We fix a field K. Some examples of fields
More informationTHESIS. Presented in Partial Fulfillment of the Requirements for the Degree Master of Science in the Graduate School of The Ohio State University
The Hasse-Minkowski Theorem in Two and Three Variables THESIS Presented in Partial Fulfillment of the Requirements for the Degree Master of Science in the Graduate School of The Ohio State University By
More informationMath 121 Homework 4: Notes on Selected Problems
Math 121 Homework 4: Notes on Selected Problems 11.2.9. If W is a subspace of the vector space V stable under the linear transformation (i.e., (W ) W ), show that induces linear transformations W on W
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course
More informationRelation of Pure Minimum Cost Flow Model to Linear Programming
Appendix A Page 1 Relation of Pure Minimum Cost Flow Model to Linear Programming The Network Model The network pure minimum cost flow model has m nodes. The external flows given by the vector b with m
More informationContents. 1 Vectors, Lines and Planes 1. 2 Gaussian Elimination Matrices Vector Spaces and Subspaces 124
Matrices Math 220 Copyright 2016 Pinaki Das This document is freely redistributable under the terms of the GNU Free Documentation License For more information, visit http://wwwgnuorg/copyleft/fdlhtml Contents
More informationLinear Algebra. Min Yan
Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................
More informationDaily Update. Math 290: Elementary Linear Algebra Fall 2018
Daily Update Math 90: Elementary Linear Algebra Fall 08 Lecture 7: Tuesday, December 4 After reviewing the definitions of a linear transformation, and the kernel and range of a linear transformation, we
More informationCODE DECOMPOSITION IN THE ANALYSIS OF A CONVOLUTIONAL CODE
Bol. Soc. Esp. Mat. Apl. n o 42(2008), 183 193 CODE DECOMPOSITION IN THE ANALYSIS OF A CONVOLUTIONAL CODE E. FORNASINI, R. PINTO Department of Information Engineering, University of Padua, 35131 Padova,
More informationSome Notes on Linear Algebra
Some Notes on Linear Algebra prepared for a first course in differential equations Thomas L Scofield Department of Mathematics and Statistics Calvin College 1998 1 The purpose of these notes is to present
More informationALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA
ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND
More informationEXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)
EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily
More information= W z1 + W z2 and W z1 z 2
Math 44 Fall 06 homework page Math 44 Fall 06 Darij Grinberg: homework set 8 due: Wed, 4 Dec 06 [Thanks to Hannah Brand for parts of the solutions] Exercise Recall that we defined the multiplication of
More informationEfficient Algorithms for Order Bases Computation
Efficient Algorithms for Order Bases Computation Wei Zhou and George Labahn Cheriton School of Computer Science University of Waterloo, Waterloo, Ontario, Canada Abstract In this paper we present two algorithms
More informationIntroduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX
Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX September 2007 MSc Sep Intro QT 1 Who are these course for? The September
More informationThis article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and
This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution
More informationNOTES on LINEAR ALGEBRA 1
School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura
More informationUniversity of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam May 25th, 2018
University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam May 25th, 2018 Name: Exam Rules: This exam lasts 4 hours. There are 8 problems.
More informationWhere is matrix multiplication locally open?
Linear Algebra and its Applications 517 (2017) 167 176 Contents lists available at ScienceDirect Linear Algebra and its Applications www.elsevier.com/locate/laa Where is matrix multiplication locally open?
More informationChapter 3. Vector spaces
Chapter 3. Vector spaces Lecture notes for MA1111 P. Karageorgis pete@maths.tcd.ie 1/22 Linear combinations Suppose that v 1,v 2,...,v n and v are vectors in R m. Definition 3.1 Linear combination We say
More informationMinimal indices and minimal bases via filtrations. Mackey, D. Steven. MIMS EPrint:
Minimal indices and minimal bases via filtrations Mackey, D. Steven 2012 MIMS EPrint: 2012.82 Manchester Institute for Mathematical Sciences School of Mathematics The University of Manchester Reports available
More informationFACTORIZATION AND THE PRIMES
I FACTORIZATION AND THE PRIMES 1. The laws of arithmetic The object of the higher arithmetic is to discover and to establish general propositions concerning the natural numbers 1, 2, 3,... of ordinary
More informationCOMMUTATIVE/NONCOMMUTATIVE RANK OF LINEAR MATRICES AND SUBSPACES OF MATRICES OF LOW RANK
Séminaire Lotharingien de Combinatoire 52 (2004), Article B52f COMMUTATIVE/NONCOMMUTATIVE RANK OF LINEAR MATRICES AND SUBSPACES OF MATRICES OF LOW RANK MARC FORTIN AND CHRISTOPHE REUTENAUER Dédié à notre
More informationFor δa E, this motivates the definition of the Bauer-Skeel condition number ([2], [3], [14], [15])
LAA 278, pp.2-32, 998 STRUCTURED PERTURBATIONS AND SYMMETRIC MATRICES SIEGFRIED M. RUMP Abstract. For a given n by n matrix the ratio between the componentwise distance to the nearest singular matrix and
More informationSMITH MCMILLAN FORMS
Appendix B SMITH MCMILLAN FORMS B. Introduction Smith McMillan forms correspond to the underlying structures of natural MIMO transfer-function matrices. The key ideas are summarized below. B.2 Polynomial
More informationM. VAN BAREL Department of Computing Science, K.U.Leuven, Celestijnenlaan 200A, B-3001 Heverlee, Belgium
MATRIX RATIONAL INTERPOLATION WITH POLES AS INTERPOLATION POINTS M. VAN BAREL Department of Computing Science, K.U.Leuven, Celestijnenlaan 200A, B-3001 Heverlee, Belgium B. BECKERMANN Institut für Angewandte
More informationPreface. Figures Figures appearing in the text were prepared using MATLAB R. For product information, please contact:
Linear algebra forms the basis for much of modern mathematics theoretical, applied, and computational. The purpose of this book is to provide a broad and solid foundation for the study of advanced mathematics.
More informationReview of Controllability Results of Dynamical System
IOSR Journal of Mathematics (IOSR-JM) e-issn: 2278-5728, p-issn: 2319-765X. Volume 13, Issue 4 Ver. II (Jul. Aug. 2017), PP 01-05 www.iosrjournals.org Review of Controllability Results of Dynamical System
More informationKey words. Polynomial matrices, Toeplitz matrices, numerical linear algebra, computer-aided control system design.
BLOCK TOEPLITZ ALGORITHMS FOR POLYNOMIAL MATRIX NULL-SPACE COMPUTATION JUAN CARLOS ZÚÑIGA AND DIDIER HENRION Abstract In this paper we present new algorithms to compute the minimal basis of the nullspace
More informationTopic 2 Quiz 2. choice C implies B and B implies C. correct-choice C implies B, but B does not imply C
Topic 1 Quiz 1 text A reduced row-echelon form of a 3 by 4 matrix can have how many leading one s? choice must have 3 choice may have 1, 2, or 3 correct-choice may have 0, 1, 2, or 3 choice may have 0,
More informationTest 3, Linear Algebra
Test 3, Linear Algebra Dr. Adam Graham-Squire, Fall 2017 Name: I pledge that I have neither given nor received any unauthorized assistance on this exam. (signature) DIRECTIONS 1. Don t panic. 2. Show all
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationA Solution of a Tropical Linear Vector Equation
A Solution of a Tropical Linear Vector Equation NIKOLAI KRIVULIN Faculty of Mathematics and Mechanics St. Petersburg State University 28 Universitetsky Ave., St. Petersburg, 198504 RUSSIA nkk@math.spbu.ru
More informationUNDERSTANDING THE DIAGONALIZATION PROBLEM. Roy Skjelnes. 1.- Linear Maps 1.1. Linear maps. A map T : R n R m is a linear map if
UNDERSTANDING THE DIAGONALIZATION PROBLEM Roy Skjelnes Abstract These notes are additional material to the course B107, given fall 200 The style may appear a bit coarse and consequently the student is
More informationLinear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.
POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems
More informationZero controllability in discrete-time structured systems
1 Zero controllability in discrete-time structured systems Jacob van der Woude arxiv:173.8394v1 [math.oc] 24 Mar 217 Abstract In this paper we consider complex dynamical networks modeled by means of state
More informationChapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations
Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2
More informationChapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in
Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column
More informationQR FACTORIZATIONS USING A RESTRICTED SET OF ROTATIONS
QR FACTORIZATIONS USING A RESTRICTED SET OF ROTATIONS DIANNE P. O LEARY AND STEPHEN S. BULLOCK Dedicated to Alan George on the occasion of his 60th birthday Abstract. Any matrix A of dimension m n (m n)
More information