A Fortran 77 package for column reduction of polynomial matrices


A.J. Geurts
Department of Mathematics and Computing Sciences, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands

C. Praagman (corresponding author)
Department of Econometrics, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands, cpraagman@eco.rug.nl

July 10, …

Abstract

A polynomial matrix is called column reduced if its column degrees are as low as possible in some sense. Two polynomial matrices P and R are called unimodularly equivalent if there exists a unimodular polynomial matrix U such that PU = R. Every polynomial matrix is unimodularly equivalent to a column reduced polynomial matrix. In this paper a subroutine is described that takes a polynomial matrix P as input and yields on output a unimodular matrix U and a column reduced matrix R such that PU = R; actually PU − R is near zero. The subroutine is based on an algorithm, described in a paper by Neven and Praagman. The subroutine has been tested with a number of examples on different computers, with comparable results. The performance of the subroutine on every example tried is satisfactory in the sense that the magnitude of the elements of the residual matrix PU − R is about ‖P‖ ‖U‖ EPS, where EPS is the machine precision. To obtain these results a tolerance, used to determine the rank of some (sub)matrices, has to be set properly. The influence of this tolerance on the performance of the algorithm is discussed, from which a guideline for the usage of the subroutine is derived.

AMS subject classification: 65F30, 15A23, 93B10, 15A22, 15A24, 15A33, 93B17, 93B25.
CR subject classification: Algorithms, Reliability, F.2.1 (Computations on matrices, Computations on polynomials), G.1.3 (Linear systems), G.4 (Algorithm analysis).
Keywords: column reduction, Fortran subroutine.

1 Introduction

Polynomial matrices, i.e., matrices of which the elements are polynomials (or, equivalently, polynomials with matrix coefficients), play a dominant role in modern systems theory. Consider for instance a set of n variables w = (w_1, …, w_n). Suppose that these variables are related by a set of m higher order linear differential equations with constant coefficients:

$$P_0 w + P_1 \frac{d}{dt} w + P_2 \frac{d^2}{dt^2} w + \cdots + P_q \frac{d^q}{dt^q} w = 0, \qquad (1)$$

where P_k, k = 0, …, q, are m × n constant matrices. Define the m × n polynomial matrix P by

$$P(s) = \sum_{k=0}^{q} P_k s^k,$$

then this set of equations (1) can be written concisely as

$$P\Big(\frac{d}{dt}\Big)w = 0. \qquad (2)$$

In the behavioral approach to systems theory equation (2), together with a specification of the function class to which the solutions should belong, describes a system. For example (C^∞(R_+), P(d/dt)w = 0) describes a system. The behavior of this system consists of all w, of which the components are C^∞ functions on [0, ∞), that satisfy equation (2). Many properties of the behavior of the system can be derived immediately from properties of the polynomial matrix P (see Willems [17, 18, 19]).

The information contained in P can be redundant, in which case the polynomial matrix can be replaced by another polynomial matrix of lower degree or smaller size without changing the set of trajectories satisfying the equations. For several reasons a minimal description can be convenient. This could be purely a matter of simplicity, but there are also cases in which a minimal description has additional value, for instance if one looks for state space or input-output representations of the system. Depending on the goal one has in mind, minimality can have different meanings, but in important situations a row or column reduced description supplies a minimal description. For instance, in the work of Willems on behaviors one finds the search for a row reduced P that describes the same behavior as equation (2). Other examples that display the importance of column or row reduced polynomial matrices may be found in articles on inversion of polynomial matrices [7, 10, 24], or in papers on a polynomial matrix version of the Euclidean algorithm [4, 16, 21, 23, 25], where a standing assumption is that one of the matrices involved is column reduced. Since P is row reduced if and only if its transpose P^T is column reduced, the problems of row reduction and column reduction are equivalent.

In this paper a subroutine is presented that determines a column reduced matrix unimodularly equivalent to a given polynomial matrix.

The algorithm underlying the subroutine has been developed in several stages: the part of the algorithm in which a minimal basis of the right null space of a polynomial matrix is calculated is an adaptation of the algorithm described in Beelen [2]. The original idea of the algorithm has been described in Beelen, van den Hurk, Praagman [3], and successive improvements have been reported in Neven [11], Praagman [13, 14]. The paper by Neven and Praagman [12] gives the algorithm in its most general form, exploiting the special structure of the problem in the calculation of the kernel of a polynomial matrix. The subroutine we describe here is an implementation of the algorithm in the latter paper. The algorithm was intended to perform well in all cases, but it turned out that the routine has difficulties in some special cases, that in some sense might be described as singular. We give examples in section 8, and discuss these difficulties in detail in section 9. A crucial role is played by a tolerance, TOL, used in rank determining steps, which has to be set properly.

2 Preliminaries

Let us start with introducing some notations and recalling some definitions. Let R[s] denote the ring of polynomials in s with real coefficients, and let R^n[s] be the set of column vectors of length n with elements in R[s]. Clearly R^n[s] is a free module over R[s].

Remark. Recall that, loosely speaking, a set M is called a module over a ring R if any two elements in M can be added, and any element of M can be multiplied by an arbitrary element from R. A set m_1, …, m_r ∈ M is called a set of generators of M if any element m ∈ M can be written as a linear combination of the generators: m = Σ r_i m_i for some r_i ∈ R. The module is free if there exists a set of generators such that the r_i are unique. Note that if R is a field, then a module is just a vector space.

Let R^{m×n}[s] be the ring of m × n matrices with elements in R[s], and let P ∈ R^{m×n}[s]. Then d(P), the degree of P, is defined as the maximum of the degrees of the elements of P, and d_j(P), the j-th column degree of P, as the maximum of the degrees in the j-th column. δ(P) is the array of integers obtained by arranging the column degrees of P in non-decreasing order. The leading column coefficient matrix of P, Γ_c(P), is the constant matrix of which the j-th column is obtained by taking from the j-th column of P the coefficients of the term with degree d_j(P): if P_j(s) = Σ_{k=0}^{d_j(P)} P_{jk} s^k is the j-th column of P, then the j-th column of Γ_c(P) equals Γ_c(P)_j = P_{j,d_j(P)}.

Definition 1. Let P ∈ R^{m×n}[s]. P is called column reduced if there exists a permutation matrix T such that P = (0, P_1)T, where Γ_c(P_1) has full column rank.
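These notions are easy to make concrete. The following is a small NumPy sketch (ours, not part of the package): a polynomial matrix is stored as the array of its coefficient matrices, so that P(s) = Σ_k P[k] s^k; the tolerance argument anticipates the rank decisions discussed later.

```python
import numpy as np

def column_degrees(P, tol=0.0):
    """d_j(P): degree of the j-th column of P; -1 for a zero column."""
    degs = []
    for j in range(P.shape[2]):
        nz = [k for k in range(P.shape[0]) if np.max(np.abs(P[k][:, j])) > tol]
        degs.append(max(nz) if nz else -1)
    return degs

def leading_column_matrix(P, tol=0.0):
    """Gamma_c(P): its j-th column is the coefficient of s**d_j(P) in column j."""
    G = np.zeros((P.shape[1], P.shape[2]))
    for j, d in enumerate(column_degrees(P, tol)):
        if d >= 0:
            G[:, j] = P[d][:, j]
    return G

def is_column_reduced(P, tol=1e-12):
    """Definition 1: Gamma_c of the nonzero columns has full column rank."""
    nz = [j for j, d in enumerate(column_degrees(P, tol)) if d >= 0]
    G = leading_column_matrix(P, tol)[:, nz]
    return np.linalg.matrix_rank(G, tol) == len(nz)
```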

Remark. Note that we do not require Γ_c(P) to be of full column rank. In the literature there is some ambiguity about column properness and column reducedness. We follow here the definition of Willems [17]: P is column proper if Γ_c(P) has full column rank, and column reduced if the conditions in the definition above are satisfied.

A square polynomial matrix U ∈ R^{n×n}[s] is unimodular if det(U) ∈ R∖{0} or, equivalently, if U^{−1} exists and is also polynomial. Two polynomial matrices P ∈ R^{m×n}[s] and R ∈ R^{m×n}[s] are called unimodularly equivalent if there exists a unimodular matrix U ∈ R^{n×n}[s] such that PU = R.

It is well known that every regular polynomial matrix is unimodularly equivalent to a column proper matrix, see Wolovich [20]. Kailath [8] states that the above result can be extended to polynomial matrices of full column rank without changing the proof. In fact the proof in [20] is sufficient to establish that any polynomial matrix is unimodularly equivalent to a column reduced matrix. Furthermore, Wolovich's proof implies immediately that the column degrees of the column reduced polynomial matrix do not exceed those of the original matrix, see Neven and Praagman [12].

Theorem 1. Let P ∈ R^{m×n}[s]. Then there exists a unimodular U ∈ R^{n×n}[s] such that R = PU is column reduced. Furthermore δ(R) ≤ δ(P) totally.

Although the proof of this theorem in the above sources is constructive, it is not suited for practical computations, for reasons explained in a paper by Van Dooren [15]. Example 3 in section 8 illustrates this point. In Neven and Praagman [12] an alternative, constructive proof is given, on which the algorithm underlying the subroutine we describe here is based. The most important ingredient of the algorithm is the calculation of a minimal basis of the right null space of a polynomial matrix associated with P.

Definition 2. Let M be a submodule of R^n[s]. Then Q ∈ R^{n×r}[s] is called a minimal basis of M if Q is column proper and the columns of Q span M.

In the algorithm a minimal basis is calculated for the module ker(P, −I_m) = {v ∈ R^{n+m}[s] | (P, −I_m)v = 0}, see [3]. Here and in the sequel I_m will denote the identity matrix of size m. The first observation is that if (U^T, R^T)^T is such a basis, with U ∈ R^{n×n}[s], then U is unimodular, see [3, 12]. Of course R = PU, but although (U^T, R^T)^T is minimal and hence column reduced, this does not necessarily hold for R. Take for example

$$P(s) = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}.$$

Then

$$\begin{pmatrix} U(s) \\ R(s) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & s \\ 0 & 1 \end{pmatrix}$$

is a minimal basis for ker(P, −I_m), but R (here equal to P itself) is clearly not column reduced.
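The example is easily checked in the storage convention of the sketch above (again our illustration, continuing the earlier code):

```python
# P(s) = [[1, s], [0, 1]] stored as coefficient matrices P[0] + P[1]*s.
P = np.array([[[1., 0.], [0., 1.]],    # coefficient of s**0
              [[0., 1.], [0., 0.]]])   # coefficient of s**1
U = np.array([[[1., 0.], [0., 1.]],    # U = I_2, so R = PU = P
              [[0., 0.], [0., 0.]]])

UR = np.concatenate([U, P], axis=1)    # the stacked basis (U^T, R^T)^T

print(is_column_reduced(UR))           # True: the basis itself is minimal
print(is_column_reduced(P))            # False: R = P is not column reduced
```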

The next observation is that if (U^T, R^T)^T is a minimal basis for ker(P, −I_m), then (U^T, s^b R^T)^T is a basis for ker(s^b P, −I_m), but not necessarily a minimal basis. Especially, if b > d(U), (U^T, s^b R^T)^T is minimal if and only if R is column reduced. On the other hand, for any minimal basis (U_b^T, R_b^T)^T of ker(s^b P, −I_m), R_b is divisible by s^b, and PU_b = s^{−b} R_b. In [12] it is proved that for b > (n−1)d(P), the calculation of a minimal basis of ker(s^b P, −I_m) yields a pair (U_b, R_b) in which R_b is column reduced.

3 Linearization

The calculation of ker(s^b P, −I_m) is done in the spirit of the procedure explained in [2]: we calculate a minimal basis of the kernel of the following linearization of (s^b P, −I_m). Let P be given by P(s) = P_d s^d + P_{d−1} s^{d−1} + ⋯ + P_0. Define the pencil H_b by

$$H_b(s) = sA_b - E_b = \begin{pmatrix}
sP_d & -I_m & & & & \\
sP_{d-1} & sI_m & -I_m & & & \\
\vdots & & \ddots & \ddots & & \\
sP_0 & & & sI_m & -I_m & \\
\vdots & & & & \ddots & \ddots \\
0 & & & & & sI_m & -I_m
\end{pmatrix},$$

where A_b, E_b ∈ R^{m_a × n_a}, with m_a = (d+b)m and n_a = n + m_a. With

$$C_b(s) = \begin{pmatrix}
I_m & & & \\
sI_m & I_m & & \\
\vdots & \ddots & \ddots & \\
s^{b+d-1}I_m & \cdots & sI_m & I_m
\end{pmatrix}$$

we see that

$$C_b(s)H_b(s) = \begin{pmatrix}
sP_d & -I_m & & & \\
s^2P_d + sP_{d-1} & & -I_m & & \\
\vdots & & & \ddots & \\
s^bP(s) & & & & -I_m
\end{pmatrix},$$

so if V is a basis of ker(H_b), then

$$\begin{pmatrix} U \\ R \end{pmatrix} = \begin{pmatrix} I_n & 0 & 0 \\ 0 & 0 & I_m \end{pmatrix} V$$

is a basis for ker(s^b P, −I_m) ([2]).
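For illustration, A_b and E_b can be assembled directly from the coefficients P_0, …, P_d; the following sketch (ours, under the storage convention introduced in section 2) only shows how the blocks sit, it makes no claim about how COLRED organizes its workspace.

```python
def build_pencil(Pcoef, b):
    """Assemble A_b and E_b with H_b(s) = s*A_b - E_b, as displayed above.
    Pcoef[k] is the m x n coefficient matrix of s**k."""
    d, m, n = Pcoef.shape[0] - 1, Pcoef.shape[1], Pcoef.shape[2]
    ma = (d + b) * m                      # row dimension
    na = n + ma                           # column dimension
    A = np.zeros((ma, na))
    E = np.zeros((ma, na))
    for i in range(d + b):                # block row i
        if i <= d:
            A[i*m:(i+1)*m, :n] = Pcoef[d - i]            # sP_d, ..., sP_0
        if i > 0:
            A[i*m:(i+1)*m, n+(i-1)*m:n+i*m] = np.eye(m)  # the sI_m blocks
        E[i*m:(i+1)*m, n+i*m:n+(i+1)*m] = np.eye(m)      # -E gives the -I_m blocks
    return A, E
```

A kernel basis V of H_b then yields (U^T, R^T)^T by selecting the first n and the last m rows of V, exactly as in the selection matrix above.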

In [12] it is proved that δ(V) = δ((U^T, R^T)^T) and that V is minimal if and only if (U^T, R^T)^T is minimal. So the problem is to calculate a minimal basis for ker(H_b).

4 The calculation of a minimal basis

A minimal basis is calculated by transforming the pencil H_b by orthogonal pre- and post-transformations to a form where E_b has not changed and A_b is in upper staircase form.

Definition 3. A matrix A ∈ R^{m_a × n_a} is in upper staircase form if there exists an increasing sequence of integers s_i, 1 ≤ s_1 < s_2 < ⋯ < s_r ≤ n_a, such that A_{i,s_i} ≠ 0 and A_{ij} = 0 if i > r or j < s_i. The elements A_{i,s_i} are called the pivots of the staircase form.

Note that i ≤ s_i and that the submatrix of the first i rows and the first s_i (or more) columns of A is right invertible for 1 ≤ i ≤ r. In [2, 12] it has been shown that there exist orthogonal, symmetric matrices Q_1, Q_2, …, Q_r such that

$$\bar{A}_b = Q_r \cdots Q_1 A_b Q_1^{\langle n\rangle} \cdots Q_r^{\langle n\rangle}$$

is in upper staircase form. The matrices Q_k are elementary Householder reflections, and we define Q_k^{⟨n⟩} = diag(I_n, Q_k). Note that E_b is not changed by this transformation:

$$E_b = Q_r \cdots Q_1 E_b Q_1^{\langle n\rangle} \cdots Q_r^{\langle n\rangle}.$$

We partition Ā_b as follows:

$$\bar{A}_b = \begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1,l+1} & A_{1,l+2} \\
0 & A_{22} & \cdots & A_{2,l+1} & A_{2,l+2} \\
\vdots & & \ddots & & \vdots \\
0 & \cdots & A_{ll} & A_{l,l+1} & A_{l,l+2} \\
0 & \cdots & 0 & 0 & A_{l+1,l+2}
\end{pmatrix},$$

with A_{jj} ∈ R^{m_j × m_{j−1}} right invertible and in upper staircase form, j = 1, …, l, and A_{j,j+1} a square matrix, for j = 1, …, l+1. We take m_0 = n. Then the dimensions m_j, j = 1, …, l, are uniquely determined and

$$\bar{H}_b = s\bar{A}_b - E_b = \begin{pmatrix}
sA_{11} & sA_{12}-I & sA_{13} & \cdots & sA_{1,l+2} \\
0 & sA_{22} & sA_{23}-I & \cdots & sA_{2,l+2} \\
\vdots & & \ddots & \ddots & \vdots \\
0 & \cdots & sA_{ll} & sA_{l,l+1}-I & sA_{l,l+2} \\
0 & \cdots & 0 & 0 & sA_{l+1,l+2}-I
\end{pmatrix}.$$
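Definition 3 is mechanical to test. The following small checker (our sketch, continuing the earlier code) returns the pivot columns s_1 < s_2 < ⋯ of a matrix in upper staircase form, or reports failure:

```python
def staircase_pivots(A, tol=0.0):
    """Return [s_1, s_2, ...] (0-based) if A is in upper staircase form
    within the threshold tol, and None otherwise."""
    pivots, last, zero_seen = [], -1, False
    for row in np.abs(A) > tol:            # boolean pattern, row by row
        nz = np.flatnonzero(row)
        if nz.size == 0:
            zero_seen = True               # only trailing rows may be zero
        elif zero_seen or nz[0] <= last:
            return None                    # pivots must strictly move right
        else:
            pivots.append(int(nz[0]))
            last = nz[0]
    return pivots
```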

Let Ã_b be the submatrix of A_b^{(k−1)} = Q_{k−1} ⋯ Q_1 A_b Q_1^{⟨n⟩} ⋯ Q_{k−1}^{⟨n⟩} obtained by deleting the first k−1 rows, and let s_k be the column index of the first nonzero column in Ã_b. Then Q_k transforms this (sub)column into the first unit vector and leaves the first k−1 rows and the first s_k − 1 columns of A_b^{(k−1)} invariant. Consequently, post-multiplication with Q_k^{⟨n⟩} leaves the first n+k−1 columns of A_b^{(k−1)} unaltered.

Because of the staircase form of H̄_b it is easy to see that the equation H̄_b y = 0 has m_{i−1} − m_i independent solutions of the form

$$y(s) = \begin{pmatrix}
y_{11} & y_{12} & \cdots & y_{1i} \\
0 & y_{22} & \cdots & y_{2i} \\
\vdots & & \ddots & \vdots \\
0 & \cdots & 0 & y_{ii} \\
0 & \cdots & \cdots & 0 \\
\vdots & & & \vdots
\end{pmatrix}
\begin{pmatrix} 1 \\ s \\ \vdots \\ s^{i-1} \end{pmatrix},$$

where y_{ii} ∈ R^{m_{i−1}} is a null vector of A_{ii}, and y_{jk} ∈ R^{m_{j−1}}, k = j, …, i. It is clear that v = Q_1^{⟨n⟩} ⋯ Q_r^{⟨n⟩} y is then a null vector of H_b, and taking the top and bottom part of v yields a column of U_b and R_b, respectively, of degree i−1. Note that this implies that l ≤ b+d+1, since (I_n, s^b P^T)^T is a basis of ker(s^b P, −I_m) and the degrees of the minimal bases of ker(H_b) and ker(s^b P, −I_m) are the same, see the end of section 3.

5 Increasing b

The computational effort to calculate a minimal basis for ker(s^b P, −I_m) increases quickly with the growth of b. From experiments, however, we may assume that in many cases a small b already leads to success. Therefore the algorithm starts with b = 1 and increases b by one until a column reduced R_b has been found. With the transition from b to b+1 the computations need not start from scratch, as we will explain in this section.

The transformation of A_b into Ā_b is split up into steps, where in the j-th step the diagonal block matrix A_{jj} is formed, j = 1, …, l. Let μ_j denote the row index of the first row of A_{jj}. Then A_{jj} is formed by the Householder reflections Q_{μ_j}, …, Q_{μ_{j+1}−1}. Let N_k ∈ R^{m×(n+(d+k)m)} denote the matrix (0, …, 0, I_m). Then

$$A_{b+1} = \begin{pmatrix} A_b & 0 \\ N_b & 0 \end{pmatrix} = \begin{pmatrix} A_1 & 0 & 0 \\ N_1 & 0 & 0 \\ 0 & I_{(b-1)m} & 0 \end{pmatrix}.$$

Observe that the first n columns of A_{b+1} are just the first n columns of A_1 augmented with zeros.
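The bordering relation for A_{b+1} can be verified numerically with the pencil builder sketched in section 3 (again our own illustration, on an arbitrary test matrix):

```python
rng = np.random.default_rng(0)
Pc = rng.standard_normal((3, 2, 2))        # a random 2x2 P of degree d = 2
d, m, n, b = 2, 2, 2, 3

A_b, _  = build_pencil(Pc, b)
A_b1, _ = build_pencil(Pc, b + 1)

N_b = np.zeros((m, n + (d + b) * m))       # N_b = (0, ..., 0, I_m)
N_b[:, -m:] = np.eye(m)

top    = np.hstack([A_b, np.zeros((A_b.shape[0], m))])
bottom = np.hstack([N_b, np.zeros((m, m))])
assert np.allclose(A_b1, np.vstack([top, bottom]))
```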

Since post-multiplication with any Q^{⟨n⟩} does not affect the first n columns, clearly the reflection vectors of the first μ_2 − 1 Householder reflections of A_{b+1} (the ones involved in the computation of A_11) are the reflection vectors of the first μ_2 − 1 Householder reflections of A_1, augmented with zeros. Let K_{1;1} = Q_{μ_2−1} ⋯ Q_1 be the orthogonal transformation of the first step of the transformation for A_1. Then K_{1;b+1} = diag(K_{1;1}, I_{bm}) is the corresponding transformation for A_{b+1}, and the first step can be described by

$$K_{1;b+1} A_{b+1} K_{1;b+1}^{\langle n\rangle} = \begin{pmatrix} K_{1;1}A_1K_{1;1}^{\langle n\rangle} & 0 & 0 \\ N_1K_{1;1}^{\langle n\rangle} & 0 & 0 \\ 0 & I_{(b-1)m} & 0 \end{pmatrix} = \begin{pmatrix} K_{1;2}A_2K_{1;2}^{\langle n\rangle} & 0 & 0 \\ N_2 & 0 & 0 \\ 0 & I_{(b-2)m} & 0 \end{pmatrix},$$

where K_{1;j}^{⟨n⟩} has the obvious meaning. From this we see that after the first step the second block column, of width μ_2 − 1, i.e. the block column from which A_22 will be acquired, is exactly the corresponding block column of A_2 after the first step, augmented with zeros, if b ≥ 2. In the second step, post-multiplication with the Householder reflections Q_k^{⟨n⟩}, for k = μ_2, …, μ_3 − 1, does not affect the first n + μ_2 − 1 columns. Therefore the argumentation for the first step applies, mutatis mutandis, to the second step, if b ≥ 3. As a consequence, it can be concluded by induction that the transformations for the j-th step for A_{b+1} and A_b are related by K_{j;b+1} = diag(K_{j;b}, I_m), j = 1, …, b, and that we can start the algorithm for A_{b+1} with A_b transformed by K_{j;b}, for j = 1, …, b, augmented with N_b K_{b;b}^{⟨n⟩} at the bottom and m zero columns on the right.

6 Description of the algorithm

The algorithm consists of (see the sketch after this list):

- An outer loop in b, running from 1 to (n−1)d+1 at most. The termination criterion is that the calculated R_b is column reduced.
- An intermediate loop in i, running from b to b+d+1 at most, in which A_{ii} is determined and the null vectors of H_b of degree i−1 are calculated. The precondition is that A_{jj} and the null vectors of H_b of degree j−1, for j = 1, …, i−1, have been computed and are available. The termination criterion is that the submatrix of computed columns of R_b is not column reduced, or that all the null vectors of H_b have been found.
- An inner loop in k, running from n + μ_{i−1} to n + μ_i − 1 (μ_0 = 1 − n), indexing the columns of A_{ii}. For each k, either a Householder reflection is generated and applied or a null vector of degree i−1 is computed. If a null vector has been found, then U_b and R_b are extended with one column and R_b is checked for column reducedness. The loop is terminated after k = n + μ_i − 1 or if the extended R_b is not column reduced.
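The overall loop structure can be summarized in a short illustrative sketch (ours, in Python rather than Fortran, continuing the earlier code; `minimal_kernel_basis` is a hypothetical stand-in for the staircase computation of sections 4 and 5, which the actual routine updates incrementally from one b to the next instead of recomputing):

```python
def column_reduce(Pcoef, tol):
    """Outer loop of the algorithm; polynomial matrices are 3-D coefficient
    arrays as in the earlier sketches."""
    d, m, n = Pcoef.shape[0] - 1, Pcoef.shape[1], Pcoef.shape[2]
    for b in range(1, (n - 1) * d + 2):            # b = 1, ..., (n-1)d + 1
        A, E = build_pencil(Pcoef, b)
        V = minimal_kernel_basis(A, E, tol)        # placeholder, see the text
        U  = V[:, :n, :]                           # first n rows of the basis
        Rb = V[:, -m:, :]                          # last m rows: R_b
        R  = Rb[b:]                                # P*U = s**(-b) * R_b
        if is_column_reduced(R, tol):
            return U, R
    raise RuntimeError("unreachable for b > (n-1)d(P)")
```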

The algorithm uses a tolerance to determine the rank of the diagonal blocks A_{ii}, from which the null vectors of H_b, and consequently the computed solution, are derived. An appropriate tolerance is needed to find the solution, but the value of the tolerance does not influence the accuracy of the computed solution.

7 The implementation

The algorithm has been implemented in the subroutine COLRED. The subroutine is written according to the standards of SLICOT, see [22], with a number of auxiliaries. It is based on the linear algebra packages BLAS [9, 5] and LAPACK [1]. The complete code of an earlier (slightly different) version of the routines and an example program are included in a report by the authors [6].

COLRED takes a polynomial matrix P as its input, and yields on output a unimodular matrix U and a column reduced polynomial matrix R such that PU − R is near zero. Rank determination plays an important role in the algorithm, and the user is asked to specify on input a parameter TOL that sets a threshold value below which matrix elements, that are essential in the rank determination procedure, are considered zero. If TOL is set too small, it is replaced by a default value in the subroutine.

Concerning the numerical aspects we like to make the following remark. The algorithm used by the routine involves the construction of a special staircase form of a linearization of (s^b P(s), −I) with pivots considered to be non-zero when they are greater than or equal to TOL. These pivots are then inverted in order to construct the columns of ker(s^b P(s), −I). The user is recommended to choose TOL of the order of the relative error in the elements of P(s). If TOL is chosen too small, then a very small element of insignificant value may be taken as pivot. As a consequence, the correct null vectors, and hence R(s), may not be found. In the case that R(s) has not been found, and in the case that the elements of the computed U(s) and R(s) are large relative to the elements of P(s), the user should consider trying several values of TOL.

8 Examples

In this section we describe a few examples. All examples were run on four computers: a VAX-VMS, a VAX-UNIX, a SUN and a 486 personal computer. The numerical values that we present here were produced on the VAX-VMS computer; its machine precision is … . Unless explicitly mentioned, the examples were run with TOL = 0, forcing the subroutine to take the default value (see section 7) for the tolerance.

The first example is taken from the book of Kailath [8], and has been discussed before in [3] and [12].

Example 1. The polynomial matrix P is given by

$$P(s) = \begin{pmatrix} s^4+6s^3+13s^2+12s+4 & -s^3-4s^2-5s-2 \\ 0 & s+2 \end{pmatrix}.$$

In Kailath [8], page 386, we can find (if we correct a small typo) that PU_0 = R_0, with

$$U_0(s) = \begin{pmatrix} 1 & 0 \\ s+2 & 1 \end{pmatrix}, \qquad R_0(s) = \begin{pmatrix} 0 & -(s^3+4s^2+5s+2) \\ s^2+4s+4 & s+2 \end{pmatrix}.$$

Clearly R_0 is column reduced, and U_0 unimodular. This example was also treated in [3]. The program yields the following solution:

$$U(s) = \begin{pmatrix} \alpha & -\beta s-\gamma \\ \alpha(s+2) & -\beta s^2-\delta s-4\gamma \end{pmatrix}, \qquad
R(s) = \begin{pmatrix} 0 & 2\gamma(s^3+4s^2+5s+2) \\ \alpha(s^2+4s+4) & (-\beta s^2-\delta s-4\gamma)(s+2) \end{pmatrix},$$

with α = …, β = …, γ = … and δ = 2β+γ. If we define ‖P‖ as the maximal absolute value of all coefficients of the polynomial matrix P, and introduce the notation ‖P‖ ≈ 10^p if 10^{p−1} < ‖P‖ < 10^p, then it is easily checked that ‖PU − R‖ ≈ … . Furthermore U is clearly unimodular:

$$U(s) = \begin{pmatrix} 1 & 0 \\ s+2 & 1 \end{pmatrix}\begin{pmatrix} \alpha & -\beta s-\gamma \\ 0 & -2\gamma \end{pmatrix}.$$

This solution has been found without iterations, i.e., for b = 1, and equals the solution found in [3]. As already mentioned in [3], one of the main motivations for the iterative procedure, i.e., starting with b = 1 and increasing b until the solution is found, is the (experimental) observation that in most examples increasing b is not necessary. The following example, also treated in [3, 12], is included to show that sometimes a larger b is required.
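Kailath's hand solution is easily verified in the coefficient-array convention of section 2 (our sketch, continuing the earlier code; `polymat_mul` is ordinary convolution of the coefficient matrices):

```python
def polymat_mul(P, Q):
    """Product of two polynomial matrices stored as coefficient arrays."""
    out = np.zeros((P.shape[0] + Q.shape[0] - 1, P.shape[1], Q.shape[2]))
    for i in range(P.shape[0]):
        for j in range(Q.shape[0]):
            out[i + j] += P[i] @ Q[j]
    return out

def coef(rows, deg):
    """Coefficient array from per-entry lists: rows[i][j][k] is the
    coefficient of s**k in entry (i, j)."""
    P = np.zeros((deg + 1, len(rows), len(rows[0])))
    for i, row in enumerate(rows):
        for j, c in enumerate(row):
            P[:len(c), i, j] = c
    return P

P  = coef([[[4, 12, 13, 6, 1], [-2, -5, -4, -1]], [[0], [2, 1]]], deg=4)
U0 = coef([[[1], [0]], [[2, 1], [1]]], deg=1)
R0 = coef([[[0], [-2, -5, -4, -1]], [[4, 4, 1], [2, 1]]], deg=3)

PU = polymat_mul(P, U0)                 # degree-5 array; its top part is R0
assert np.allclose(PU[:4], R0) and np.allclose(PU[4:], 0)
print(is_column_reduced(R0))            # True
```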

Example 2.

$$P(s) = \begin{pmatrix} -s^4 & -s^2 & s^6+1 \\ -s^2 & -1 & s^4 \\ 1 & 0 & -1 \end{pmatrix}.$$

Note that this matrix is unimodular and hence unimodularly equivalent to a constant, invertible matrix. The program yields no column reduced R for b ≤ 4. For b = 5 the resulting U and R are

$$U(s) = \begin{pmatrix} 1 & -1 & -\alpha s^2 \\ -s^2 & s^4+s^2 & \alpha(-s^6+s^4-1) \\ 0 & 1 & -\alpha s^2 \end{pmatrix}, \qquad
R(s) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & \alpha \\ 1 & -2 & 0 \end{pmatrix},$$

where α = … . The residual matrix satisfies ‖PU − R‖ ≈ … .

The above examples behave very well; in fact they are so regular that also the algorithm based on Wolovich's proof of Theorem 1 (from now on called the Wolovich algorithm) yields reliable answers. In a forthcoming paper we will compare the results of both algorithms to a greater extent. Here we restrict ourselves to two more examples, for which the Wolovich algorithm is likely to meet numerical difficulties.

Example 3. We take for P:

$$P(s) = \begin{pmatrix} s^3+s^2 & -\varepsilon s-1 & 1 \\ -2s^2 & -1 & 1 \\ 3s^2 & -1 & 1 \end{pmatrix},$$

with ε a small parameter. Calculation by hand immediately shows that taking U equal to

$$U(s) = \begin{pmatrix} 1 & 0 & 0 \\ \delta s^2 & \delta & 0 \\ \delta s^2 & \delta & 1 \end{pmatrix},$$

with δ = ε^{−1}, yields an R (equal to PU) given by

$$R(s) = \begin{pmatrix} s^2 & -s & 1 \\ -2s^2 & 0 & 1 \\ 3s^2 & 0 & 1 \end{pmatrix}.$$

Clearly R is column reduced, and U unimodular. Note that U contains large elements as ε is small, a feature that will also be seen in the answers provided by the program. If we take ε = 10^{−2} and set the tolerance to 10^{−14}, COLRED yields:

$$U(s) = \begin{pmatrix} 0 & 0 & 2\beta \\ 0 & 3\alpha\delta & \beta\delta(2s^2+s) \\ \delta & \alpha s & \beta((2\delta-1)s^2+2\delta s) \end{pmatrix},$$

$$R(s) = \begin{pmatrix} \delta & -\alpha(2s+3\delta) & \beta\delta s \\ \delta & \alpha(s-3\delta) & -\beta(5s^2-\delta s) \\ \delta & \alpha(s-3\delta) & \beta(5s^2+\delta s) \end{pmatrix},$$

with α = … and β = … . With a simple additional unimodular transformation the large elements in R can be removed, namely with

$$U_1(s) = \begin{pmatrix} 0 & 0 & 2\beta \\ 0 & 3\alpha\delta & \beta\delta(2s^2+s) \\ 1 & \alpha(s+3\delta) & \beta((2\delta-1)s^2+\delta s) \end{pmatrix} = U(s)\begin{pmatrix} \varepsilon & 3\alpha & -\beta s \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

the corresponding column reduced R becomes

$$R_1(s) = \begin{pmatrix} 1 & -2\alpha s & 0 \\ 1 & \alpha s & -5\beta s^2 \\ 1 & \alpha s & 5\beta s^2 \end{pmatrix}.$$

It is not possible to get rid of the large elements of U as well. For smaller ε the default value of TOL, which is actually …, is too small and the program ends with IERR = 2. To get results the tolerance has to be chosen approximately proportional to ε^{−1}. In all cases ‖PU − R‖ ≈ ‖P‖ ‖U‖ EPS. In the next section we will analyze this example in more detail. An obvious thought is that the occurrence of the small parameter ε is the cause of the problem, but the next example shows that this is not necessarily the case.

Example 4. In this example we take for P:

$$P(s) = \begin{pmatrix} s^3+s^2+2s+1 & \varepsilon s^2+2s+3 & s^2+s+1 & s-1 \\ s-1 & s+2 & 2s^2+s-1 & 2s+1 \\ s+3 & 2s-1 & s^2-2s+1 & s-2 \\ 1 & 1 & 3s+1 & 3 \end{pmatrix},$$

with ε a small parameter. By hand calculation we find a solution PU = R in which the unimodular U has entries involving εs and εs² but no large coefficients, and

$$R(s) = \begin{pmatrix} 3s^2+4s+2 & (4-\varepsilon)s+6 & 2s+1 & s-1 \\ s^2+2s-2 & (-2+\varepsilon)s+4 & -1 & 2s+1 \\ s^2+2s+6 & (4-\varepsilon)s-2 & 1 & s-2 \\ s^2+2 & \varepsilon s + \cdots & 1 & 3 \end{pmatrix}.$$

We observe that neither U nor R contains large elements. It turns out that the same holds for the U and R computed by the routine: for instance, with ε = 10^{−8} the computed U(s) is a polynomial matrix of degree 3 and R(s) one of degree 2, and neither contains large elements (several entries are of size ε). For other choices of ε the results are approximately the same. In all cases the residual matrix satisfies ‖PU − R‖ ≈ … . In the next section we also revisit this example.

We also investigated the sensitivity of the algorithm to perturbations in the data. The conclusion, based on a number of experiments, is that if the tolerance is chosen such that the perturbations lie within the tolerance, then the program retrieves the results of the unperturbed system. This is well in agreement with our expectation. We give one example.

Example 5.

$$P(s) = \begin{pmatrix} s^3+s^2+s & s & \cdot \\ s^3+2s^2+3s & s^2-1 & \cdot \\ s^3+3s^2+s+1 & s & \cdot \end{pmatrix}.$$

Perturbing P with quantities in the order of 10^{−8} leads to the result that the perturbed P is column reduced if the tolerance is less than 10^{−8}. Setting the tolerance to 10^{−7} gives a solution with U(s) of degree 4 and R(s) of degree 2, in accordance with the unperturbed case.

9 Discussion

First we examine example 3 of section 8, namely

$$P_\varepsilon(s) = \begin{pmatrix} s^3+s^2 & -\varepsilon s-1 & 1 \\ -2s^2 & -1 & 1 \\ 3s^2 & -1 & 1 \end{pmatrix}.$$

The U_ε and R_ε that will result from the algorithm, if calculations are performed exactly, are

$$U_\varepsilon(s) = \begin{pmatrix} 0 & 0 & 2\beta \\ 0 & 3\alpha\delta & \beta\delta(2s^2+s) \\ \delta & \alpha s & \beta((2\delta-1)s^2+2\delta s) \end{pmatrix}, \qquad
R_\varepsilon(s) = \begin{pmatrix} \delta & -\alpha(2s+3\delta) & \beta\delta s \\ \delta & \alpha(s-3\delta) & -\beta(5s^2-\delta s) \\ \delta & \alpha(s-3\delta) & \beta(5s^2+\delta s) \end{pmatrix},$$

with α = (1/6)√6, β = … and δ = ε^{−1}. In section 8 we saw that the tolerance for which this result is obtained by the routine is proportional to ε^{−1}. Close examination of the computations reveals that the computed Ā_b (see section 4) gets small pivots, which cause growing numbers in the computation of the right null vectors until overflow occurs, and a breakdown of the process if the tolerance is too small. Scaling of a null vector, which at first sight suggests itself, may suppress the overflow and thereby hide the problem at hand. In this example the effect of scaling is that, if ε tends to zero, then U_ε tends to a singular matrix and R_ε to a constant matrix of rank 1.

Is there any reason to believe that there exists an algorithm which yields an R continuously depending on ε in a neighborhood of 0? Observe that

- det(P_ε)(s) = −5εs³, so P_ε is singular for ε = 0;
- δ(R_ε) = (0, 1, 2) if ε ≠ 0 and δ(R_0) = (−1, 0, 3). (We use the convention that the zero polynomial has degree −1.)

We conclude that the entries of U_ε and R_ε do not depend continuously on ε in a neighborhood of ε = 0. Even stronger: there do not exist families {V_ε}, {S_ε}, continuous in ε = 0, such that for all ε, V_ε is unimodular, S_ε column reduced, and P_ε V_ε = S_ε. This is what we call a singular case.

Example 4, though at first sight similar, is quite different from example 3. Due to the fact that the third column of P minus s times its fourth column equals (2s+1, −1, 1, 1)^T, the term εs² in the element P_12 is not needed to reduce the first column. As a consequence, the elements of U_ε and R_ε depend continuously on ε and no large entries occur.

This is also shown in the solution found by hand calculations. This feature is not recognized by the Wolovich algorithm; that is why we suspect the Wolovich algorithm to behave badly on this type of problems. Perturbation of P_ε, for instance changing the (4,4) entry from 3 to 3+η, destroys this property. The resulting matrix behaves similarly to example 3. To compare examples 3 and 4 we observe that in example 4:

- det(P_{ε,η}) ≢ 0 for all values of ε and η, so this property is not characteristic;
- in the unperturbed case, η = 0, δ(R_ε) = (1, 1, 1, 2) for all ε. If η ≠ 0, then δ(R_ε) = (1, 1, 2, 2) for ε ≠ 0 and (1, 1, 1, 3) if ε = 0.

Here again we conclude that in this case no algorithm can yield U_{ε,η}, R_{ε,η} continuous in ε = 0. So for η = 0 the problem is regular and for η ≠ 0 the problem is singular.

Remark. Singularity in the sense just mentioned may appear in a more hidden form. For instance, if in example 4 the third column is added to the second, resulting in P_12 = (1+ε)s² + 3s + 4, we get a similar behavior depending on the values of η and ε. Though ε in P_12 is likely to be a perturbation, its effect is quite different from the effects of the perturbations in example 5. For perturbations as in example 5 the tolerance should at least be of the magnitude of the uncertainties in the data, to find out whether there is a non-column-reduced polynomial matrix within the range of uncertainty. In cases like example 5 it may be wise to run the algorithm for several values of the tolerance.

10 Conclusions

In this paper we described a subroutine which is an implementation of the algorithm developed by Neven and Praagman [12]. The routine asks for the polynomial matrix P to be reduced, and a tolerance. The tolerance is used for rank determination within the accuracy of the computations. Thus the tolerance influences whether or not the correct solution is found, but does not influence the accuracy of the solution.

We gave five examples. In all cases the subroutine performs satisfactorily, i.e., the computed solution has a residual matrix PU − R that satisfies ‖PU − R‖ < K ‖P‖ ‖U‖ EPS, where EPS is the machine precision and K = ((d(P)+1)m)². Normally the tolerance should be chosen in agreement with the accuracy of the elements of P, with a lower bound (default value) of the order of EPS times K‖P‖. In some cases the tolerance has to be set to a larger value than the default value in order to get significant results. Therefore, in case of failure, or if there is doubt about the correctness of the solution, the user is recommended to run the program with several values of the tolerance.

At this moment we are optimistic about the performance of the routine. The only cases for which we had some difficulties to get the solution were what we called singular cases. As we argued in the last section, the nature of this singularity will frustrate all algorithms. We believe, although we cannot prove it at this moment, that the algorithm is numerically stable in the sense that the computed solution satisfies ‖PU − R‖ < K ‖P‖ ‖U‖ EPS.

Acknowledgements

We would like to thank Herman Willemsen (TUE) for testing the routines on different computers.

References

[1] Anderson, E., et al. LAPACK Users' Guide. SIAM, Philadelphia, 1992.
[2] Beelen, T. New algorithms for computing the Kronecker structure of a pencil with applications to systems and control theory. PhD thesis, Eindhoven University of Technology, 1987.
[3] Beelen, T., van den Hurk, G., and Praagman, C. A new method for computing a column reduced polynomial matrix. Systems and Control Letters 10 (1988).
[4] Codenotti, B., and Lotti, G. A fast algorithm for the division of two polynomial matrices. IEEE Transactions on Automatic Control 34 (1989).
[5] Dongarra, J., Du Croz, J., Hammarling, S., and Hanson, R. An extended set of Fortran Basic Linear Algebra Subprograms. ACM TOMS 14 (1988), 1-17.
[6] Geurts, A., and Praagman, C. A Fortran subroutine for column reduction of polynomial matrices. SOM Research Report 94004, Groningen, and EUT-Report 94-WSK-01, Eindhoven, 1994.
[7] Inouye, I. An algorithm for inverting polynomial matrices. International Journal of Control 30 (1979).
[8] Kailath, T. Linear Systems. Prentice-Hall, 1980.
[9] Lawson, C., Hanson, R., Kincaid, D., and Krogh, F. Basic Linear Algebra Subprograms for Fortran usage. ACM TOMS 5 (1979).
[10] Lin, C.-A., Yang, C.-W., and Hsieh, T.-F. An algorithm for inverting rational matrices. Systems & Control Letters 26 (1996), to appear.
[11] Neven, W. Polynomial methods in systems theory. Master's thesis, Eindhoven University of Technology, 1988.
[12] Neven, W., and Praagman, C. Column reduction of polynomial matrices. Linear Algebra and its Applications (1993). Special issue on Numerical Linear Algebra Methods for Control, Signals and Systems.

[13] Praagman, C. Inputs, outputs and states in the representation of time series. In Analysis and Optimization of Systems (Berlin, 1988), A. Bensoussan and J. Lions, Eds., INRIA, Springer. Lecture Notes in Control and Information Sciences 111.
[14] Praagman, C. Invariants of polynomial matrices. In Proceedings of the First ECC (Grenoble, 1991), I. Landau, Ed., INRIA.
[15] Van Dooren, P. The computation of Kronecker's canonical form of a singular pencil. Linear Algebra and its Applications 27 (1979).
[16] Wang, Q., and Zhou, C. An efficient division algorithm for polynomial matrices. IEEE Transactions on Automatic Control 31 (1986).
[17] Willems, J. From time series to linear systems, Parts 1, 2, 3. Automatica 22/23 (1986/87).
[18] Willems, J. Models for dynamics. Dynamics Reported 2 (1988).
[19] Willems, J. Paradigms and puzzles in the theory of dynamical systems. IEEE Transactions on Automatic Control 36 (1991).
[20] Wolovich, W. Linear Multivariable Systems. Springer Verlag, Berlin, New York, 1978.
[21] Wolovich, W. A division algorithm for polynomial matrices. IEEE Transactions on Automatic Control 29 (1984).
[22] Working Group on Software (WGS). SLICOT: Implementation and Documentation Standards. WGS-report, WGS, Eindhoven/Oxford, May 1990.
[23] Zhang, S. The division of polynomial matrices. IEEE Transactions on Automatic Control 31 (1986).
[24] Zhang, S. Inversion of polynomial matrices. International Journal of Control 46 (1987).
[25] Zhang, S., and Chen, C.-T. An algorithm for the division of two polynomial matrices. IEEE Transactions on Automatic Control 28 (1983).


More information

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms.

On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms. On the application of different numerical methods to obtain null-spaces of polynomial matrices. Part 1: block Toeplitz algorithms. J.C. Zúñiga and D. Henrion Abstract Four different algorithms are designed

More information

Min-Rank Conjecture for Log-Depth Circuits

Min-Rank Conjecture for Log-Depth Circuits Min-Rank Conjecture for Log-Depth Circuits Stasys Jukna a,,1, Georg Schnitger b,1 a Institute of Mathematics and Computer Science, Akademijos 4, LT-80663 Vilnius, Lithuania b University of Frankfurt, Institut

More information

Mathematics. EC / EE / IN / ME / CE. for

Mathematics.   EC / EE / IN / ME / CE. for Mathematics for EC / EE / IN / ME / CE By www.thegateacademy.com Syllabus Syllabus for Mathematics Linear Algebra: Matrix Algebra, Systems of Linear Equations, Eigenvalues and Eigenvectors. Probability

More information

UNDERSTANDING THE DIAGONALIZATION PROBLEM. Roy Skjelnes. 1.- Linear Maps 1.1. Linear maps. A map T : R n R m is a linear map if

UNDERSTANDING THE DIAGONALIZATION PROBLEM. Roy Skjelnes. 1.- Linear Maps 1.1. Linear maps. A map T : R n R m is a linear map if UNDERSTANDING THE DIAGONALIZATION PROBLEM Roy Skjelnes Abstract These notes are additional material to the course B107, given fall 200 The style may appear a bit coarse and consequently the student is

More information

The Determinant: a Means to Calculate Volume

The Determinant: a Means to Calculate Volume The Determinant: a Means to Calculate Volume Bo Peng August 16, 2007 Abstract This paper gives a definition of the determinant and lists many of its well-known properties Volumes of parallelepipeds are

More information

Uniqueness of the Solutions of Some Completion Problems

Uniqueness of the Solutions of Some Completion Problems Uniqueness of the Solutions of Some Completion Problems Chi-Kwong Li and Tom Milligan Abstract We determine the conditions for uniqueness of the solutions of several completion problems including the positive

More information

THE CAYLEY HAMILTON AND FROBENIUS THEOREMS VIA THE LAPLACE TRANSFORM

THE CAYLEY HAMILTON AND FROBENIUS THEOREMS VIA THE LAPLACE TRANSFORM THE CAYLEY HAMILTON AND FROBENIUS THEOREMS VIA THE LAPLACE TRANSFORM WILLIAM A. ADKINS AND MARK G. DAVIDSON Abstract. The Cayley Hamilton theorem on the characteristic polynomial of a matrix A and Frobenius

More information

Split Algorithms and ZW-Factorization for Toeplitz and Toeplitz-plus-Hankel Matrices

Split Algorithms and ZW-Factorization for Toeplitz and Toeplitz-plus-Hankel Matrices Split Algorithms and ZW-Factorization for Toeplitz and Toeplitz-plus-Hanel Matrices Georg Heinig Dept of Math& CompSci Kuwait University POBox 5969 Safat 13060, Kuwait Karla Rost Faultät für Mathemati

More information

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

More information

Chapter 2 Linear Transformations

Chapter 2 Linear Transformations Chapter 2 Linear Transformations Linear Transformations Loosely speaking, a linear transformation is a function from one vector space to another that preserves the vector space operations. Let us be more

More information

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace

An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace An Alternative Proof of Primitivity of Indecomposable Nonnegative Matrices with a Positive Trace Takao Fujimoto Abstract. This research memorandum is aimed at presenting an alternative proof to a well

More information

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT

Math Camp II. Basic Linear Algebra. Yiqing Xu. Aug 26, 2014 MIT Math Camp II Basic Linear Algebra Yiqing Xu MIT Aug 26, 2014 1 Solving Systems of Linear Equations 2 Vectors and Vector Spaces 3 Matrices 4 Least Squares Systems of Linear Equations Definition A linear

More information

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i ) Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.

More information

Applied Numerical Linear Algebra. Lecture 8

Applied Numerical Linear Algebra. Lecture 8 Applied Numerical Linear Algebra. Lecture 8 1/ 45 Perturbation Theory for the Least Squares Problem When A is not square, we define its condition number with respect to the 2-norm to be k 2 (A) σ max (A)/σ

More information

Math 4242 Fall 2016 (Darij Grinberg): homework set 8 due: Wed, 14 Dec b a. Here is the algorithm for diagonalizing a matrix we did in class:

Math 4242 Fall 2016 (Darij Grinberg): homework set 8 due: Wed, 14 Dec b a. Here is the algorithm for diagonalizing a matrix we did in class: Math 4242 Fall 206 homework page Math 4242 Fall 206 Darij Grinberg: homework set 8 due: Wed, 4 Dec 206 Exercise Recall that we defined the multiplication of complex numbers by the rule a, b a 2, b 2 =

More information

Journal of Algebra 226, (2000) doi: /jabr , available online at on. Artin Level Modules.

Journal of Algebra 226, (2000) doi: /jabr , available online at   on. Artin Level Modules. Journal of Algebra 226, 361 374 (2000) doi:10.1006/jabr.1999.8185, available online at http://www.idealibrary.com on Artin Level Modules Mats Boij Department of Mathematics, KTH, S 100 44 Stockholm, Sweden

More information

Geometric Mapping Properties of Semipositive Matrices

Geometric Mapping Properties of Semipositive Matrices Geometric Mapping Properties of Semipositive Matrices M. J. Tsatsomeros Mathematics Department Washington State University Pullman, WA 99164 (tsat@wsu.edu) July 14, 2015 Abstract Semipositive matrices

More information

7. Dimension and Structure.

7. Dimension and Structure. 7. Dimension and Structure 7.1. Basis and Dimension Bases for Subspaces Example 2 The standard unit vectors e 1, e 2,, e n are linearly independent, for if we write (2) in component form, then we obtain

More information