APPROXIMATE GREATEST COMMON DIVISOR OF POLYNOMIALS AND THE STRUCTURED SINGULAR VALUE


G. Halikias, S. Fatouros and N. Karcanias

Control Engineering Research Centre, School of Engineering and Mathematical Sciences, City University, Northampton Square, London EC1V 0HB, UK
E-mail addresses: g.halikias@city.ac.uk, s.fatouros@city.ac.uk, n.karcanias@city.ac.uk

Keywords: Approximate GCD, Sylvester matrix, structured singular value

Abstract

In this note the following problem is considered: Given two monic coprime polynomials a(s) and b(s) with real coefficients, find the smallest (in magnitude) perturbation in their coefficients so that the perturbed polynomials have a common root. It is shown that the problem is equivalent to the calculation of the structured singular value of a matrix, which can be performed using efficient existing techniques of robust control. A simple numerical example illustrates the effectiveness of the method. The generalisation of the method to calculate the approximate greatest common divisor (GCD) of polynomials is finally discussed.

1 Introduction

The computation of the greatest common divisor (GCD) of two polynomials a(s) and b(s) is a non-generic problem, in the sense that a generic pair of polynomials has greatest common divisor equal to one. Thus, the notion of approximate common factors and approximate GCDs has to be introduced for the purpose of effective numerical calculations. In the context of Systems and Control applications, the main motivation for developing non-generic techniques for calculating the GCD arises from the study of almost zeros [3]. The numerical computation of the GCD of polynomials in a robust way has been considered using a variety of methodologies, such as ERES [5], extended ERES [5] and matrix pencil methods [4]. The main characteristic of all these techniques is that they transform exact procedures into their numerical versions.

In this work, we formalise the notion of approximate coprimeness and of the approximate GCD of two polynomials, by considering the minimum-magnitude perturbation in the polynomials' coefficients such that the perturbed polynomials have a common root. The calculation of the minimal perturbation is shown to correspond to the distance of a structured matrix from singularity, or, equivalently, to the calculation of the structured singular value of a matrix [6, 7]. The latter problem has been studied extensively in the context of robust-stability and robust-performance design of control systems, and efficient methods have been developed for its numerical solution. Generalising the definition of the structured singular value by imposing additional rank constraints leads naturally to the formulation of a problem involving the calculation of an approximate GCD of some minimal degree k.

2 Main results

The problem considered in this paper is the following:

Problem 1: Let a(s) and b(s) be two monic coprime polynomials with real coefficients, such that deg a(s) = m and deg b(s) = n, where m ≥ n. What is the smallest real perturbation (in magnitude) in the coefficients of a(s) and b(s) so that the perturbed polynomials have a common root?
Formally, write the nominal polynomials as:

a_0(s) = s^m + α_{m−1} s^{m−1} + α_{m−2} s^{m−2} + ⋯ + α_0   (1)
b_0(s) = s^n + β_{n−1} s^{n−1} + β_{n−2} s^{n−2} + ⋯ + β_0   (2)

and assume that the coefficients {α_i, i = 0, 1, ..., m−1} and {β_i, i = 0, 1, ..., n−1} are subjected to real perturbations δ_0, δ_1, ..., δ_{m−1} and ε_0, ε_1, ..., ε_{n−1}, respectively, i.e. the perturbed polynomials are:

a(s; δ_0, ..., δ_{m−1}) = s^m + (α_{m−1} + δ_{m−1}) s^{m−1} + ⋯ + (α_0 + δ_0)
b(s; ε_0, ..., ε_{n−1}) = s^n + (β_{n−1} + ε_{n−1}) s^{n−1} + ⋯ + (β_0 + ε_0)

respectively. Let also:

γ = max{ |δ_0|, |δ_1|, ..., |δ_{m−1}|, |ε_0|, |ε_1|, ..., |ε_{n−1}| }   (3)

Then Problem 1 is equivalent to: minimise γ such that a(s; δ_0, ..., δ_{m−1}) and b(s; ε_0, ..., ε_{n−1}) have a common root; the minimal value of γ is denoted γ_0. In the sequel, it is shown that Problem 1 is equivalent to the calculation of the (real) structured singular value of an appropriate matrix, which can be performed efficiently using existing algorithms.
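As an added illustration (not part of the original paper; the helper names and the toy data are ours), the short Python sketch below builds the perturbed polynomials of Problem 1 from given perturbation vectors, evaluates γ of equation (3), and performs a crude numerical test for a common root by comparing the two root sets.

```python
import numpy as np

def perturbed_pair(alpha, beta, delta, eps):
    """Coefficient vectors (highest power first) of a(s; delta) and b(s; eps)
    for monic nominal polynomials with low-order coefficients
    alpha = [a_0, ..., a_{m-1}] and beta = [b_0, ..., b_{n-1}]."""
    a = np.concatenate(([1.0], (np.asarray(alpha) + np.asarray(delta))[::-1]))
    b = np.concatenate(([1.0], (np.asarray(beta) + np.asarray(eps))[::-1]))
    return a, b

def gamma(delta, eps):
    """gamma = maximum magnitude of the coefficient perturbations, eq. (3)."""
    return max(np.max(np.abs(delta)), np.max(np.abs(eps)))

def have_common_root(a, b, tol=1e-8):
    """Crude numerical test: do the root sets of a and b come within tol?"""
    ra, rb = np.roots(a), np.roots(b)
    return min(abs(x - y) for x in ra for y in rb) < tol

# toy usage: a0(s) = (s-1)(s-2), b0(s) = s-3; perturb b0 so its root moves to 2
alpha, beta = [2.0, -3.0], [-3.0]
delta, eps = [0.0, 0.0], [1.0]
a, b = perturbed_pair(alpha, beta, delta, eps)
print(gamma(delta, eps), have_common_root(a, b))   # 1.0 True
```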

Problem 2: (real structured singular value) Let M ∈ R^{q×q} and define the structured set:

D = { diag(δ_1 I_{r_1}, δ_2 I_{r_2}, ..., δ_s I_{r_s}) : δ_i ∈ R, i = 1, 2, ..., s }

where the r_i are positive integers such that Σ_{i=1}^s r_i = q. The structured singular value of M (relative to the structure D) is defined as:

μ_D(M) = [ min{ ‖Δ‖ : Δ ∈ D, det(I_q − MΔ) = 0 } ]^{−1}   (4)

unless no Δ ∈ D makes I_q − MΔ singular, in which case μ_D(M) = 0. The problem here is to calculate μ_D(M) and, provided that μ_D(M) ≠ 0, to find a Δ ∈ D of minimal norm such that det(I_q − MΔ) = 0.

Before stating the equivalence between Problem 1 and Problem 2, we need the following result, which relates the existence of common factors of two polynomials to the rank of their corresponding Sylvester resultant matrix:

Theorem 1: Consider the monic polynomials a(s) and b(s) with deg a(s) = m and deg b(s) = n, where m ≥ n, and let A be their Sylvester resultant matrix,

A = [ 1  α_{m−1}  α_{m−2}   ⋯     α_0                      ]
    [      1      α_{m−1}   ⋯     α_1     α_0              ]
    [               ⋱                        ⋱             ]
    [                    1   α_{m−1}    ⋯          α_0     ]
    [ 1  β_{n−1}  β_{n−2}   ⋯     β_0                      ]
    [      1      β_{n−1}   ⋯     β_1     β_0              ]
    [               ⋱                        ⋱             ]
    [                    1   β_{n−1}    ⋯          β_0     ]

where all unmarked entries are zero, the coefficients of a(s) occupy the first n rows and those of b(s) the last m rows (each row shifted one column to the right with respect to the previous one), so that A ∈ R^{(n+m)×(n+m)}. Then, if φ(s) denotes the GCD of a(s) and b(s), the following properties hold true:

1. Rank(A) = n + m − deg φ(s).
2. The polynomials a(s) and b(s) are coprime if and only if Rank(A) = n + m.
3. The GCD φ(s) is invariant under elementary row operations on A. Furthermore, if we reduce A to its row echelon form, the last non-vanishing row defines the coefficients of φ(s).

Proof: See [1, 5, 2].

Using Theorem 1, the equivalence of Problem 1 and Problem 2 can now be established:

Theorem 2: Problem 1 is equivalent to Problem 2 by defining:

1. The structured set D as:

D = { diag(δ_{m−1} I_n, δ_{m−2} I_n, ..., δ_0 I_n, ε_{n−1} I_m, ε_{n−2} I_m, ..., ε_0 I_m) : δ_i ∈ R, ε_i ∈ R }

i.e. s = m + n, r_i = n for 1 ≤ i ≤ m and r_i = m for m + 1 ≤ i ≤ m + n.

2. M = −Z A^{−1} Θ, where

Θ = [ I_n      ⋯  I_n      O_{n,m}  ⋯  O_{n,m} ]
    [ O_{m,n}  ⋯  O_{m,n}  I_m      ⋯  I_m     ]

(I_n repeated m times and I_m repeated n times), and

Z = [ (Z^0_{nm})^T  (Z^1_{nm})^T  ⋯  (Z^{m−1}_{nm})^T  (Z^0_{mn})^T  ⋯  (Z^{n−1}_{mn})^T ]^T

with Θ ∈ R^{(n+m)×2nm} and Z ∈ R^{2nm×(n+m)}, where

Z^k_{nm} = ( O_{n,k+1}  I_n  O_{n,m−k−1} ),  k = 0, 1, ..., m − 1

(and, analogously, Z^k_{mn} = ( O_{m,k+1}  I_m  O_{m,n−k−1} ), k = 0, 1, ..., n − 1), and A denotes the (non-singular) Sylvester resultant matrix of the nominal polynomials a_0(s) and b_0(s) defined above.

3. With these definitions: γ_0 = 1/μ_D(M), where γ_0 denotes the minimum-magnitude real perturbation in the coefficients of a_0(s) and b_0(s) such that the perturbed polynomials have a common root.

Proof: Since a_0(s) and b_0(s) are assumed coprime, their Sylvester resultant matrix A is non-singular. The Sylvester resultant matrix Â of the perturbed polynomials a(s; δ_0, ..., δ_{m−1}) and b(s; ε_0, ..., ε_{n−1}) can be decomposed as Â = A + E, where E denotes the perturbation matrix:

E = [ 0  δ_{m−1}  δ_{m−2}   ⋯     δ_0                      ]
    [      0      δ_{m−1}   ⋯     δ_1     δ_0              ]
    [               ⋱                        ⋱             ]
    [                    0   δ_{m−1}    ⋯          δ_0     ]
    [ 0  ε_{n−1}  ε_{n−2}   ⋯     ε_0                      ]      (5)
    [      0      ε_{n−1}   ⋯     ε_1     ε_0              ]
    [               ⋱                        ⋱             ]
    [                    0   ε_{n−1}    ⋯          ε_0     ]

Matrix E can now be factored as E = Θ Δ Z, where Θ and Z are defined in part 2 of the theorem above and

Δ = diag(δ_{m−1} I_n, δ_{m−2} I_n, ..., δ_0 I_n, ε_{n−1} I_m, ε_{n−2} I_m, ..., ε_0 I_m).

Clearly Δ ∈ D, i.e. it has a block-diagonal structure with s = m + n, r_i = n for 1 ≤ i ≤ m and r_i = m for m + 1 ≤ i ≤ m + n. Note also that:

γ = max{ |δ_0|, |δ_1|, ..., |δ_{m−1}|, |ε_0|, |ε_1|, ..., |ε_{n−1}| } = ‖Δ‖.

Since the perturbed Sylvester resultant matrix Â loses rank if and only if there is a common factor between a(s; δ_0, ..., δ_{m−1}) and b(s; ε_0, ..., ε_{n−1}), Problem 1 is equivalent to:

min ‖Δ‖ such that det(A + Θ Δ Z) = 0, Δ ∈ D.   (6)
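The construction in Theorem 2 is straightforward to reproduce numerically. The Python sketch below is an illustration added here (function names and test data are ours, and it follows the construction as reconstructed above): it assembles the Sylvester matrix A, the pattern matrices Θ and Z, and a structured Δ from given coefficient perturbations, checks that A + ΘΔZ reproduces the Sylvester matrix of the perturbed pair (i.e. E = ΘΔZ), and forms M = −Z A^{−1} Θ.

```python
import numpy as np

def sylvester(alpha, beta):
    """Sylvester resultant matrix of monic a(s), b(s) with low-order
    coefficients alpha = [a_0, ..., a_{m-1}], beta = [b_0, ..., b_{n-1}]."""
    m, n = len(alpha), len(beta)
    a = np.r_[1.0, np.asarray(alpha, float)[::-1]]   # [1, a_{m-1}, ..., a_0]
    b = np.r_[1.0, np.asarray(beta, float)[::-1]]
    A = np.zeros((n + m, n + m))
    for i in range(n):                 # n shifted rows of a-coefficients
        A[i, i:i + m + 1] = a
    for i in range(m):                 # m shifted rows of b-coefficients
        A[n + i, i:i + n + 1] = b
    return A

def theta_z(m, n):
    """Pattern matrices Theta ((n+m) x 2nm) and Z (2nm x (n+m))."""
    Theta = np.zeros((n + m, 2 * n * m))
    Theta[:n, :n * m] = np.tile(np.eye(n), (1, m))
    Theta[n:, n * m:] = np.tile(np.eye(m), (1, n))
    blocks = []
    for k in range(m):                 # Z^k_{nm} = [0_{n,k+1}  I_n  0_{n,m-k-1}]
        blocks.append(np.hstack([np.zeros((n, k + 1)), np.eye(n),
                                 np.zeros((n, m - k - 1))]))
    for k in range(n):                 # Z^k_{mn} = [0_{m,k+1}  I_m  0_{m,n-k-1}]
        blocks.append(np.hstack([np.zeros((m, k + 1)), np.eye(m),
                                 np.zeros((m, n - k - 1))]))
    return Theta, np.vstack(blocks)

def structured_delta(delta, eps):
    """Delta = diag(d_{m-1} I_n, ..., d_0 I_n, e_{n-1} I_m, ..., e_0 I_m)."""
    m, n = len(delta), len(eps)
    vals = list(delta[::-1]) + list(eps[::-1])
    return np.diag(np.repeat(np.asarray(vals, float), [n] * m + [m] * n))

alpha = [-6.0, 11.0, -6.0]   # a0(s) = s^3 - 6s^2 + 11s - 6, roots {1, 2, 3}
beta = [20.0, -9.0]          # b0(s) = s^2 - 9s + 20, roots {4, 5}: coprime pair
delta, eps = [0.01, -0.02, 0.03], [-0.04, 0.05]
A = sylvester(alpha, beta)
A_hat = sylvester(np.add(alpha, delta), np.add(beta, eps))
Theta, Z = theta_z(len(alpha), len(beta))
Delta = structured_delta(delta, eps)
print(np.allclose(A_hat, A + Theta @ Delta @ Z))  # True: E = Theta Delta Z
M = -Z @ np.linalg.inv(A) @ Theta                 # gamma_0 = 1/mu_D(M) per Theorem 2
```

Any routine for real-μ lower and upper bounds can then be applied to M with the block structure D described in part 1 of the theorem.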

Using the matrix identity

det(I + XY) = det(I + YX)   (7)

which holds for any two matrices X, Y of compatible dimensions, and the fact that A is non-singular, we have that:

det(A + ΘΔZ) = det(A) det(I + A^{−1}ΘΔZ) = det(A) det(I + Z A^{−1}ΘΔ) = det(A) det(I − MΔ)

so that det(A + ΘΔZ) = 0 if and only if det(I − MΔ) = 0. Hence Problem 1 becomes:

min{ ‖Δ‖ : det(I − MΔ) = 0, Δ ∈ D }   (8)

which is equivalent to Problem 2, the minimum being 1/μ_D(M).

Example 1: Let a(s) = s^3 + α_2 s^2 + α_1 s + α_0 (m = 3) and b(s) = s^2 + β_1 s + β_0 (n = 2). The Sylvester resultant matrix of the perturbed polynomials is:

Â = [ 1  α_2+δ_2  α_1+δ_1  α_0+δ_0     0     ]
    [ 0     1     α_2+δ_2  α_1+δ_1  α_0+δ_0  ]
    [ 1  β_1+ε_1  β_0+ε_0     0        0     ]   (9)
    [ 0     1     β_1+ε_1  β_0+ε_0     0     ]
    [ 0     0        1     β_1+ε_1  β_0+ε_0  ]

which can be written as Â = A + E, where

A = [ 1  α_2  α_1  α_0   0  ]        E = [ 0  δ_2  δ_1  δ_0   0  ]
    [ 0   1   α_2  α_1  α_0 ]            [ 0   0   δ_2  δ_1  δ_0 ]
    [ 1  β_1  β_0   0    0  ]            [ 0  ε_1  ε_0   0    0  ]
    [ 0   1   β_1  β_0   0  ]            [ 0   0   ε_1  ε_0   0  ]
    [ 0   0    1   β_1  β_0 ]            [ 0   0    0   ε_1  ε_0 ]

The perturbation matrix E can be factored as E = ΘΔZ with

Δ = diag( δ_2 I_2, δ_1 I_2, δ_0 I_2, ε_1 I_3, ε_0 I_3 ) ∈ D,

where Θ and Z are the pattern matrices of Theorem 2 (with m = 3, n = 2), so E is of the required form. The minimum coefficient perturbation γ_0 is the inverse of the structured singular value of the matrix M = −Z A^{−1} Θ, which can be computed numerically using efficient existing techniques [6, 7].

Example 2: Here the effectiveness of the method is tested with a numerical example. Consider the polynomials a_0(s) = s^3 − 6s^2 + 11s − 6 and b_0(s) = s^2 − 6s + 8, with roots {3, 2, 1} and {4, 2} respectively. Since there is a common root (s = 2), the Sylvester resultant matrix is singular. The polynomials were perturbed to a(s) = s^3 − 6.05s^2 + 11.1s − 5.95 and b(s) = s^2 − 6.06s + 8, which no longer have an exact common root. The singular values of the corresponding Sylvester resultant matrix Â were computed; only the smallest was found to be close to zero, indicating a numerical rank of 4 and, hence, an approximate GCD of degree one (Theorem 1), as expected.
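The rank behaviour described in Example 2 can be checked directly. The sketch below is an added illustration: the perturbed coefficients are those of the example as reconstructed above, the helper is a compact re-implementation of the Sylvester construction, and the computation is a plain SVD-based rank check rather than the μ computation used in the paper.

```python
import numpy as np

def sylvester_from_monic(a, b):
    """Sylvester matrix from full monic coefficient vectors
    a = [1, a_{m-1}, ..., a_0], b = [1, b_{n-1}, ..., b_0]."""
    m, n = len(a) - 1, len(b) - 1
    S = np.zeros((n + m, n + m))
    for i in range(n):
        S[i, i:i + m + 1] = a
    for i in range(m):
        S[n + i, i:i + n + 1] = b
    return S

a0 = [1, -6, 11, -6]          # roots {3, 2, 1}
b0 = [1, -6, 8]               # roots {4, 2} -> common root s = 2
a = [1, -6.05, 11.1, -5.95]   # perturbed pair, as reconstructed above
b = [1, -6.06, 8]

for pa, pb, tag in [(a0, b0, "nominal"), (a, b, "perturbed")]:
    S = sylvester_from_monic(pa, pb)
    sv = np.linalg.svd(S, compute_uv=False)
    print(tag, "rank =", np.linalg.matrix_rank(S), "smallest singular value =", sv[-1])
# nominal pair: rank 4, smallest singular value ~ 0 (exactly rank deficient)
# perturbed pair: non-singular, but close to singular (near rank deficiency)
```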

Next, the results of Theorem 2 were applied to a(s) and b(s). Note that since the maximum perturbation in the coefficients of a(s) and b(s) relative to those of a_0(s) and b_0(s) (which have a common root) is 0.1, we expect that γ_0 ≤ 0.1. Two functions from Matlab's μ-optimisation toolbox (mu and unwrapp) were used to calculate the (real) structured singular value of the matrix M and the corresponding minimum-norm singularising perturbation. Lower and upper bounds of the structured singular value μ_D(M) were obtained, giving the corresponding minimal perturbation γ_0 = 1/μ_D(M), which, as expected, did not exceed 0.1. It was also checked that the singularising perturbation corresponding to μ_D(M) had the right structure (i.e. Δ ∈ D); in fact |δ_0| = |δ_1| = |δ_2| = |ε_0| = |ε_1| = γ_0. Polynomials â(s) and b̂(s) with a common factor, nearest to a(s) and b(s), were then obtained with the help of this Δ; their roots were calculated and verified to include a common value, corresponding to an optimal approximate GCD φ(s) of degree one.

3 Approximate GCD of polynomials

The technique developed in Section 2 may be used to define a conceptual algorithm to calculate the numerical GCD of any two polynomials a(s) and b(s). This sequentially extracts approximate common factors φ_i(s), by calculating the corresponding structured singular value μ_D and the minimum-norm singularising matrix perturbation. After the extraction of each factor, the quotients a_{i+1}(s) = a_i(s)/φ_i(s) and b_{i+1}(s) = b_i(s)/φ_i(s) are calculated, ignoring possible (small) remainders of the divisions. The procedure is initialised by setting a_1(s) = a(s) and b_1(s) = b(s), and iterates by constructing at each step of the algorithm the reduced-dimension Sylvester matrix corresponding to the polynomial pair (a_{i+1}(s), b_{i+1}(s)), followed by calculating the new μ_D, which in turn results in the extraction of the new factor φ_{i+1}(s). The whole process is repeated until a tolerance condition is met, at which stage the approximate GCD φ(s) can be constructed by accumulating the extracted common factors φ_i(s). Special care is needed in the real case to ensure that any complex roots in φ(s) appear in conjugate pairs. A sketch of this iterative procedure is given below.
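The Python sketch below (added here; it is not the authors' implementation) mimics the loop structure of the conceptual algorithm just described. The μ-based extraction of each common factor is not implemented; in its place a deliberately naive stand-in pairs the closest roots of the current quotients. The division by each extracted factor, the accumulation of the factors and the tolerance-based termination follow the description above; the tolerance value is an arbitrary choice.

```python
import numpy as np

def closest_root_pair(a, b):
    """Naive stand-in for the mu-based step: return the closest pair of
    roots (one from each polynomial) and their distance."""
    ra, rb = np.roots(a), np.roots(b)
    i, j = min(((i, j) for i in range(len(ra)) for j in range(len(rb))),
               key=lambda ij: abs(ra[ij[0]] - rb[ij[1]]))
    return ra[i], rb[j], abs(ra[i] - rb[j])

def approximate_gcd(a, b, tol=1e-2):
    """Sequentially extract approximate common linear factors phi_i(s),
    dividing them out and discarding the (small) remainders."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    gcd = np.array([1.0])                  # running product of extracted factors
    while len(a) > 1 and len(b) > 1:
        ra, rb, dist = closest_root_pair(a, b)
        if dist > tol:                     # tolerance condition: stop extracting
            break
        root = 0.5 * (ra + rb)             # shared root estimate (real case assumed)
        phi = np.array([1.0, -root.real])  # phi_i(s) = s - root
        a, _ = np.polydiv(a, phi)          # quotients; remainders are ignored
        b, _ = np.polydiv(b, phi)
        gcd = np.polymul(gcd, phi)
    return gcd, a, b

# usage: the nominal pair of Example 2, which has a common root at s = 2
a0 = [1, -6, 11, -6]
b0 = [1, -6, 8]
gcd, qa, qb = approximate_gcd(a0, b0)
print(gcd)    # ~ [1, -2], i.e. phi(s) = s - 2
```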
Compared to this procedure, a more elegant approach would be to extract the approximate common factor via a non-iterative algorithm. This requires the refinement of the definition of the structured singular value by introducing rank constraints. Specifically, we define:

Definition: (generalised real structured singular value) Let M ∈ R^{q×q} and define the structured set:

D = { diag(δ_1 I_{r_1}, δ_2 I_{r_2}, ..., δ_s I_{r_s}) : δ_i ∈ R, i = 1, 2, ..., s }

where the r_i are positive integers such that Σ_{i=1}^s r_i = q. The generalised structured singular value of M relative to the structure D, for a non-negative integer k, is defined as:

μ̂_{D,k}(M) = [ min{ ‖Δ‖ : Δ ∈ D, null(I_q − MΔ) > k } ]^{−1}

where null(·) denotes the nullity (the dimension of the kernel), unless there does not exist a Δ ∈ D such that null(I_q − MΔ) > k, in which case μ̂_{D,k}(M) = 0.

It follows immediately from the definition that μ̂_{D,0}(M) = μ_D(M) and that μ̂_{D,k}(M) ≥ μ̂_{D,k+1}(M) for each integer k ≥ 0. Further, if for some integer k, μ̂_{D,k}(M) > μ̂_{D,k+1}(M) = 0, then for any minimiser Δ of μ̂_{D,k}(M), null(I − MΔ) = k + 1.

Theorem 3: Let a_0(s) and b_0(s) be two monic coprime polynomials of degrees deg a_0(s) = m and deg b_0(s) = n, with m ≥ n, defined in equations (1) and (2). Let a(s) and b(s) be the two perturbed polynomials, and set

γ = max{ |δ_0|, |δ_1|, ..., |δ_{m−1}|, |ε_0|, |ε_1|, ..., |ε_{n−1}| }   (10)

where {δ_i} and {ε_i} denote the perturbations in the coefficients of a_0(s) and b_0(s) respectively. Further, let γ_0(k) denote the minimum value of γ such that a(s) and b(s) have a common factor φ(s) of degree deg φ(s) > k (k = 0, 1, ..., n − 1). Then,

γ_0(k) = 1/μ̂_{D,k}(M)   (11)

where μ̂_{D,k}(M) denotes the generalised structured singular value of M = −Z A^{−1} Θ with respect to the structure D, and A, Θ and Z are the matrices defined in Theorem 2. Further, φ(s) may be constructed from any Δ ∈ D which minimises ‖Δ‖ subject to the constraint null(I − MΔ) > k.

Proof: This is almost identical to the proof of Theorem 2, on noting that the transformations leading to equation (8) do not change the nullity of the corresponding matrices.

Theorem 3 suggests that the GCD of two polynomials a(s) and b(s) can be obtained by calculating successively μ̂_{D,k}(M) for k = 0, 1, ..., n − 1. The procedure terminates when either k = n − 1 is reached, or when the generalised structured singular value falls below a pre-specified tolerance level. The effective numerical calculation of μ̂_{D,k} (or at least of tight upper and lower bounds) will be the subject of future research.
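While an effective algorithm for μ̂_{D,k} is left as future work above, a candidate perturbation can at least be checked against the definition. The sketch below is an added illustration with hypothetical inputs: it only verifies a certificate (it does not compute μ̂_{D,k}), reporting ‖Δ‖ and the numerical nullity of I − MΔ for a given block structure; whenever the nullity exceeds k, 1/‖Δ‖ is a valid lower bound for μ̂_{D,k}(M).

```python
import numpy as np

def check_certificate(M, block_sizes, deltas, k, tol=1e-9):
    """Given Delta = diag(deltas[0]*I_{r_1}, ..., deltas[-1]*I_{r_s}),
    return (norm of Delta, numerical nullity of I - M*Delta, lower bound or None)."""
    assert sum(block_sizes) == M.shape[0] == M.shape[1]
    Delta = np.diag(np.repeat(np.asarray(deltas, float), block_sizes))
    norm = np.max(np.abs(deltas))                 # spectral norm of a real diagonal Delta
    sv = np.linalg.svd(np.eye(M.shape[0]) - M @ Delta, compute_uv=False)
    nullity = int(np.sum(sv < tol * max(sv[0], 1.0)))
    bound = 1.0 / norm if nullity > k else None   # then mu-hat_{D,k}(M) >= 1/||Delta||
    return norm, nullity, bound

# hypothetical usage: M = 2I with structure r = (2, 1) and Delta = 0.5*I
M = 2.0 * np.eye(3)
print(check_certificate(M, [2, 1], [0.5, 0.5], k=1))
# norm 0.5, nullity 3 > k = 1, so 1/0.5 = 2.0 is a lower bound for mu-hat_{D,1}(M)
```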

A further generalisation of our method involves the calculation of the approximate GCD of an arbitrary number of polynomials (rather than just two). This problem can also be translated to our framework using some recent results on generalised resultants [2] and will also be a topic of future work.

4 Conclusions

In this paper we have proposed a new method for calculating numerically the approximate GCD of a set of polynomials. It was shown that, for two coprime polynomials, the problem of calculating the smallest l∞-norm perturbation in the polynomials' coefficient vector so that the perturbed polynomials have a common root is equivalent to the calculation of the structured singular value of an appropriate matrix. This is a fundamental problem in the area of robust control and various techniques have been successfully developed for its solution. The effectiveness of one such method for calculating the GCD of low-degree polynomials has been demonstrated via a numerical example. We have further shown that calculating the minimum l∞-norm perturbation in the coefficient vector of two (or more) coprime polynomials so that the perturbed polynomials have a GCD of degree at least k reduces to a structured singular value-type calculation with additional rank constraints. This is a non-standard problem and developing effective algorithms for its solution will be the topic of future research work.

References

[1] S. Barnett (1983), Polynomials and Linear Control Systems, Marcel Dekker, Inc., New York.
[2] S. Fatouros and N. Karcanias (2002), The GCD of many polynomials and the factorisation of generalised resultants, Research Report, Control Engineering Research Centre, City University.
[3] N. Karcanias, C. Giannakopoulos and M. Hubbard (1984), Almost zeros of a set of polynomials, Int. J. Control, 40.
[4] N. Karcanias and M. Mitrouli (1994), A matrix pencil-based numerical method for the computation of the GCD of polynomials, IEEE Trans. Autom. Control, AC-39.
[5] M. Mitrouli and N. Karcanias (1993), Computation of the GCD of polynomials using Gaussian transformations and shifting, Int. J. Control, 58.
[6] P.M. Young and J. Doyle (1996), Properties of the mixed µ problem and its bounds, IEEE Trans. Autom. Control, 41.
[7] K. Zhou and J. Doyle (1998), Essentials of Robust Control, Prentice Hall, Inc., Upper Saddle River, NJ, USA.
