Fast computation of normal forms of polynomial matrices
1/25 Fast computation of normal forms of polynomial matrices

Vincent Neiger
Inria AriC, École Normale Supérieure de Lyon, France
University of Waterloo, Ontario, Canada

Partially supported by the mobility grants Explo'ra Doc (Région Rhône-Alpes) / Globalink Research Award - Inria (Mitacs & Inria) / Programme Avenir Lyon Saint-Étienne

Caramba seminar, November 10, 2016
2/25 Polynomial matrix computations

Matrices over K[X]: an m x m matrix of degree d, entries of type "average degree D/m". Example over Z/7Z:

        [ 3X + 4                X^3 + 4X + 1     4X     ]
    A = [ 5                     5X^2 + 3X + 1    5X + 3 ]
        [ 3X^3 + X^2 + 5X + 3   6X + 5           2X + 1 ]

Fundamental operations, cost Õ(m^ω d):
- multiplication: Õ(m^ω D/m) in specific cases
- kernel basis: Õ(m^ω D/m)
- approximant basis: Õ(m^ω D/m)

Transformation to normal forms:
- triangularization (Hermite): Õ(m^ω D/m)?
- row reduction (Popov): Õ(m^ω D/m)?
- diagonalization (Smith): Õ(m^ω D/m)
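As a concrete companion to the costs above, here is a minimal Python sketch of matrices over K[X] for K = Z/7Z. All names (trim, padd, pmul, matmul) are illustrative, and the multiplication is the schoolbook one, not the fast Õ(m^ω d) algorithms the talk refers to.

```python
# Minimal model of matrices over K[X], K = Z/7Z: a polynomial is its list of
# coefficients, lowest degree first; the empty list is the zero polynomial.
from functools import reduce

MOD = 7

def trim(f):
    """Drop trailing zeros so that f[-1] != 0 (empty list = zero)."""
    while f and f[-1] % MOD == 0:
        f = f[:-1]
    return f

def padd(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0)
                  + (g[i] if i < len(g) else 0)) % MOD for i in range(n)])

def pmul(f, g):
    if not f or not g:
        return []
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % MOD
    return trim(h)

def matmul(A, B):
    """Schoolbook product of two polynomial matrices."""
    return [[reduce(padd, (pmul(A[i][k], B[k][j]) for k in range(len(B))), [])
             for j in range(len(B[0]))]
            for i in range(len(A))]

# The 3 x 3 example from the slides: rows [3X+4, X^3+4X+1, 4X], etc.
A = [[[4, 3], [1, 4, 0, 1], [0, 4]],
     [[5], [1, 3, 5], [3, 5]],
     [[3, 5, 1, 3], [5, 6], [1, 2]]]
I3 = [[[1] if i == j else [] for j in range(3)] for i in range(3)]
```

For instance pmul([1, 1], [1, 1]) gives [1, 2, 1], i.e. (X+1)^2 = X^2 + 2X + 1, and matmul(A, I3) returns A.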
3/25 Hermite and Popov forms

Working over K = Z/7Z:

        [ 3X + 4                X^3 + 4X + 1     4X     ]
    A = [ 5                     5X^2 + 3X + 1    5X + 3 ]
        [ 3X^3 + X^2 + 5X + 3   6X + 5           2X + 1 ]

Using elementary row operations, transform A into:

Hermite form (triangular, monic diagonal)

        [ X^6 + 6X^4 + X^3 + X                 0     0 ]
    H = [ 5X^5 + 5X^4 + 6X^3 + 2X^2 + 6X + 3   X     0 ]
        [ 3X^4 + 5X^3 + 4X^2 + 6X              ...   1 ]

Popov form

        [ X^3 + 5X^2 + 4X + 1   2X + 4         3X + 5 ]
    P = [ 1                     X^2 + 2X + 3   X + 2  ]
        [ 3X + 2                4X             X^2    ]
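One property used throughout the talk can be checked directly on this example in Python: for a (row-wise) Popov form, deg det(P) equals the sum of the diagonal pivot degrees, here 3 + 2 + 2 = 7, with monic leading coefficient. The entries of P are as read off the slide; the helper names and the 3 x 3 cofactor expansion are mine.

```python
# Check deg(det(P)) = 3 + 2 + 2 = 7 for the Popov form of the talk's example
# over Z/7Z. Polynomials are coefficient lists, lowest degree first.
MOD = 7

def trim(f):
    while f and f[-1] % MOD == 0:
        f = f[:-1]
    return f

def padd(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0)
                  + (g[i] if i < len(g) else 0)) % MOD for i in range(n)])

def pneg(f):
    return [(-c) % MOD for c in f]

def pmul(f, g):
    if not f or not g:
        return []
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % MOD
    return trim(h)

def det3(M):
    """Determinant of a 3 x 3 polynomial matrix, cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = M
    m1 = padd(pmul(e, i), pneg(pmul(f, h)))
    m2 = padd(pmul(d, i), pneg(pmul(f, g)))
    m3 = padd(pmul(d, h), pneg(pmul(e, g)))
    return padd(padd(pmul(a, m1), pneg(pmul(b, m2))), pmul(c, m3))

Pop = [[[1, 4, 5, 1], [4, 2], [5, 3]],     # X^3+5X^2+4X+1, 2X+4, 3X+5
       [[1], [3, 2, 1], [2, 1]],           # 1, X^2+2X+3, X+2
       [[2, 3], [0, 4], [0, 0, 1]]]        # 3X+2, 4X, X^2
detP = det3(Pop)
# deg(detP) = 7 and detP is monic
```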
4/25 Example: constrained bivariate interpolation

As in Guruswami-Sudan list-decoding of Reed-Solomon codes: M of degree D; L of degree < D.

        [ M         0   ...   0 ]
    A = [ L         1         0 ]
        [ ...             .     ]
        [ L^(m-1)   0   ...   1 ]

Problem: find p = [p_1 ... p_m] in RowSpace(A) such that

    (*)  deg(p_j) < N_j for all j

Approach: compute the Popov form P of A with degree weights on the columns, and return a row of P which satisfies (*).
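To make the row-space condition concrete, a toy Python check for m = 2 over Z/7Z: with basis rows [M, 0] and [L, 1], every p = [p_1, p_2] in the row space satisfies p_1 = p_2·L (mod M). The data M, L and the combination (a, b) below are illustrative choices, not the talk's, and sign conventions for L vary.

```python
# Toy membership check for the interpolation module, m = 2, over Z/7Z.
MOD = 7

def trim(f):
    while f and f[-1] % MOD == 0:
        f = f[:-1]
    return f

def padd(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0)
                  + (g[i] if i < len(g) else 0)) % MOD for i in range(n)])

def pneg(f):
    return [(-c) % MOD for c in f]

def pmul(f, g):
    if not f or not g:
        return []
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % MOD
    return trim(h)

def pmod(f, M):
    """Remainder of f modulo a monic M."""
    f = list(f)
    while len(f) >= len(M):
        c, s = f[-1], len(f) - len(M)
        for i in range(len(M)):
            f[s + i] = (f[s + i] - c * M[i]) % MOD
        f = trim(f)
    return f

M = [0, 6, 0, 1]   # M = X^3 - X (monic), vanishing at the points 0, 1, -1
L = [3, 2, 5]      # some interpolant of degree < 3

a, b = [1, 1], [2, 0, 3]          # arbitrary combination p = a*[M,0] + b*[L,1]
p1 = padd(pmul(a, M), pmul(b, L))
p2 = b
# p_1 - p_2*L = a*M, so the residue modulo M vanishes:
residue = pmod(padd(p1, pneg(pmul(p2, L))), M)
```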
5/25 Shifted Popov form

A shift s on the columns connects the Popov and Hermite forms.

s = (0, 0, 0, 0): Popov form, with entry degrees

    [ [4] [3] [3] [3] ]
    [ [3] [4] [3] [3] ]
    [ [3] [3] [4] [3] ]
    [ [3] [3] [3] [4] ]

s = (0, 2, 4, 6): s-Popov form, with entry degrees

    [ [7] [4] [2] [0] ]
    [ [6] [5] [2] [0] ]
    [ [6] [4] [3] [0] ]
    [ [6] [4] [2] [1] ]

s = (0, D, 2D, 3D): Hermite form (triangular entry-degree profile).

In each case: a normal form, with controlled average column degree and many useful properties.
6/25 Shifted Popov form

For A in K[X]^(m x m) nonsingular and s in Z^m, the s-Popov form of A is the matrix P = UA (with U unimodular) which is s-reduced and normalized.

Sum of the diagonal degrees: d_1 + ... + d_m = deg(det(P)) = deg(det(A)) <= D.
7/25 Problem and previous work

Input: A in K[X]^(m x m) nonsingular; a shift s in Z^m.
Output: the s-Popov form of A.

Previous fast algorithms focus on the Hermite and Popov forms:
- Popov form: Õ(m^ω d), deterministic [Giorgi-Jeannerod-Villard '03] [Sarkar-Storjohann '11] [Gupta-Sarkar-Storjohann-Valeriote '12]
- Hermite form: Õ(m^ω d), Las Vegas randomized [Gupta-Storjohann '11] [Gupta '11]
8/25 Two routes to the normal forms

System of linear modular equations:

    p_1 f_{11} + ... + p_m f_{1m} = 0 mod M_1
        ...
    p_1 f_{n1} + ... + p_m f_{nm} = 0 mod M_n

- Reduction of a basis matrix of degree <= d  ->  Popov form, shifted Popov form
- Reconstruction from the equations, obtained by high-order lifting [Storjohann, 2003]  ->  Hermite form (P triangular)
9/25 Outline

1. reduction to the average degree: d -> O(D/m)
2. Hermite form in Õ(m^ω D/m), deterministic
3. s-Popov form in Õ(m^ω D/m), probabilistic
10/25 1. Reduce to average degree

Example of partial linearization on the columns [Gupta et al., 2012]: for column degrees (18, 7, 37, 2), of average 16, the columns of degree 18 and 37 are each split into pieces of degree at most 16, writing a column a as a = a_0 + X^17 a_1 (+ X^34 a_2), and elementary rows with entries X^17 and 1 are inserted.

This preserves the determinant, the Smith form, the inverse, ...
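The determinant-preservation claim can be illustrated on a 2 x 2 toy in Python: write the second column as col = lo + X^t·hi, append hi as a new column, and insert the elementary row [0, -X^t, 1]; then det(L(A)) = det(A). The matrix, the threshold t, and all names below are illustrative (the signs of the inserted entries were lost in the slide transcription).

```python
# Partial linearization of one column on a 2 x 2 toy over Z/7Z.
MOD = 7

def trim(f):
    while f and f[-1] % MOD == 0:
        f = f[:-1]
    return f

def padd(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0)
                  + (g[i] if i < len(g) else 0)) % MOD for i in range(n)])

def pneg(f):
    return [(-c) % MOD for c in f]

def pmul(f, g):
    if not f or not g:
        return []
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % MOD
    return trim(h)

def split(f, t):
    """f = lo + X^t * hi with deg(lo) < t."""
    return trim(f[:t]), trim(f[t:])

def det2(M):
    (a, b), (c, d) = M
    return padd(pmul(a, d), pneg(pmul(c, b)))

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    m1 = padd(pmul(e, i), pneg(pmul(f, h)))
    m2 = padd(pmul(d, i), pneg(pmul(f, g)))
    m3 = padd(pmul(d, h), pneg(pmul(e, g)))
    return padd(padd(pmul(a, m1), pneg(pmul(b, m2))), pmul(c, m3))

a, c = [1, 1], [2]            # X + 1 and 2
b = [4, 0, 3, 0, 0, 1]        # X^5 + 3X^2 + 4 (degree above the threshold)
d = [0, 1, 0, 0, 2]           # 2X^4 + X
t = 3
b0, b1 = split(b, t)
d0, d1 = split(d, t)

LA = [[a, b0, b1],
      [c, d0, d1],
      [[], [0] * t + [MOD - 1], [1]]]   # elementary row [0, -X^t, 1]

lhs = det3(LA)                 # det of the linearized 3 x 3 matrix
rhs = det2([[a, b], [c, d]])   # det of the original 2 x 2 matrix
```

Here lhs == rhs, so the extra column and elementary row leave the determinant unchanged.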
11/25 1. Reduce to average degree

Problem: given A and s, find P.

Using no field operation, build L(A) in K[X]^(m' x m') and L(s) in Z^(m'), with m <= m' <= 3m and deg(L(A)) <= D/m, such that P is a submatrix of the L(s)-Popov form of L(A). This uses partial linearization techniques from [Gupta et al., 2012].

The bound D can be taken as the generic determinant degree:

    D = max over π in Perm({1, ..., m}) of  Σ_{1 <= i <= m} deg(A_{i,π(i)})

and D/m is at most the average row and column degrees.
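The generic determinant degree is a maximum-weight permutation over the matrix of entry degrees, so for small examples it can be brute-forced in Python. Below, the degree matrix is that of the talk's 3 x 3 example (the degree of a zero entry would be -infinity; none occurs here).

```python
from itertools import permutations

# Entry degrees of the talk's 3 x 3 example over Z/7Z (rows of A):
#   [3X+4, X^3+4X+1, 4X], [5, 5X^2+3X+1, 5X+3], [3X^3+X^2+5X+3, 6X+5, 2X+1]
deg = [[1, 3, 1],
       [0, 2, 1],
       [3, 1, 1]]

D = max(sum(deg[i][pi[i]] for i in range(3)) for pi in permutations(range(3)))
# D = 7, matching the diagonal-degree sum 3 + 2 + 2 of the Popov form
```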
12/25 Two routes, after the reduction to average degree (d -> D/m):

System of linear modular equations:

    p_1 f_{11} + ... + p_m f_{1m} = 0 mod M_1
        ...
    p_1 f_{n1} + ... + p_m f_{nm} = 0 mod M_n

- Reduction of a basis matrix, now with d <= D/m  ->  Popov form, shifted Popov form
- Reconstruction from the equations (high-order lifting [Storjohann, 2003]), d <= D/m  ->  Hermite form (P triangular)
13/25 2. Fast deterministic Hermite form

Previous fastest: Õ(m^ω d), Las Vegas [Gupta-Storjohann, 2011]
Here: Õ(m^ω D/m), deterministic (joint work with G. Labahn and W. Zhou)

Approach:
a. find the diagonal degrees [Zhou, 2012]
b. reduce to a Popov form computation
14/25 2.a. Find the diagonal degrees

Partial computation of a triangularization:

    [ A_11  A_12 ]        [ B_1      ]        [ B_11            ]
    [ A_21  A_22 ]   ->   [      B_2 ]   ->   [      B_12       ]
                                              [           B_21  ]
                                              [            B_22 ]

(recursing on the diagonal blocks) yields the diagonal entries in Õ(m^ω d), where

    N   = minimal kernel basis of [A_11; A_21]       [Zhou-Labahn, 2013]
    B_1 = N . [A_12; A_22]
    B_2 = small-degree row basis of [A_12; A_22]     [Zhou-Labahn-Storjohann, 2012]
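For intuition on kernel bases: for a single column [f; g], a minimal left kernel basis is the row N = [g/h, -f/h] with h = gcd(f, g), so that N·[f; g] = 0. A self-contained Python sketch over Z/7Z; the Euclidean routines and the example are mine, and the talk's kernel bases are of course computed by much faster dedicated algorithms.

```python
# Toy minimal kernel basis of a 2 x 1 polynomial column over Z/7Z.
MOD = 7

def trim(f):
    while f and f[-1] % MOD == 0:
        f = f[:-1]
    return f

def padd(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0)
                  + (g[i] if i < len(g) else 0)) % MOD for i in range(n)])

def pneg(f):
    return [(-c) % MOD for c in f]

def pmul(f, g):
    if not f or not g:
        return []
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % MOD
    return trim(h)

def pdivmod(f, g):
    """Quotient and remainder of f by a nonzero g, coefficients mod MOD."""
    f = list(f)
    quo = [0] * max(len(f) - len(g) + 1, 0)
    inv = pow(g[-1], MOD - 2, MOD)   # inverse of the leading coefficient
    while len(f) >= len(g):
        c = f[-1] * inv % MOD
        s = len(f) - len(g)
        quo[s] = c
        for i in range(len(g)):
            f[s + i] = (f[s + i] - c * g[i]) % MOD
        f = trim(f)
    return trim(quo), f

def pgcd(f, g):
    while g:
        f, g = g, pdivmod(f, g)[1]
    inv = pow(f[-1], MOD - 2, MOD)
    return [c * inv % MOD for c in f]   # normalized to be monic

f = [2, 3, 1]   # (X+1)(X+2)
g = [3, 4, 1]   # (X+1)(X+3)
h = pgcd(f, g)  # X + 1
N = [pdivmod(g, h)[0], pneg(pdivmod(f, h)[0])]   # [g/h, -f/h]
zero = padd(pmul(N[0], f), pmul(N[1], g))        # N . [f; g] = 0
```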
15/25 2.b. Reduce to a Popov form computation

H = the δ-Popov form of A, where δ = the diagonal degrees of H:

    A  --(δ-reduction)-->  R  --(normalization, by a constant U)-->  H = U R

- δ-reduction, via 0-reduction: worst case Õ(m^(ω+1) d)
- normalization: in Õ(m^ω d)
16/25 2.b. Reduce to a Popov form computation

Partial linearization: (A, δ) is transformed into (L(A), L(δ)), where
- L(A) has degree <= d
- L(A) has dimension <= 2m
- L(δ) has entries <= d
so that the L(δ)-reduction of L(A) costs Õ(m^ω d):

    A  --(partial linearization)-->  L(A)  --(L(δ)-reduction)-->  R̂  --(normalization, constant Û)-->  L(H) = Û R̂

and H is directly obtained from L(H).
17/25 Two routes, with the Hermite form now obtained via the diagonal degrees:

System of linear modular equations:

    p_1 f_{11} + ... + p_m f_{1m} = 0 mod M_1
        ...
    p_1 f_{n1} + ... + p_m f_{nm} = 0 mod M_n

- Reduction of a basis matrix (d <= D/m)  ->  Popov form, shifted Popov form
- Reconstruction from the equations (high-order lifting [Storjohann, 2003])  ->  Hermite form (P triangular), via the diagonal degrees
18/25 3. Fast s-Popov form for an arbitrary s

Previous fastest: Õ(m^ω (d + amp(s))), which is Õ(m^(ω+2) d) in the worst case, relying on non-shifted Popov form computation [Gupta et al., 2012]
Here: Õ(m^ω D/m), Las Vegas randomized

Approach:
a. build a system of modular equations [Gupta-Storjohann, 2011]
b. find the s-Popov basis of its solutions [Neiger, 2016]

Note: this also yields the fastest known algorithm for the Popov form (s = 0).
19/25 Two routes, with the equations now provided by the Smith form:

System of linear modular equations, obtained from the Smith form of A and a reduced right transformation, via high-order lifting [Storjohann, 2003]:

    p_1 f_{11} + ... + p_m f_{1m} = 0 mod M_1
        ...
    p_1 f_{n1} + ... + p_m f_{nm} = 0 mod M_n

- Reduction of a basis matrix (d <= D/m)  ->  Popov form, shifted Popov form
- Hermite form (P triangular): via the diagonal degrees
20/25 3.a. Build the system of linear modular equations

Compute:
- the Smith form UAV = diag(1, ..., 1, M_1, ..., M_n)
- a reduced right transformation [0 F] = V mod (1, ..., 1, M_1, ..., M_n)
in probabilistic Õ(m^ω d)   [Storjohann, 2003] [Gupta-Storjohann, 2011] [Gupta, 2011]

Then RowSpace(A) = the set of all solutions [p_1, ..., p_m] to

    p_1 f_{11} + ... + p_m f_{1m} = 0 mod M_1
        ...
    p_1 f_{n1} + ... + p_m f_{nm} = 0 mod M_n

and the s-Popov form of A = the s-Popov basis of this solution set.
21/25 3.b. Solve the system of linear modular equations

Input: nonzero moduli M_1, ..., M_n; a system matrix F in K[X]^(m x n); a shift s in Z^m.
Output: the s-Popov basis of {p : pF = 0 mod (M_1, ..., M_n)}.

Result: Õ(m^ω D/m) for arbitrary moduli, assuming n in O(m), where D = deg(M_1) + ... + deg(M_n).

Previous work achieving Õ(m^ω D/m):
- approximant bases: moduli = powers of X
- interpolant bases: moduli given by their roots and multiplicities
- a single degree-constrained solution (via structured system solving)
22/25 3.b. Solve the system of linear modular equations

Divide-and-conquer on the number of equations, using ideas from
- [Jeannerod et al., 2016]: managing arbitrary shifts
- [Gupta-Storjohann, 2011]: finding the solution when the diagonal degrees are known

There remains the base case of a single equation:

    p_1 f_1 + ... + p_m f_m = 0 mod M

With P the sought s-Popov solution basis, PF = qM for some column q = [q_1; ...; q_m], that is,

    [ P  q ] . [ F ; -M ] = 0
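The base-case identity can be sanity-checked on a tiny instance in Python: for one equation with F = [f_1; 1], the rows [1, -f_1] and [0, M] generate solutions (a valid basis, though not in Popov form), and stacking the cofactor column q turns PF = qM into an exact kernel relation. All data below (f_1, M) is illustrative, not the talk's.

```python
# Base case on a toy instance over Z/7Z: one equation p_1 f_1 + p_2 f_2 = 0
# mod M, with f_2 = 1, and the kernel reformulation [P q].[F; -M] = 0.
MOD = 7

def trim(f):
    while f and f[-1] % MOD == 0:
        f = f[:-1]
    return f

def padd(f, g):
    n = max(len(f), len(g))
    return trim([((f[i] if i < len(f) else 0)
                  + (g[i] if i < len(g) else 0)) % MOD for i in range(n)])

def pneg(f):
    return [(-c) % MOD for c in f]

def pmul(f, g):
    if not f or not g:
        return []
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % MOD
    return trim(h)

f1 = [3, 2, 1]        # f_1 = X^2 + 2X + 3
M = [0, 6, 0, 1]      # M = X^3 - X, monic

P = [[[1], pneg(f1)],     # solution basis rows [1, -f_1] and [0, M]
     [[], M]]
qcol = [[], [1]]          # cofactors: P . [f_1; 1] = qcol * M
stacked = [f1, [1], pneg(M)]   # the column [F; -M]

rows = []
for i in range(2):
    acc = []
    for k in range(3):
        acc = padd(acc, pmul((P[i] + [qcol[i]])[k], stacked[k]))
    rows.append(acc)
# both rows of [P q].[F; -M] vanish identically
```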
23/25 3.b. Solve the system of linear modular equations

Reduction to an approximant basis:

    [ P  q ] . [ F ; -M ] = 0 mod X^(amp(s)+2D),   where amp(s) = max(s) - min(s)

New divide-and-conquer approach:
- Recursion: s = (s^(1), s^(2)) and F = [F^(1); F^(2)], with amp(s^(i)) <= amp(s)/2
- Base case: amp(s) in O(D), cost Õ(m^ω D/m)   [Jeannerod et al., 2016]
24/25 3.b. Solve the system of linear modular equations

One recursion step:
1. recursive call to find the splitting index and P^(1) = the s^(1)-Popov solution basis for (F^(1), M)
2. residual computation thanks to the known P^(1): a u-Popov approximant basis P^(0) yields the residual instance (G, N)
3. recursive call: P^(2) = the v-Popov solution basis for (G, N), where amp(v) <= amp(s)/2
4. recombine: compute P from P^(1), P^(2), and P^(0), using the known diagonal degrees
25/25 Conclusion

Linear systems of modular equations:
- Õ(m^ω D/m), deterministic (assuming n in O(m))
- returns the s-Popov solution basis, for arbitrary moduli

Shifted row reduction of polynomial matrices:
- Õ(m^ω D/m), Las Vegas randomized
- computes the s-Popov form for an arbitrary shift
- Hermite form: deterministic

Questions:
- removing the assumption n in O(m)?
- a deterministic Õ(m^ω D/m) Popov form?
- a fast deterministic shifted Popov form?
More informationClassical iterative methods for linear systems
Classical iterative methods for linear systems Ed Bueler MATH 615 Numerical Analysis of Differential Equations 27 February 1 March, 2017 Ed Bueler (MATH 615 NADEs) Classical iterative methods for linear
More informationMath 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm
Math 405: Numerical Methods for Differential Equations 2016 W1 Topics 10: Matrix Eigenvalues and the Symmetric QR Algorithm References: Trefethen & Bau textbook Eigenvalue problem: given a matrix A, find
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra)
AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 7: More on Householder Reflectors; Least Squares Problems Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 15 Outline
More informationRecursive constructions of irreducible polynomials over finite fields
Recursive constructions of irreducible polynomials over finite fields Carleton FF Day 2017 - Ottawa Lucas Reis (UFMG - Carleton U) September 2017 F q : finite field with q elements, q a power of p. F q
More informationLecture 12: Solving Systems of Linear Equations by Gaussian Elimination
Lecture 12: Solving Systems of Linear Equations by Gaussian Elimination Winfried Just, Ohio University September 22, 2017 Review: The coefficient matrix Consider a system of m linear equations in n variables.
More informationPolynomial evaluation and interpolation on special sets of points
Polynomial evaluation and interpolation on special sets of points Alin Bostan and Éric Schost Laboratoire STIX, École polytechnique, 91128 Palaiseau, France Abstract We give complexity estimates for the
More informationVector Rational Number Reconstruction
Vector Rational Number Reconstruction Curtis Bright cbright@uwaterloo.ca David R. Cheriton School of Computer Science University of Waterloo, Ontario, Canada N2L 3G1 Arne Storjohann astorjoh@uwaterloo.ca
More informationSMITH MCMILLAN FORMS
Appendix B SMITH MCMILLAN FORMS B. Introduction Smith McMillan forms correspond to the underlying structures of natural MIMO transfer-function matrices. The key ideas are summarized below. B.2 Polynomial
More informationDiagonals and Walks. Alin Bostan 1 Louis Dumont 1 Bruno Salvy 2. November 4, Introduction Diagonals Walks. 1 INRIA, SpecFun.
and Alin Bostan 1 Louis Dumont 1 Bruno Salvy 2 1 INRIA, SpecFun 2 INRIA, AriC November 4, 2014 1/16 Louis Dumont and 2/16 Louis Dumont and Motivations In combinatorics: Lattice walks w i,j : number of
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationACI-matrices all of whose completions have the same rank
ACI-matrices all of whose completions have the same rank Zejun Huang, Xingzhi Zhan Department of Mathematics East China Normal University Shanghai 200241, China Abstract We characterize the ACI-matrices
More informationAMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems
AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given
More informationRemark 1 By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 5 Eigenvectors and Eigenvalues In this chapter, vector means column vector Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called
More informationA Linearithmic Time Algorithm for a Shortest Vector Problem in Compute-and-Forward Design
A Linearithmic Time Algorithm for a Shortest Vector Problem in Compute-and-Forward Design Jing Wen and Xiao-Wen Chang ENS de Lyon, LIP, CNRS, ENS de Lyon, Inria, UCBL), Lyon 69007, France, Email: jwen@math.mcll.ca
More information