Lattice reduction of polynomial matrices

Lattice reduction of polynomial matrices
Arne Storjohann, David R. Cheriton School of Computer Science, University of Waterloo
Presented at the SIAM Conference on Applied Algebraic Geometry, Colorado State University, Fort Collins, Colorado, USA, August 4, 2013.

Reduced basis

Input: matrix A ∈ K[x]^(n×m), K a field, e.g., K = Z/(7).
Output: reduced matrix R ∈ K[x]^(n×m) such that
- the set of all K[x]-linear combinations of the rows of R is the same as that for A;
- the degrees of the rows of R are minimal among all bases.

Example: n = 3, m = 1.

    A = [ 3x^4 + 2x^3 + 2x^2 + 3 ]        R = [ 1 ]
        [ x^3 + 3x^2 + 3x + 2    ]            [ 0 ]
        [ x^2 + 4                ]            [ 0 ]

Reduced basis

Input: matrix A ∈ K[x]^(n×m), K a field, e.g., K = Z/(7).
Output: reduced matrix R ∈ K[x]^(n×m) such that
- the set of all K[x]-linear combinations of the rows of R is the same as that for A;
- the degrees of the rows of R are minimal among all bases.

Example: n = 3, m = 3.

    A = [ 4x^2 + 3x + 5    4x^2 + 3x + 4    6x^2 + 1 ]
        [ 3x + 6           3x + 5           x + 3    ]
        [ 6x^2 + 4x + 2    6x^2             2x^2 + x ]

    R = [ 3    4    1  ]
        [ 6    …    2x ]
        [ 0    0    0  ]

Obtained using unimodular row operations: R = UA.

Reduced basis: canonical Popov form

The Popov form is a canonical normalization of a reduced basis. In the degree profiles below, an entry written d stands for a polynomial of degree d. The row pivot is the rightmost entry of maximal degree in its row.

    R = [ 1 1 1 1 ]     W = [ 1 1 1 1 ]     P = [ 1 1 1 1 ]
        [ 2 2 2 2 ]         [ 2 1 1 1 ]         [ 2 1 1 0 ]
        [ 2 2 2 2 ]         [ 1 2 2 1 ]         [ 1 2 2 0 ]
        [ 4 4 4 4 ]         [ 3 4 3 3 ]         [ 1 4 1 0 ]

R: minimal degrees, and row degrees nonincreasing.
W: pivots have distinct indices (weak Popov form).
P: degrees in each pivot column strictly less than the pivot degree.

V. Popov. Some Properties of Control Systems with Irreducible Matrix Transfer Functions. Lecture Notes in Mathematics, Springer, 1969.
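The pivot conditions above are easy to check mechanically. A small sketch, not from the talk: polynomials are coefficient lists (ascending powers) over Z/(7), and the toy matrix is my own example.

```python
P = 7  # the field Z/(7) used in the talk's examples

def degree(poly):
    """Degree of a coefficient list (ascending powers); -1 for zero."""
    d = -1
    for i, c in enumerate(poly):
        if c % P != 0:
            d = i
    return d

def pivot_index(row):
    """Rightmost column whose entry has maximal degree (the row pivot)."""
    degs = [degree(e) for e in row]
    m = max(degs)
    if m < 0:
        return None  # a zero row has no pivot
    return max(j for j, d in enumerate(degs) if d == m)

def is_weak_popov(A):
    """Weak Popov form: nonzero rows have pairwise distinct pivot indices."""
    piv = [pivot_index(r) for r in A]
    piv = [p for p in piv if p is not None]
    return len(piv) == len(set(piv))

# rows [x + 1, 3] and [2, x]: pivots sit in columns 0 and 1
A = [[[1, 1], [3]],
     [[2], [0, 1]]]
print(is_weak_popov(A))   # True
```

The full Popov test would additionally compare each pivot's degree against the other degrees in its column.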

Iterative algorithm for basis reduction
Vector case: Euclidean algorithm

Input: A ∈ K[x]^(2×1). Output: reduced basis for A.

    R_0 := A = [ 3x^4 + 2x^3 + 2x^2 + 3 ]
               [ x^3 + 3x^2 + 3x + 2    ]

    Q_1 R_0 = R_1 = [ x^3 + 3x^2 + 3x + 2 ]    (Euclidean quotient 3x)
                    [ x + 3               ]

    Q_2 R_1 = R_2 = [ 3x + 2 ]                 (Euclidean quotient x^2 + 3)
                    [ 0      ]

Here Q_1 and Q_2 are unimodular matrices built from the Euclidean divisions; note that 3x + 2 = 3(x + 3) over Z/(7).
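The two Euclidean steps above can be replayed directly. A sketch using coefficient lists (ascending powers) over Z/(7); the input is the vector A from this slide:

```python
P = 7

def trim(f):
    while f and f[-1] % P == 0:
        f.pop()
    return f

def poly_divmod(f, g):
    """Quotient and remainder of f by g over Z/(7); ascending coeff lists."""
    f = [c % P for c in f]
    g = [c % P for c in g]
    trim(f), trim(g)
    q = [0] * max(len(f) - len(g) + 1, 1)
    inv_lead = pow(g[-1], P - 2, P)        # Fermat inverse of leading coeff
    while f and len(f) >= len(g):
        shift = len(f) - len(g)
        c = (f[-1] * inv_lead) % P
        q[shift] = c
        for i, gc in enumerate(g):
            f[i + shift] = (f[i + shift] - c * gc) % P
        trim(f)
    return trim(q), f

# A = [3x^4 + 2x^3 + 2x^2 + 3 ;  x^3 + 3x^2 + 3x + 2]
r0, r1 = [3, 0, 2, 2, 3], [2, 3, 3, 1]
while r1:
    q, r = poly_divmod(r0, r1)
    print("quotient", q, "remainder", r)
    r0, r1 = r1, r
print("gcd", r0)   # [3, 1], i.e. x + 3, an associate of the slide's 3x + 2
```

The quotients printed are 3x and x^2 + 3, matching the two steps on the slide.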

Iterative algorithm for basis reduction
Matrix case: extension of the Euclidean algorithm

Goal: reduce an input matrix A ∈ K[x]^(n×m).

Algorithm: add monomial multiples of one row to another, to either move a pivot to the left or decrease the degree of a row. Stop when no more transformations are possible. Degree profiles of the working matrix:

    [ 3 3 2 ]  (1)  [ 3 2 2 ]  (2)  [ 2 2 2 ]
    [ 1 1 0 ] ----> [ 1 1 0 ] ----> [ 1 1 0 ]
    [ 3 2 2 ]       [ 3 2 2 ]       [ 3 2 2 ]

(1) add c x^2 times the second row to the first row (appropriate c ∈ K)
(2) add c times the last row to the first row (appropriate c ∈ K)

The final matrix is in weak Popov form (distinct pivot locations).

Iterative algorithm for basis reduction
Cost analysis

Input: A ∈ K[x]^(n×m) of degree d and rank r.
- the number of rows is n;
- the number of times a pivot can move left is O(r);
- the number of times a pivot can decrease in degree is O(d);
- the cost of each simple transformation is O(md) operations from K.

Overall cost: O(n m r d^2) field operations from K.

    [ 3 3 2 ]  (1)  [ 3 2 2 ]  (2)  [ 2 2 2 ]
    [ 1 1 0 ] ----> [ 1 1 0 ] ----> [ 1 1 0 ]
    [ 3 2 2 ]       [ 3 2 2 ]       [ 3 2 2 ]

Mulders and Storjohann. On Lattice Reduction for Polynomial Matrices. Journal of Symbolic Computation, 2003.
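The simple-transformation loop described above can be written out as follows; a naive sketch with my own toy input (the slides show only degree profiles), over Z/(7):

```python
P = 7

def norm(f):
    f = [c % P for c in f]
    while f and f[-1] == 0:
        f.pop()
    return f

def deg(f):
    return len(f) - 1 if f else -1

def pivot(row):
    """(pivot column, pivot degree): rightmost entry of maximal degree."""
    degs = [deg(e) for e in row]
    d = max(degs)
    if d < 0:
        return None  # zero row
    return max(j for j in range(len(row)) if degs[j] == d), d

def row_op(dst, src, c, s):
    """Entrywise dst + c * x^s * src over Z/(7)."""
    out = []
    for f, g in zip(dst, src):
        h = f[:] + [0] * max(0, s + len(g) - len(f))
        for i, gc in enumerate(g):
            h[i + s] = (h[i + s] + c * gc) % P
        out.append(norm(h))
    return out

def weak_popov(A):
    A = [[norm(e) for e in row] for row in A]
    while True:
        seen, clash = {}, None
        for i, row in enumerate(A):
            pv = pivot(row)
            if pv is None:
                continue
            if pv[0] in seen:
                clash = (seen[pv[0]], i)
                break
            seen[pv[0]] = i
        if clash is None:
            return A            # pivots distinct: weak Popov form
        i, k = clash
        (j, di), (_, dk) = pivot(A[i]), pivot(A[k])
        if di < dk:             # reduce the row with the larger pivot degree
            i, k, di, dk = k, i, dk, di
        c = (-A[i][j][-1] * pow(A[k][j][-1], P - 2, P)) % P
        A[i] = row_op(A[i], A[k], c, di - dk)

# Toy input: rows [x^2, x^2 + 1] and [1, x], which share pivot column 1
A = [[[0, 0, 1], [1, 0, 1]],
     [[1], [0, 1]]]
R = weak_popov(A)
# one transformation fixes it: row 0 becomes [x^2 + 6x, 1], pivot column 0
```

Each pass either moves a pivot left or lowers a degree, so by the counting argument on the slide the loop terminates within the stated cost bound.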

Questions regarding the cost of lattice reduction
Asymptotically faster algorithms

Square nonsingular input A ∈ K[x]^(n×n) of degree d. The iterative algorithm costs O(n^3 d^2) operations from K.

Questions:
- How to incorporate fast matrix multiplication? I.e., reduce the cost in n from O(n^3) to O(n^ω), where 2 ≤ ω ≤ 3 is the exponent of matrix multiplication.
- How to incorporate fast polynomial multiplication? I.e., reduce the cost in d from O(d^2) to O(d^(1+ε)), where 0 < ε ≤ 1 depends on the algorithms used.

Goal: reduce lattice reduction to polynomial matrix multiplication. Thus, the target cost is O(n^ω d^(1+ε)), at least up to log factors.

Iterative algorithm for basis reduction
Recursive approach to incorporate polynomial multiplication?

Scalar case: example of A ∈ K[x]^(2×1) with degree 3.

    U = [ 4x + 5          2x       ]    A = [ 2x^3 + 6x^2 + 3x + 2 ]    UA = R = [ x + 3 ]
        [ 3x^2 + 2x + 1   5x^2 + 4 ]        [ 3x^3 + 4x^2 + 3      ]             [ 0     ]

Fact: deg U ≤ deg A. The celebrated recursive half-gcd approach can introduce polynomial multiplication.

General case: example of an A ∈ K[x]^(30×30) with degree 12. The transforming matrix U has entries of degree about 300 (here 299 to 304), while A has entries of degree at most 12 and R has entries of degree at most 4. The degrees in U are too large: all the lower-order coefficients are involved.

Technique 1: Fast minimal approximant basis

Input: G ∈ K[x]^(2n×m), an approximation order σ, and degree constraints δ_1, δ_2, ..., δ_2n for the columns of M.
Output: reduced basis M ∈ K[x]^(2n×2n) such that

    M G ≡ 0  (mod x^σ)

Cost: O(n^ω σ^(1+ε)) operations from K.

Beckermann and Labahn. A uniform approach for the fast computation of matrix-type Padé approximants. SIAM Journal on Matrix Analysis and Applications, 1994.
Giorgi, Jeannerod and Villard. On the complexity of polynomial matrix computations. ISSAC 2003.
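The quadratic-time iterative base case of such order basis computations can be sketched as follows. This is a naive version in the spirit of the Beckermann-Labahn iteration, not the fast divide-and-conquer algorithm of the cited papers; the coefficient-list representation over Z/(7) and the sample input are my own choices.

```python
P = 7

def coeff(f, k):
    return f[k] % P if 0 <= k < len(f) else 0

def padd(f, g, c):
    """f + c*g over Z/(7), on coefficient lists."""
    h = f[:] + [0] * max(0, len(g) - len(f))
    for i, gc in enumerate(g):
        h[i] = (h[i] + c * gc) % P
    return h

def order_basis(G, sigma):
    """Iterative approximant basis: returns M with M*G = 0 mod x^sigma.
    G is an n x m matrix with coefficient-list entries over Z/(7)."""
    n, m = len(G), len(G[0])
    M = [[[1] if j == i else [] for j in range(n)] for i in range(n)]
    d = [0] * n                              # row degrees of M
    for k in range(sigma):
        for j in range(m):
            # residue of row i of M*G at order k, column j
            r = [sum(c * coeff(G[l][j], k - t)
                     for l in range(n)
                     for t, c in enumerate(M[i][l])) % P
                 for i in range(n)]
            nz = [i for i in range(n) if r[i]]
            if not nz:
                continue
            piv = min(nz, key=lambda i: d[i])    # pivot: minimal row degree
            inv = pow(r[piv], P - 2, P)
            for i in nz:
                if i != piv:
                    M[i] = [padd(M[i][l], M[piv][l], (-r[i] * inv) % P)
                            for l in range(n)]
            M[piv] = [[0] + e for e in M[piv]]   # multiply pivot row by x
            d[piv] += 1
    return M

# The 2x1 input from the Euclidean-algorithm slides:
# G = [3x^4 + 2x^3 + 2x^2 + 3 ; x^3 + 3x^2 + 3x + 2]
G = [[[3, 0, 2, 2, 3]], [[2, 3, 3, 1]]]
M = order_basis(G, 9)   # every row of M*G vanishes mod x^9
```

At each order, one row of minimal degree absorbs the nonzero residues of the others and is then multiplied by x, which is what keeps the output degrees small.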

Technique 1: Fast minimal approximant basis
Application to lattice reduction

Input: G = [A; I] ∈ K[x]^(2n×n) (A stacked on top of the identity) and σ = nd + d + 1. Degree constraints nd, ..., nd, 0, ..., 0 for the columns of M.
Output: reduced basis M ∈ K[x]^(2n×2n) such that M G ≡ 0 (mod x^σ).

Fact: for σ ≥ nd + d + 1, a reduced basis R of A (together with the transforming matrix U) appears in M.

Cost: O(n^ω (nd)^(1+ε)) operations from K.

Beckermann and Labahn. A uniform approach for the fast computation of matrix-type Padé approximants. SIAM Journal on Matrix Analysis and Applications, 1994.
Giorgi, Jeannerod and Villard. On the complexity of polynomial matrix computations. ISSAC 2003.

Dual space approach for lattice reduction

Example: for a 2×2 matrix A ∈ K[x]^(2×2), nonsingular modulo x, expand the inverse x-adically:

    A^(-1) = B_0 + B_1 x + B_2 x^2 + ⋯,   with each B_i ∈ K^(2×2).

Fact: R ∈ K[x]^(2×2) is a reduced basis for A precisely when R is nonsingular, R has minimal degrees, and R (B_0 + B_1 x + B_2 x^2 + ⋯) is finite (a polynomial: only finitely many nonzero terms).

Dual space approach for lattice reduction
In conjunction with minimal approximant basis

Original formulation: apply the minimal approximant basis algorithm directly to G = [A; I]; need σ ≥ nd + d + 1.

Dual space formulation: first compute the x-adic expansion A^(-1) = B_0 + B_1 x + B_2 x^2 + ⋯, then apply the minimal approximant basis algorithm to G = [B_0 + B_1 x + B_2 x^2 + ⋯ ; I]; need σ ≥ nd + d + 1.
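A minimal sketch of computing the x-adic expansion B_0 + B_1 x + ⋯ of A^(-1) by coefficient-by-coefficient lifting, for a 2×2 toy matrix of my own choosing (not the slides' example), assuming A is nonsingular modulo x:

```python
P = 7

def coeff(f, k):
    return f[k] % P if 0 <= k < len(f) else 0

def mat_inv2(M):
    """Inverse of a 2x2 scalar matrix over Z/(7)."""
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    inv_det = pow((a * d - b * c) % P, P - 2, P)
    return [[(d * inv_det) % P, (-b * inv_det) % P],
            [(-c * inv_det) % P, (a * inv_det) % P]]

def x_adic_inverse(A, sigma):
    """Coefficients B_0, ..., B_{sigma-1} of the x-adic expansion of A^(-1):
    A * (B_0 + B_1 x + ...) = I mod x^sigma.  Needs A nonsingular mod x."""
    C = mat_inv2([[coeff(A[i][j], 0) for j in range(2)] for i in range(2)])
    B = []
    for k in range(sigma):
        # R_k = coefficient of x^k in  I - A * (B_0 + ... + B_{k-1} x^{k-1})
        R = [[int(i == j and k == 0) for j in range(2)] for i in range(2)]
        for t, Bt in enumerate(B):
            for i in range(2):
                for j in range(2):
                    s = sum(coeff(A[i][l], k - t) * Bt[l][j] for l in range(2))
                    R[i][j] = (R[i][j] - s) % P
        # linear lifting step: B_k = A(0)^(-1) * R_k
        B.append([[sum(C[i][l] * R[l][j] for l in range(2)) % P
                   for j in range(2)] for i in range(2)])
    return B

# Toy matrix A = [[1 + x, x], [x, 1]]  (nonsingular mod x)
A = [[[1, 1], [0, 1]],
     [[0, 1], [1]]]
B = x_adic_inverse(A, 4)   # B[0] is the identity here, since A(0) = I
```

Each step costs one constant-matrix product per previously computed block; the next slides replace this linear progression by quadratic (Newton) and high-order lifting.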

Using a high-order component of the inverse
Scalar example:

    A^(-1) = U / R = (x^4 + 6x^3 + 4x^2 + 3x + 1) / (x + 1)
           = 1 + 2x + 2x^2 + 4x^3 + 4x^4 + 3x^5 + 4x^6 + 3x^7 + 4x^8 + ⋯

Original (dual) minimal approximant basis problem:

    M [ 1 + 2x + 2x^2 + 4x^3 + 4x^4 + 3x^5 + 4x^6 ]  ≡ 0  (mod x^7)
      [ 1                                         ]

New (dual) problem using a high-order component:

    M' [ 3 + 4x ]  ≡ 0  (mod x^3)
       [ 1      ]

Technique 2: High-order component lifting

    A^(-1) = B_0 + B_1 x + B_2 x^2 + B_3 x^3 + B_4 x^4 + B_5 x^5 + B_6 x^6 + B_7 x^7 + ⋯

Standard quadratic lifting (à la Newton iteration) computes every coefficient block 0, 1, 2, 3, 4, 5, ...; high-order component lifting computes only the few blocks needed to reach a single high-order component.

- The input A has degree d.
- A high-order component at order Ω(nd), of degree d, is needed.
- Cost reduced from O(n^ω (nd)^(1+ε)) to O(n^ω d^(1+ε) log n).

Storjohann. High-order lifting and integrality certification. Journal of Symbolic Computation, 2003.
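The quadratic lifting step is just Newton iteration for a truncated inverse. The sketch below reproduces the scalar series (x^4 + 6x^3 + 4x^2 + 3x + 1)/(x + 1) from the earlier slide and reads off the high-order component 3 + 4x at orders x^7, x^8. The representation is my own, and this illustrates quadratic lifting only, not the high-order lifting algorithm itself.

```python
P = 7

def pmul(f, g, trunc):
    """Product of coefficient lists mod x^trunc over Z/(7)."""
    h = [0] * min(len(f) + len(g) - 1, trunc)
    for i, fc in enumerate(f):
        for j, gc in enumerate(g):
            if i + j < trunc:
                h[i + j] = (h[i + j] + fc * gc) % P
    return h

def series_inverse(f, sigma):
    """1/f mod x^sigma by Newton iteration: i <- i*(2 - f*i)."""
    assert f[0] % P != 0
    inv = [pow(f[0], P - 2, P)]
    k = 1
    while k < sigma:
        k = min(2 * k, sigma)        # double the order each round
        t = pmul(f, inv, k)          # f*i = 1 + O(x^{k/2})
        t = [(-c) % P for c in t]
        t[0] = (t[0] + 2) % P        # 2 - f*i
        inv = pmul(inv, t, k)
    return inv

U = [1, 3, 4, 6, 1]   # ascending coefficients of x^4 + 6x^3 + 4x^2 + 3x + 1
R = [1, 1]            # x + 1
series = pmul(U, series_inverse(R, 9), 9)
print(series)         # [1, 2, 2, 4, 4, 3, 4, 3, 4]
# high-order component at orders x^7, x^8: coefficients 3, 4 -> 3 + 4x
```

High-order lifting keeps only selected blocks of such an expansion, which is what brings the cost down to nearly linear in d.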

Normalization of row reduced forms

    R = [ 1 1 1 1 ]  (1)  W = [ 1 1 1 1 ]  (2)  P = [ 1 1 1 1 ]
        [ 2 2 2 2 ] ----->    [ 2 1 1 1 ] ----->    [ 2 1 1 0 ]
        [ 2 2 2 2 ]           [ 1 2 2 1 ]           [ 1 2 2 0 ]
        [ 4 4 4 4 ]           [ 3 4 3 3 ]           [ 1 4 1 0 ]

(1) It suffices to work only with the leading coefficient matrix LC(R).
(2) Based on the observation that postmultiplying by a diagonal matrix X of powers of x (e.g., X = diag(1, x, x^2, x^3)) shifts the column degrees: row reduce W X, apply step (1), then postmultiply by X^(-1).

Sarkar and Storjohann. Normalization of row reduced matrices. ISSAC 2011.

Technique 3: x-basis decomposition
Used for derandomization of the fast lattice reduction algorithm.

Problem: what if A is singular modulo x? Then the x-adic expansion A^(-1) = B_0 + B_1 x + B_2 x^2 + ⋯ does not exist.

Solution: compute a triangular x-basis decomposition A = U H, with det U nonzero modulo x and det H a power of x. In the example, a 3×3 matrix A over Z/(7) factors so that

    det A = (det U)(det H) = (x^2 + 4) · x^4.

Gupta, Sarkar, Storjohann and Valeriote. Triangular x-basis decompositions and derandomization of linear algebra algorithms over K[x]. Journal of Symbolic Computation, 2012.

Conclusions

Deterministic algorithm for computing the Popov form of a nonsingular matrix.

Input: nonsingular A ∈ K[x]^(n×n).
Output: canonical Popov form P (a reduced lattice basis) of A; for the running 4×4 example, the degree profile of P is

    [ 1 1 1 1 ]
    [ 2 1 1 0 ]
    [ 1 2 2 0 ]
    [ 1 4 1 0 ]

Cost: O(n^ω d^(1+ε) (log n)^2) operations from K.

Open problem: extension to matrices of arbitrary shape and rank?

Sarkar and Storjohann. Normalization of row reduced matrices. ISSAC 2011.
Wei Zhou. PhD thesis, University of Waterloo, 2012.
Zhou and Labahn. Computing column bases of polynomial matrices. ISSAC 2013.