Algorithms for exact (dense) linear algebra
1 Algorithms for exact (dense) linear algebra
Gilles Villard, CNRS, Laboratoire LIP, ENS Lyon
Montagnac-Montpezat, June 3, 2005
2 Introduction
3 Problem: study of complexity estimates for basic problems in exact linear algebra over K[x] and Z.
Algorithms considered: deterministic, Monte Carlo (non-certified) and Las Vegas (certified) randomized.
Measure: time complexity up to logarithmic factors, e.g. O~(n^c) = O(n^c log^α n), and space complexity.
4 Models.
Algebraic complexity: K a commutative field, algebraic RAM with operations +, -, *, /; over K[x], count arithmetic operations in K.
Bit complexity: over Z or Q, count the bitwise computational cost.
5 Motivations:
- complexity estimates over concrete entry domains;
- a better understanding of linear algebra in bit complexity;
- improved algorithms for exact (or accurate) results.
6 The talk:
Introduction
I - Known reductions between problems
II - Divide-double and conquer
III - Matrix polynomials
IV - Integer matrices
Conclusion
9 Algebraic complexity over K [survey and more in Bürgisser et al. 1997, Ch. 16]. Asymptotic equivalence to matrix multiplication: multiplying two n×n matrices A and B costs O(n^ω), with O(n^3) classically or O(n^{2.376}) [Coppersmith & Winograd]. Determinant, inversion, rank, characteristic polynomial, ... all admit algorithms in O~(n^ω): LinSys ≡ MM ≡ Det.
11 Example 1. Det_K ≡ Inversion_K.
Proof. Derivative inequality [Baur and Strassen, 1983]. Expanding det A = a_11 det A_{2..n,2..n} + ..., we get ∂(det A)/∂a_11 = det A_{2..n,2..n}. More generally, the adjoint matrix A* with entries (A*)_{j,i} = ∂(det A)/∂a_{i,j} satisfies A^{-1} = A*/det A: every entry of the inverse is a partial derivative of the determinant, hence computable at essentially the cost of the determinant.
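To make the identity concrete, here is an illustrative pure-Python sketch (not from the slides; the 3×3 matrix is made-up test data). It forms the adjoint entrywise as the partial derivatives ∂(det A)/∂a_{i,j}, i.e. as signed cofactors, and checks A · (A*/det A) = I. Cofactor expansion costs exponential time, so this demonstrates the identity, not the Baur-Strassen complexity bound.

```python
from fractions import Fraction

def det(M):
    # exact determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def adjugate(M):
    # (A*)[j][i] = d(det A)/d(a_ij) = signed minor (cofactor) of a_ij
    n = len(M)
    adj = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            minor = [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
            adj[j][i] = (-1) ** (i + j) * det(minor)
    return adj

A = [[Fraction(v) for v in row] for row in [[2, 1, 0], [1, 3, 1], [0, 1, 2]]]
d, adj = det(A), adjugate(A)
inv = [[adj[i][j] / d for j in range(3)] for i in range(3)]
prod = [[sum(A[i][k] * inv[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
assert prod == [[Fraction(i == j) for j in range(3)] for i in range(3)]
```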
13 This reduction does not carry over to the polynomial or bit complexity case. Let x, y be two vectors with fixed-size entries and c an n-bit integer. Computing φ = c x^T y takes O(n) bit operations, but computing the gradient [∂φ/∂x_i]_{1≤i≤n} = c y takes O(n^2) bit operations, since the output alone has n entries of n bits each.
19 Example 2. MM_K ≡ CharPoly_K.
Proof sketch. Minimum polynomial [Keller-Gehrig, 1985]. For A ∈ K^{n×n} and u ∈ K^n, build the Krylov sequence by repeated squaring:
A · [u] → [u, Au]
A^2 · [u, Au] → [u, Au, A^2 u, A^3 u]
A^4 · [u, Au, A^2 u, A^3 u] → ... → [u, Au, A^2 u, ..., A^d u]
Each doubling step is a single matrix product, so O(log n) products suffice. The first linear dependency A^d u + c_{d-1} A^{d-1} u + ... + c_0 u = 0 yields the minimum polynomial π(x) = x^d + c_{d-1} x^{d-1} + ... + c_0.
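An illustrative pure-Python sketch of the construction (not from the slides; the 2×2 matrix and vector are made-up data): the Krylov matrix is doubled with one matrix product per step, and the first linear dependency is then extracted by exact elimination over Q.

```python
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def krylov(A, u):
    # columns u, Au, ..., A^n u; the doubling K <- [K | A^(2^i) K] uses
    # one matrix product per step (the repeated-squaring trick above)
    n = len(A)
    K = [[x] for x in u]
    P = A
    while len(K[0]) <= n:
        AK = mat_mul(P, K)
        K = [K[i] + AK[i] for i in range(n)]
        P = mat_mul(P, P)
    return [row[:n + 1] for row in K]

def min_poly(A, u):
    # coefficients [c_0, ..., c_{d-1}, 1] of the minimum polynomial of u:
    # the first linear dependency among u, Au, A^2 u, ...
    n = len(A)
    K = krylov(A, u)
    vecs = [[K[i][k] for i in range(n)] for k in range(n + 1)]
    basis = []  # (pivot index, reduced vector, combination in terms of vecs)
    for d, v in enumerate(vecs):
        w, combo = v[:], [Fraction(0)] * (n + 1)
        combo[d] = Fraction(1)
        for piv, bw, bc in basis:
            if w[piv] != 0:
                f = w[piv] / bw[piv]
                w = [x - f * y for x, y in zip(w, bw)]
                combo = [x - f * y for x, y in zip(combo, bc)]
        piv = next((i for i, x in enumerate(w) if x != 0), None)
        if piv is None:
            return [c / combo[d] for c in combo[:d + 1]]
        basis.append((piv, w, combo))

A = [[Fraction(0), Fraction(1)], [Fraction(-2), Fraction(3)]]
u = [Fraction(1), Fraction(0)]
pi = min_poly(A, u)  # x^2 - 3x + 2 = (x - 1)(x - 2)
assert pi == [2, -3, 1]
```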
20 Again, this does not carry over to the polynomial or bit complexity case: for an integer matrix A with entries of size log‖A‖, the power A^{n/2} has entries of size O(n log‖A‖), so a single multiplication by A^{n/2} already costs O~(n^ω · n log‖A‖) = O~(n^{ω+1} log‖A‖) bit operations.
23 Impact of data size? Example: determinant computation. The output size is nd (degree of det A(x) for a degree-d matrix over K[x]) or O~(n log‖A‖) bits (over Z). An evaluation/interpolation or homomorphic (Chinese remainder) scheme must fix nd points or O~(n log‖A‖) bits a priori, each image costing n^ω. Complexity estimates: O~(n^ω · nd) = O~(n^{ω+1} d) over K[x], and O~(n^{ω+1} log‖A‖) over Z.
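The evaluation/interpolation scheme can be sketched in a few lines of pure Python (illustrative only; the 2×2 polynomial matrix is a made-up example). We evaluate the degree-d matrix at nd + 1 points, take exact determinants, and interpolate det A(x) back.

```python
from fractions import Fraction

def det(M):
    # exact determinant by Gaussian elimination over Q
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if M[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            d = -d
        d *= M[k][k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            M[i] = [x - f * y for x, y in zip(M[i], M[k])]
    return d

def poly_eval(p, x):  # p = [p_0, p_1, ...], low degree first; Horner scheme
    acc = Fraction(0)
    for c in reversed(p):
        acc = acc * x + c
    return acc

def det_poly_matrix(A, deg):
    # n*deg + 1 evaluation points fix det A(x) a priori
    n = len(A)
    N = n * deg + 1
    xs = [Fraction(k) for k in range(N)]
    ys = [det([[poly_eval(A[i][j], x) for j in range(n)] for i in range(n)])
          for x in xs]
    coeffs = [Fraction(0)] * N
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        basis, denom = [Fraction(1)], Fraction(1)  # prod_{m != k} (x - x_m)
        for m, xm in enumerate(xs):
            if m == k:
                continue
            new = [Fraction(0)] * (len(basis) + 1)
            for t, c in enumerate(basis):
                new[t + 1] += c
                new[t] -= xm * c
            basis, denom = new, denom * (xk - xm)
        for t, c in enumerate(basis):
            coeffs[t] += yk / denom * c
    return coeffs

# A(x) = [[x, 1], [1, x]], so det A(x) = x^2 - 1
A = [[[0, 1], [1]], [[1], [0, 1]]]
assert det_poly_matrix(A, 1) == [-1, 0, 1]
```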
24 Notation: MM(n, d) = O~(n^ω d) is the cost of multiplying n×n matrices of degree d, and MM(n, log‖A‖) = O~(n^ω log‖A‖) that of multiplying n×n integer matrices (in the general case, consider generalized cost functions MM). The previous analysis shows that the determinant may be computed in O~(n · MM(n, d)) or O~(n · MM(n, log‖A‖)) operations, i.e. in about n corresponding matrix products.
25 Fundamentals of dense linear algebra over K[x] or Z:
- System solution (Hensel lifting) [Moenck & Carter 79, Dixon 82]: O(n^ω) for the inverse mod p, then O~(n^2 log‖A‖) per lifting step, i.e. O~(n^3 log‖A‖) overall.
- Determinant, inversion, nullspace, rank (*), ... [Edmonds, Bareiss 68, Moenck & Carter 79]: O~(n · MM(n, log‖A‖)), deterministic.
- Frobenius form (minimum and characteristic polynomials) [Giesbrecht 93, Giesbrecht & Storjohann 02]: O~(n · MM(n, log‖A‖)), Las Vegas.
- Hermite and Smith forms (diophantine systems) [Kannan & Bachem 79, Domich 85, Giesbrecht 95, Storjohann 96-00]: O~(n · MM(n, log‖A‖)), deterministic.
(*): Monte Carlo for the rank.
28 So far: bit complexity ≈ algebraic complexity × output size. Is this bound pessimistic? Clue: the full output length may not be necessary a priori, i.e. at the beginning of the computation, but only at its very end.
29 Reduction to matrix multiplication. Question: which dense linear algebra problems over K[x] or Z can be solved by algorithms using about the same number of operations as for multiplying two corresponding matrices, plus the input/output size?
30 Part II - Divide-double and conquer
34 Triangularization in log_2(n) steps: premultiplying by successive block elimination matrices reduces A to triangular form, L'' L' L A = T.
36 Divide and conquer [Strassen 1969, Schönhage 1973, Bunch & Hopcroft 1974]:
[ I 0 ; -BA^{-1} I ] · [ A C ; B D ] = [ A C ; 0 D - BA^{-1}C ]
At the next step, the dimension is divided by two, but the entry size is multiplied by n/2.
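The elimination step above turns into a recursive inversion routine. Here is an illustrative pure-Python sketch (assuming, as holds generically, that every leading principal block met in the recursion is invertible), built around the Schur complement S = D - BA^{-1}C:

```python
from fractions import Fraction

def mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def neg(A):
    return [[-a for a in row] for row in A]

def inv(M):
    # divide and conquer via the Schur complement S = D - B A^{-1} C:
    # M^{-1} = [[A^{-1} + A^{-1}C S^{-1} B A^{-1},  -A^{-1}C S^{-1}],
    #           [-S^{-1} B A^{-1},                   S^{-1}       ]]
    n = len(M)
    if n == 1:
        return [[Fraction(1) / M[0][0]]]
    k = n // 2
    A = [row[:k] for row in M[:k]]
    C = [row[k:] for row in M[:k]]
    B = [row[:k] for row in M[k:]]
    D = [row[k:] for row in M[k:]]
    Ai = inv(A)
    AiC, BAi = mul(Ai, C), mul(B, Ai)
    Si = inv(sub(D, mul(B, AiC)))
    TL = add(Ai, mul(AiC, mul(Si, BAi)))
    TR, BL = neg(mul(AiC, Si)), neg(mul(Si, BAi))
    return [TL[i] + TR[i] for i in range(k)] + [BL[i] + Si[i] for i in range(n - k)]

M = [[Fraction(v) for v in row] for row in [[2, 1, 0], [1, 3, 1], [0, 1, 2]]]
P = mul(M, inv(M))
assert P == [[Fraction(i == j) for j in range(3)] for i in range(3)]
```

Over K[x] or Z this is exactly where the entry-size blow-up of the slide shows up: the Schur complement's entries grow at each level of the recursion.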
37 Divide-double and conquer [Jeannerod & Villard 2002, Storjohann 2002]: the dimension is divided by two while the entry size is at most doubled. Cost: Σ_{i=1}^{log n} (n/2^i)^ω · 2^i d = O(n^ω d), a geometrically decreasing sum since ω > 2.
38 (Left) nullspace: given A, find N with N A = 0.
40 Gaussian elimination (Schur complement) yields a nullspace matrix N_g with large entries; however, one can instead choose, and compute over K[x], a different nullspace matrix N with much smaller entries.
42 Example: inversion and nullspace. Let A(x) ∈ K[x]^{2n×n} of degree d.
Divide and conquer Gaussian elimination: N(x) A(x) = 0 with N(x) ∈ K[x]^{n×2n} of degree O(nd); total size O(n^3 d).
Divide-double and conquer, minimal module bases: M(x) A(x) = 0 with M(x) ∈ K[x]^{n×2n} whose row degrees sum to O(nd), e.g. generically degree d (Kronecker indices); total size O(n^2 d).
43-45 (Example ctd) Stacking for the nullspace: nullspace vectors can be traded between number and degree:
- n/2 nullspace vectors of degree less than 2d,
- n/4 nullspace vectors of degree less than 4d,
- in general, p/2 nullspace vectors of degree less than 2nd/p,
each family stacking into a matrix N with N A = 0 of total size O(n^2 d).
46 Theorem [Storjohann & Villard, 2005]: the rank r of M ∈ K[x]^{n×n} of degree d, together with n - r independent nullspace vectors, can be computed in O~(MM(n, d)) = O~(n^ω d) operations in K by a randomized Las Vegas (certified) algorithm.
47 Part III - Matrix polynomials
48 Matrix polynomial inversion, degree d over K[x].
Over K: input/output size n^2; cost n^3 or n^ω; A^{-1} = B.
Over K(x): output degree nd, hence output size n^3 d; cost (e.g. by Newton iteration): n^4 d or n^{ω+1} d;
A^{-1}(x) = A*(x)/det A(x) = (B_0 + x B_1 + ... + x^{nd-1} B_{nd-1})/det A(x).
50 Using minimal bases.
Theorem [Jeannerod & Villard, 2005]: the inverse of a generic polynomial matrix of degree d can be computed in essentially optimal time O~(n^3 d).
Corollary: for a generic A ∈ K^{n×n}, the matrix powers A, A^2, ..., A^n can be computed in essentially optimal time O~(n^3).
Hint: (I - xA)^{-1} = Σ_{i≥0} x^i A^i.
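The hint can be checked mechanically. An illustrative pure-Python sketch (not from the slides) inverts S(x) = I - xA as a truncated power series by the Newton iteration X <- X(2I - SX) mod x^{2m}, which doubles the precision each round; the coefficient of x^i in the result is exactly A^i.

```python
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def pm_mul(P, Q, trunc):
    # product of polynomial matrices (lists of coefficient matrices), mod x^trunc
    n = len(P[0])
    R = [[[Fraction(0)] * n for _ in range(n)] for _ in range(trunc)]
    for i, Pi in enumerate(P):
        for j, Qj in enumerate(Q):
            if i + j >= trunc:
                continue
            M = mat_mul(Pi, Qj)
            for r in range(n):
                for c in range(n):
                    R[i + j][r][c] += M[r][c]
    return R

def series_inverse(S, order):
    # Newton iteration X <- X (2I - S X); requires S(0) = I,
    # which holds for S = I - xA
    n = len(S[0])
    I = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    X, m = [I], 1
    while m < order:
        m = min(2 * m, order)
        E = pm_mul(S[:m], X, m)                            # S X mod x^m
        T = [[[-e for e in row] for row in c] for c in E]  # -S X
        for r in range(n):
            T[0][r][r] += 2                                # 2I - S X
        X = pm_mul(X, T, m)
    return X

A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(4)]]
I2 = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
S = [I2, [[-a for a in row] for row in A]]                 # S(x) = I - xA
X = series_inverse(S, 4)
assert X[2] == mat_mul(A, A) and X[3] == mat_mul(A, mat_mul(A, A))
```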
51 Using high order lifting: unimodular transforms reduce A(x) to its diagonal Smith form diag(s_1(x), s_2(x), ..., s_n(x)).
Theorem [Storjohann, 2003]: the Smith normal form, hence the determinant, of M ∈ K[x]^{n×n} of degree d can be computed in O~(MM(n, d)) = O~(n^ω d) operations in K by a randomized Las Vegas (certified) algorithm.
52 Using high order lifting and minimal bases (the slide shows an example of a polynomial matrix reduced to one with small row degrees).
Theorem [Giorgi, Jeannerod & Villard, 2003]: a basis (n×n, degree d) of a K[x]-module can be reduced in O~(MM(n, d)) = O~(n^ω d) operations in K by a randomized Las Vegas (certified) algorithm (cf. also the matrix gcd and matrix Padé approximants).
53 Part IV - Integer matrices
54 Fact: the known improvements in bit complexity come from corresponding improvements over polynomials (or truncated power series).
56 Example 1. Correspondence between K[x] and Z.
Multiplication: M(d) = O(d log d log log d) = O~(d) for a degree-d product in K[x]; M(log‖a‖) = M(s) = O(s log s log log s) = O~(s) for an s-bit product in Z.
Gcd [Knuth, Schönhage]: O(M(d) log d) in K[x]; O(M(s) log s) in Z.
58 Example 2. Characteristic polynomial: correspondence between an abstract ring R and Z for n×n matrices. The best known estimates for the determinant without divisions carry over to the integer case, and lead to the characteristic polynomial. In the algebraic, division-free setting over R one tries to limit the degree increase; in bit complexity over Z one tries to limit the size increase.
59 Strassen's transformation for eliminating divisions works over K[[x]]: substitute A → I + x(A - I). Reducing the cost over K[[x]] means reducing the bit complexity.
Determinant without divisions over a ring R (Det_K ⇒ Det_R):
- [Strassen 1973, Vermeidung von Divisionen; (Le Verrier) Samuelson/Berkowitz 1984; Chistov 1985]: O~(n^{ω+1})
- [Kaltofen, 1992]: O~(n^{3+1/2}), O(n^{3.03})
- [Kaltofen & Villard, 2004]: O~(n^{3+1/5}), O(n^{2.7})
65 Baby steps/giant steps and elimination of divisions, over Z [Shanks; Kaltofen 1992; Kaltofen & Villard 2004]. Goal: compute u^T A^k v for k = 0, ..., n-1, with A ∈ Z^{n×n} and u, v ∈ Z^n.
- Baby steps: v, Av, ..., A^{√n - 1} v (matrix-vector products);
- B = A^{√n} (matrix-matrix products);
- Giant steps: u^T, u^T B, ..., u^T B^{√n - 1} (vector-matrix products);
- Combine: u^T B^i A^j v = u^T A^{i√n + j} v for i, j = 0, ..., √n - 1.
This yields the characteristic polynomial in time O~(n^{3+1/2} log‖A‖), or O~(n^{2.7} log‖A‖) with fast matrix multiplication.
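An illustrative pure-Python sketch of the scheme (made-up 2×2 data; for brevity B = A^r with r = ⌈√n⌉ is computed by a plain loop instead of repeated squaring):

```python
import math

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def vec_mat(u, A):
    return [sum(u[i] * A[i][j] for i in range(len(u))) for j in range(len(A))]

def projections(A, u, v):
    # u^T A^k v for k = 0..n-1 via baby steps / giant steps
    n = len(A)
    r = math.isqrt(n - 1) + 1          # r = ceil(sqrt(n)), so r * r >= n
    baby = [v]
    for _ in range(r - 1):
        baby.append(mat_vec(A, baby[-1]))     # A^j v, j = 0..r-1
    B = A
    for _ in range(r - 1):
        B = mat_mul(B, A)                     # B = A^r
    giant = [u]
    for _ in range(r - 1):
        giant.append(vec_mat(giant[-1], B))   # u^T B^i, i = 0..r-1
    out = []
    for k in range(n):
        i, j = divmod(k, r)                   # k = i*r + j
        out.append(sum(g * b for g, b in zip(giant[i], baby[j])))
    return out

A = [[0, 1], [-2, 3]]
u, v = [1, 0], [0, 1]
vals = projections(A, u, v)
# direct check against the naive computation of u^T A^k v
w, direct = v[:], []
for _ in range(len(A)):
    direct.append(sum(x * y for x, y in zip(u, w)))
    w = mat_vec(A, w)
assert vals == direct
```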
70 Example 3. Correspondence between K[x] and Z: linear system solution by lifting.
Over K[x], for A(x) Y(x) = b(x): B = A^{-1} mod x; y_{i+1} = B r_i mod x; r_{i+1} = (r_i - A(x) y_{i+1})/x, starting from r_0 = b(x).
Over Z, for A Y = b: B = A^{-1} mod p; y_{i+1} = B r_i mod p; r_{i+1} = (r_i - A y_{i+1})/p, starting from r_0 = b.
LinSys_{K[x]}(n, d) = O~(MM(n, d)) and LinSys_Z(n, log‖A‖) = O~(MM(n, log‖A‖)) [Storjohann 2005].
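The integer-side lifting can be sketched in pure Python (illustrative; p = 7 and the 2×2 system are made-up data, and the inverse mod p is computed by plain Gauss-Jordan rather than fast matrix multiplication):

```python
def inv_mod(A, p):
    # Gauss-Jordan inverse of A modulo the prime p
    n = len(A)
    M = [[a % p for a in row] + [int(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        piv = next(i for i in range(k, n) if M[i][k] % p)
        M[k], M[piv] = M[piv], M[k]
        f = pow(M[k][k], -1, p)
        M[k] = [x * f % p for x in M[k]]
        for i in range(n):
            if i != k and M[i][k]:
                g = M[i][k]
                M[i] = [(x - g * y) % p for x, y in zip(M[i], M[k])]
    return [row[n:] for row in M]

def lift_solve(A, b, p, steps):
    # p-adic lifting: y_{i+1} = B r_i mod p, r_{i+1} = (r_i - A y_{i+1}) / p
    n = len(A)
    B = inv_mod(A, p)
    r, y, pk = b[:], [0] * n, 1
    for _ in range(steps):
        yi = [sum(B[i][j] * r[j] for j in range(n)) % p for i in range(n)]
        Ayi = [sum(A[i][j] * yi[j] for j in range(n)) for i in range(n)]
        r = [(rv - av) // p for rv, av in zip(r, Ayi)]  # exact division
        y = [yv + pk * d for yv, d in zip(y, yi)]
        pk *= p
    return y, pk  # A y = b (mod pk)

A, b = [[2, 1], [1, 3]], [3, 4]   # exact solution y = (1, 1)
y, pk = lift_solve(A, b, 7, 5)
assert all((sum(A[i][j] * y[j] for j in range(2)) - b[i]) % pk == 0 for i in range(2))
```

When the true solution is rational rather than integral, the final step of Dixon's algorithm recovers it from the p-adic expansion by rational reconstruction, which this sketch omits.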
71 Theorem [Storjohann, 2005]: the Smith normal form of A ∈ Z^{n×n}, hence the determinant, can be computed in O~(MM(n, log‖A‖)) = O~(n^ω log‖A‖) bit operations by a randomized Las Vegas (certified) algorithm.
Proof ingredients: fast system solution (correction of the residue / p-adic lifting) and divide-double and conquer (based on the sizes of the invariant factors).
72 Conclusion
73 Classical open problems, algebraic complexity:
- Determinant without divisions in O~(n^ω)?
- Reduction of matrix multiplication to system solution?
Open problems, K[x] or bit complexity:
- Small nullspace bases in O~(n^ω log‖A‖) over Z?
- Essentially optimal inversion?
- New reductions to matrix multiplication?
More informationMath 313 Chapter 1 Review
Math 313 Chapter 1 Review Howard Anton, 9th Edition May 2010 Do NOT write on me! Contents 1 1.1 Introduction to Systems of Linear Equations 2 2 1.2 Gaussian Elimination 3 3 1.3 Matrices and Matrix Operations
More informationFraction-free Row Reduction of Matrices of Skew Polynomials
Fraction-free Row Reduction of Matrices of Skew Polynomials Bernhard Beckermann Laboratoire d Analyse Numérique et d Optimisation Université des Sciences et Technologies de Lille France bbecker@ano.univ-lille1.fr
More informationFast Multivariate Power Series Multiplication in Characteristic Zero
Fast Multivariate Power Series Multiplication in Characteristic Zero Grégoire Lecerf and Éric Schost Laboratoire GAGE, École polytechnique 91128 Palaiseau, France E-mail: lecerf,schost@gage.polytechnique.fr
More informationFast algorithms for polynomials and matrices Part 6: Polynomial factorization
Fast algorithms for polynomials and matrices Part 6: Polynomial factorization by Grégoire Lecerf Computer Science Laboratory & CNRS École polytechnique 91128 Palaiseau Cedex France 1 Classical types of
More informationProcessor Efficient Parallel Solution of Linear Systems over an Abstract Field*
Processor Efficient Parallel Solution of Linear Systems over an Abstract Field* Erich Kaltofen** Department of Computer Science, University of Toronto Toronto, Canada M5S 1A4; Inter-Net: kaltofen@cstorontoedu
More informationLinear Algebra, Summer 2011, pt. 2
Linear Algebra, Summer 2, pt. 2 June 8, 2 Contents Inverses. 2 Vector Spaces. 3 2. Examples of vector spaces..................... 3 2.2 The column space......................... 6 2.3 The null space...........................
More informationGRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.
GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,
More informationFast reversion of power series
Fast reversion of power series Fredrik Johansson November 2011 Overview Fast power series arithmetic Fast composition and reversion (Brent and Kung, 1978) A new algorithm for reversion Implementation results
More informationHow to find good starting tensors for matrix multiplication
How to find good starting tensors for matrix multiplication Markus Bläser Saarland University Matrix multiplication z,... z,n..... z n,... z n,n = x,... x,n..... x n,... x n,n y,... y,n..... y n,... y
More informationPreliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012
Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.
More informationMath Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88
Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant
More informationOn Newton-Raphson iteration for multiplicative inverses modulo prime powers
On Newton-Raphson iteration for multiplicative inverses modulo prime powers Jean-Guillaume Dumas To cite this version: Jean-Guillaume Dumas. On Newton-Raphson iteration for multiplicative inverses modulo
More informationLesson 3. Inverse of Matrices by Determinants and Gauss-Jordan Method
Module 1: Matrices and Linear Algebra Lesson 3 Inverse of Matrices by Determinants and Gauss-Jordan Method 3.1 Introduction In lecture 1 we have seen addition and multiplication of matrices. Here we shall
More informationYale University Department of Mathematics Math 350 Introduction to Abstract Algebra Fall Midterm Exam Review Solutions
Yale University Department of Mathematics Math 350 Introduction to Abstract Algebra Fall 2015 Midterm Exam Review Solutions Practice exam questions: 1. Let V 1 R 2 be the subset of all vectors whose slope
More informationNOTES IN COMMUTATIVE ALGEBRA: PART 2
NOTES IN COMMUTATIVE ALGEBRA: PART 2 KELLER VANDEBOGERT 1. Completion of a Ring/Module Here we shall consider two seemingly different constructions for the completion of a module and show that indeed they
More informationComputing with polynomials: Hensel constructions
Course Polynomials: Their Power and How to Use Them, JASS 07 Computing with polynomials: Hensel constructions Lukas Bulwahn March 28, 2007 Abstract To solve GCD calculations and factorization of polynomials
More informationEXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)
EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily
More information2x 1 7. A linear congruence in modular arithmetic is an equation of the form. Why is the solution a set of integers rather than a unique integer?
Chapter 3: Theory of Modular Arithmetic 25 SECTION C Solving Linear Congruences By the end of this section you will be able to solve congruence equations determine the number of solutions find the multiplicative
More informationFast algorithms for factoring polynomials A selection. Erich Kaltofen North Carolina State University google->kaltofen google->han lu
Fast algorithms for factoring polynomials A selection Erich Kaltofen North Carolina State University google->kaltofen google->han lu Overview of my work 1 Theorem. Factorization in K[x] is undecidable
More informationMATH 425-Spring 2010 HOMEWORK ASSIGNMENTS
MATH 425-Spring 2010 HOMEWORK ASSIGNMENTS Instructor: Shmuel Friedland Department of Mathematics, Statistics and Computer Science email: friedlan@uic.edu Last update April 18, 2010 1 HOMEWORK ASSIGNMENT
More informationEfficient parallel factorization and solution of structured and unstructured linear systems
Journal of Computer and System Sciences 71 (2005) 86 143 wwwelseviercom/locate/jcss Efficient parallel factorization and solution of structured and unstructured linear systems John H Reif Department of
More informationStat 159/259: Linear Algebra Notes
Stat 159/259: Linear Algebra Notes Jarrod Millman November 16, 2015 Abstract These notes assume you ve taken a semester of undergraduate linear algebra. In particular, I assume you are familiar with the
More informationScientific Computing: Dense Linear Systems
Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)
More informationSolving structured linear systems with large displacement rank
Solving structured linear systems with large displacement rank Alin Bostan a,1 Claude-Pierre Jeannerod b Éric Schost c,2 a Algorithms Project, INRIA Paris-Rocquencourt, 78153 Le Chesnay Cedex, France b
More informationTHE COMPLEXITY OF THE QUATERNION PROD- UCT*
1 THE COMPLEXITY OF THE QUATERNION PROD- UCT* Thomas D. Howell Jean-Claude Lafon 1 ** TR 75-245 June 1975 2 Department of Computer Science, Cornell University, Ithaca, N.Y. * This research was supported
More informationElementary maths for GMT
Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1
More information9. Numerical linear algebra background
Convex Optimization Boyd & Vandenberghe 9. Numerical linear algebra background matrix structure and algorithm complexity solving linear equations with factored matrices LU, Cholesky, LDL T factorization
More informationx y B =. v u Note that the determinant of B is xu + yv = 1. Thus B is invertible, with inverse u y v x On the other hand, d BA = va + ub 2
5. Finitely Generated Modules over a PID We want to give a complete classification of finitely generated modules over a PID. ecall that a finitely generated module is a quotient of n, a free module. Let
More informationDiscrete Math, Second Problem Set (June 24)
Discrete Math, Second Problem Set (June 24) REU 2003 Instructor: Laszlo Babai Scribe: D Jeremy Copeland 1 Number Theory Remark 11 For an arithmetic progression, a 0, a 1 = a 0 +d, a 2 = a 0 +2d, to have
More informationDecoding Interleaved Gabidulin Codes using Alekhnovich s Algorithm 1
Fifteenth International Workshop on Algebraic and Combinatorial Coding Theory June 18-24, 2016, Albena, Bulgaria pp. 255 260 Decoding Interleaved Gabidulin Codes using Alekhnovich s Algorithm 1 Sven Puchinger
More informationNumerical Methods I Solving Square Linear Systems: GEM and LU factorization
Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,
More informationFast algorithms for polynomials and matrices Part 2: polynomial multiplication
Fast algorithms for polynomials and matrices Part 2: polynomial multiplication by Grégoire Lecerf Computer Science Laboratory & CNRS École polytechnique 91128 Palaiseau Cedex France 1 Notation In this
More informationChapter 4 - MATRIX ALGEBRA. ... a 2j... a 2n. a i1 a i2... a ij... a in
Chapter 4 - MATRIX ALGEBRA 4.1. Matrix Operations A a 11 a 12... a 1j... a 1n a 21. a 22.... a 2j... a 2n. a i1 a i2... a ij... a in... a m1 a m2... a mj... a mn The entry in the ith row and the jth column
More informationMAT Linear Algebra Collection of sample exams
MAT 342 - Linear Algebra Collection of sample exams A-x. (0 pts Give the precise definition of the row echelon form. 2. ( 0 pts After performing row reductions on the augmented matrix for a certain system
More information2x 1 7. A linear congruence in modular arithmetic is an equation of the form. Why is the solution a set of integers rather than a unique integer?
Chapter 3: Theory of Modular Arithmetic 25 SECTION C Solving Linear Congruences By the end of this section you will be able to solve congruence equations determine the number of solutions find the multiplicative
More informationMatrix Theory. A.Holst, V.Ufnarovski
Matrix Theory AHolst, VUfnarovski 55 HINTS AND ANSWERS 9 55 Hints and answers There are two different approaches In the first one write A as a block of rows and note that in B = E ij A all rows different
More informationPolynomial evaluation and interpolation on special sets of points
Polynomial evaluation and interpolation on special sets of points Alin Bostan and Éric Schost Laboratoire STIX, École polytechnique, 91128 Palaiseau, France Abstract We give complexity estimates for the
More information