Lecture 2: Lattices and Bases

CSE 206A: Lattice Algorithms and Applications                     Spring 2007
Lecture 2: Lattices and Bases
Lecturer: Daniele Micciancio                      Scribe: Daniele Micciancio

Motivated by the many applications described in the first lecture, today we start a systematic study of point lattices and related algorithms.

1 Lattices and Vector spaces

Geometrically, a lattice can be defined as the set of intersection points of an infinite, regular, but not necessarily orthogonal, n-dimensional grid. For example, the set of integer vectors Z^n is a lattice. In computer science, lattices are usually represented by a generating basis.

Definition 1 Let B = {b_1, ..., b_n} be linearly independent vectors in R^m. The lattice generated by B is the set

    L(B) = { Σ_{i=1}^n x_i b_i : x_i ∈ Z } = { Bx : x ∈ Z^n }

of all the integer linear combinations of the vectors in B, and the set B is called a basis for L(B).

Notice the similarity between the definition of a lattice and the definition of the vector space generated by B:

    span(B) = { Σ_{i=1}^n x_i b_i : x_i ∈ R } = { Bx : x ∈ R^n }.

The difference is that in a vector space you can combine the columns of B with arbitrary real coefficients, while in a lattice only integer coefficients are allowed, resulting in a discrete set of points. Notice that, since the vectors b_1, ..., b_n are linearly independent, any point y ∈ span(B) can be written as a linear combination y = x_1 b_1 + ... + x_n b_n in a unique way. Therefore y ∈ L(B) if and only if x_1, ..., x_n ∈ Z.

Any basis B (i.e., any set of linearly independent vectors) can be compactly represented as an m × n matrix whose columns are the basis vectors. Notice that m ≥ n because the columns of B are linearly independent, but in general m can be bigger than n. If n = m the lattice L(B) is said to be full-dimensional, and span(B) = R^n. Notice that if B is a basis for the lattice L(B), then it is also a basis for the vector space span(B). However, not every basis for the vector space span(B) is also a lattice basis for L(B).
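As a small aside (not part of the original notes), the uniqueness of the coefficients x_i gives an algorithmic membership test: to decide whether y ∈ L(B), solve Bx = y over the rationals and check that x is integral. A minimal Python sketch for a square basis, using exact rational arithmetic; the helper names are illustrative:

```python
from fractions import Fraction

def solve_exact(B, y):
    """Solve B x = y exactly over the rationals (B a nonsingular square
    matrix given as a list of rows) by Gauss-Jordan elimination."""
    n = len(B)
    M = [[Fraction(B[i][j]) for j in range(n)] + [Fraction(y[i])]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        pivot = M[col][col]
        M[col] = [v / pivot for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def in_lattice(B, y):
    """y lies in L(B) iff the unique x with B x = y is an integer vector
    (the coefficients are unique because the columns of B are independent)."""
    return all(x.denominator == 1 for x in solve_exact(B, y))

# columns of B are the basis vectors b1 = (2,0) and b2 = (1,2)
B = [[2, 1],
     [0, 2]]
print(in_lattice(B, [3, 2]))   # b1 + b2, so True
print(in_lattice(B, [1, 1]))   # x = (1/4, 1/2), so False
```

Exact rationals (rather than floating point) are used deliberately: as Section 4 below shows, all the numbers arising here are rationals with controlled denominators.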
For example, 2B is a basis for span(B) as a vector space, but it is not a basis for L(B) as a lattice, because the vector b_i ∈ L(B) (for any i) is not an integer linear combination of the vectors in 2B.

Another difference between lattices and vector spaces is that vector spaces always admit an orthogonal basis. This is not true for lattices. Consider for example the 2-dimensional lattice generated by the vectors (2,0) and (1,2). This lattice contains pairs of orthogonal vectors (e.g., (0,4) is a lattice point and it is orthogonal to (2,0)), but no such pair is a basis for the lattice.

2 Gram-Schmidt

Any basis B can be transformed into an orthogonal basis for the same vector space using the well-known Gram-Schmidt orthogonalization method. Suppose we have vectors B = [b_1 | ... | b_n] ∈ R^{m×n} generating a vector space V = span(B). These vectors are not necessarily orthogonal (or even linearly independent), but we can always find an orthogonal basis B* = [b*_1 | ... | b*_n] for V as follows:

    b*_1 = b_1
    b*_2 = b_2 − µ_{2,1} b*_1              where µ_{2,1} = ⟨b_2, b*_1⟩ / ⟨b*_1, b*_1⟩
    ...
    b*_i = b_i − Σ_{j<i} µ_{i,j} b*_j      where µ_{i,j} = ⟨b_i, b*_j⟩ / ⟨b*_j, b*_j⟩.

Note that the columns of B* are orthogonal (⟨b*_i, b*_j⟩ = 0 for all i ≠ j). Therefore the (non-zero) columns of B* are linearly independent and form a basis for the vector space span(B). However, they are generally not a basis for the lattice L(B). For example, the Gram-Schmidt orthogonalization of the basis B = [(2,0)^T, (1,2)^T] is [(2,0)^T, (0,2)^T]. However, this is not a lattice basis, because the vector (0,2)^T does not belong to the lattice L(B).

We see that the set of bases of a lattice has a much more complex structure than the set of bases of a vector space. Can we characterize the set of bases of a given lattice? The following theorem shows that different bases of the same lattice are related by unimodular transformations.

Theorem 2 Let B and B′ be two bases. Then L(B) = L(B′) if and only if there exists a unimodular matrix U (i.e., a square matrix with integer entries and determinant ±1) such that B = B′U.

Proof Notice first that if U is unimodular, then U⁻¹ is also unimodular. To see this, recall Cramer's rule for solving a system of linear equations: for any nonsingular A = [a_1, ..., a_n] ∈ R^{n×n} and b ∈ R^n, the (unique) solution to the system of linear equations Ax = b is given by

    x_i = det([a_1, ..., a_{i−1}, b, a_{i+1}, ..., a_n]) / det(A).

In particular, if A and b are integer, then the unique solution x to the system Ax = b has the property that det(A)·x is an integer vector. Now, let U′ be the inverse of U, i.e., UU′ = I. Each column of U′ is the solution to a system of linear equations Ux = e_i. By Cramer's rule the solution is integer, because det(U) = ±1. This proves that U′ is an integer matrix. Matrix U′ is unimodular because det(U) det(U′) = det(UU′) = det(I) = 1.

So, we also have det(u ) = 1/det(U) = det(u ) = ±1. Now, let us get to the proof. First assume B = B U for some unimodular matrix U. In particular, both U and U 1 are integer matrices, and B = B U and B = BU 1. It follows that L(B) L(B ) and L(B ) L(B), i.e., the two matrices B and B generate the same lattice. Next, assume B and B are two bases for the same lattice L(B) = L(B ). Then, by definition of lattice, there exist integer square matrices M and M such that B = B M and B = BM. Combining these two equations we get B = BMM, or equivalently, B(I MM ) = 0. Since vectors B are linearly independent, it must be I MM = 0, i.e., MM = I. In particular, det(m) det(m ) = det(m M ) = det(i) = 1. Since matrices M and M have integer entries, det(m),det(m ) Z, and it must be det(m) = det(m ) = ±1 A simple way to obtain a basis of a lattice from another is to apply (a sequence of) elementary column operations, as defined below. It is easy to see that elementary column operations do not change the lattice generated by the basis because they can be expressed as right multiplication by a unimodular matrix. Elementary (integer) column operations are: 1. Swap the order of two columns in B. 2. Multiply a column by 1. 3. Add an integer multiple of a column to another column: b i b i + a b j where i j and a Z. It is also possible to show that any unimodular matrix can be obtained from the identity as a sequence of elementary integer column operations. (Hint: show that any unimodular matrix can be transformed into the identity, and then reverse the sequence of operations.) It follows, that any two equivalent lattice bases can be obtained one from the other applying a sequence of elementary integer column operations. 3 The determinant Definition 3 Given a basis B = [b 1,...,b n ] R m n, the fundamental parallelepiped associated to B is the set of points P(B) = {Σ n i=1 x i b i :0 x i < 1}. 
Note that P(B) is half-open, so that the translates P(B)+v (v L(B)) form a partition of the whole space R m. This statement is formalized in the following observation. Observation 4 For all x R m, there exists a unique lattice point v L(B), such that x (v + P(B)). We now define the determinant of a lattice.
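The lattice point v of Observation 4 is easy to compute for a full-dimensional lattice: write x = Bt over the reals and take v = B⌊t⌋, rounding each coefficient down. The following small Python sketch (not part of the original notes; names are illustrative) does this for a 2-dimensional basis, solving for t via Cramer's rule as in the proof of Theorem 2:

```python
from fractions import Fraction
import math

def round_to_lattice_2d(B, x):
    """Unique v in L(B) with x in v + P(B), for a nonsingular 2x2 basis.
    B is given as matrix rows, so its columns are the basis vectors.
    Solve B t = x (Cramer's rule), then take v = B * floor(t)."""
    (a, b), (c, d) = B
    x0, x1 = Fraction(x[0]), Fraction(x[1])
    det = Fraction(a * d - b * c)
    t1 = (x0 * d - b * x1) / det      # Cramer's rule, exact rationals
    t2 = (a * x1 - x0 * c) / det
    k1, k2 = math.floor(t1), math.floor(t2)
    return (a * k1 + b * k2, c * k1 + d * k2)

B = [[2, 1],
     [0, 2]]                               # columns b1 = (2,0), b2 = (1,2)
print(round_to_lattice_2d(B, [2.5, 1.0]))  # (2, 0): x - (2,0) lies in P(B)
```

The componentwise floor matches the half-open choice 0 ≤ x_i < 1 in Definition 3, which is exactly what makes v unique.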

Definition 5 Let B ∈ R^{m×n} be a basis. The determinant of the lattice, det(L(B)), is defined as the n-dimensional volume of the fundamental parallelepiped associated to B:

    det(L(B)) = vol(P(B)) = Π_i ‖b*_i‖,

where B* is the Gram-Schmidt orthogonalization of B.

Geometrically, the determinant represents the inverse of the density of the lattice points in space (e.g., the number of lattice points in a large and sufficiently regular region of space A should be approximately equal to the volume of A divided by the determinant). In particular, the determinant of a lattice does not depend on the choice of the basis. We will prove this formally later in this lecture.

A simple upper bound on the determinant is given by Hadamard's inequality:

    det(L(B)) ≤ Π_i ‖b_i‖.

The bound immediately follows from the fact that ‖b*_i‖ ≤ ‖b_i‖.

We prove in the next section that the Gram-Schmidt orthogonalization of a basis can be computed in polynomial time. So, the determinant of a lattice can be computed in polynomial time by first computing the orthogonalized vectors B*, and then taking the product of their lengths. But are there simpler ways to compute the determinant, without having to run Gram-Schmidt? Notice that when B ∈ R^{n×n} is a nonsingular square matrix, then det(L(B)) = |det(B)| can be computed simply as a matrix determinant. (Recall that the determinant of a matrix can be computed in polynomial time by computing det(B) modulo many small primes, and combining the results using the Chinese remainder theorem.) The following formula reduces the computation of the lattice determinant to a matrix determinant computation even when B is not a square matrix.

Theorem 6 For any lattice basis B ∈ R^{m×n},

    det(L(B)) = sqrt(det(B^T B)).

Proof Remember the Gram-Schmidt orthogonalization procedure. In matrix notation, it shows that the orthogonalized vectors B* satisfy B = B*T, where T is an upper triangular matrix with 1's on the diagonal, and the µ_{i,j} coefficients at position (j, i) for all j < i.

So, the quantity inside the square root can be written as

    det(B^T B) = det(T^T (B*)^T B* T) = det(T^T) det((B*)^T B*) det(T).

The matrices T and T^T are triangular, and their determinants can be easily computed as the product of their diagonal elements, which is 1. Now consider (B*)^T B*. This matrix is diagonal because the columns of B* are orthogonal, so its determinant can also be computed as the product of its diagonal elements, which is

    det((B*)^T B*) = Π_i ⟨b*_i, b*_i⟩ = (Π_i ‖b*_i‖)² = det(L(B))².

Taking square roots we get sqrt(det(B^T B)) = det(L(B)).

We can now prove that the determinant does not depend on the particular choice of the basis, i.e., if two bases generate the same lattice then their lattice determinants have the same value.

Theorem 7 Suppose B, B′ are bases of the same lattice L(B) = L(B′). Then det(L(B)) = det(L(B′)).

Proof Suppose L(B) = L(B′). Then B = B′U where U ∈ Z^{n×n} is a unimodular matrix. Then

    det(B^T B) = det((B′U)^T (B′U)) = det(U^T) det((B′)^T B′) det(U) = det((B′)^T B′)

because det(U) = ±1.

We conclude this section by showing that, although not every lattice has an orthogonal basis, every integer lattice contains an orthogonal sublattice.

Claim 8 For any nonsingular B ∈ Z^{n×n}, let d = det(B). Then d·Z^n ⊆ L(B).

Proof Let v be any vector in d·Z^n. We know v = d·y for some integer vector y ∈ Z^n. We want to prove that v ∈ L(B), i.e., d·y = Bx for some integer vector x. Since B is nonsingular, we can always find a solution x to the system Bx = d·y over the reals. We would like to show that x is in fact an integer vector, so that d·y ∈ L(B). We consider the entries x_i and use Cramer's rule:

    x_i = det([b_1, ..., b_{i−1}, d·y, b_{i+1}, ..., b_n]) / det(B)
        = d · det([b_1, ..., b_{i−1}, y, b_{i+1}, ..., b_n]) / det(B)
        = det([b_1, ..., b_{i−1}, y, b_{i+1}, ..., b_n]) ∈ Z.

We may say that any integer lattice L(B) is periodic modulo the determinant of the lattice, in the sense that for any two vectors x, y, if x ≡ y (mod det(L(B))), then x ∈ L(B) if and only if y ∈ L(B).

4 Running time of Gram-Schmidt

In this section we analyze the running time of the Gram-Schmidt algorithm. The number of arithmetic operations performed by the Gram-Schmidt procedure is O(n³). Can we conclude that it runs in polynomial time? Not yet. In order to prove polynomial time termination, we also need to show that all the numbers involved do not grow too big. This will also be useful later on to prove the polynomial time termination of other algorithms that use Gram-Schmidt orthogonalized bases.

Notice that even if the input matrix B is integer, the orthogonalized matrix B* and the coefficients µ_{i,j} will in general not be integer. However, if B is integer (as we will assume for the rest of this section), then the µ_{i,j} and B* are rational. The Gram-Schmidt algorithm uses rational numbers, so we need to bound both the precision required by these numbers and their magnitude.

From the Gram-Schmidt orthogonalization formulas we know that

    b*_i = b_i + Σ_{j<i} ν_{i,j} b_j

for some reals ν_{i,j}. Since b*_i is orthogonal to b_t for all t < i, we have

    ⟨b_t, b_i⟩ + Σ_{j<i} ν_{i,j} ⟨b_t, b_j⟩ = 0.

In matrix notation, if we let B_{i−1} = [b_1, ..., b_{i−1}] and (ν_i)_j = ν_{i,j}, this can be written as

    (B_{i−1}^T B_{i−1}) ν_i = −B_{i−1}^T b_i,

where the right-hand side is an integer vector. Solving the above system of linear equations in the variables ν_i using Cramer's rule, we get

    ν_{i,j} ∈ Z / det(B_{i−1}^T B_{i−1}) = Z / det(L(B_{i−1}))²,

i.e., det(L(B_{i−1}))² · ν_{i,j} is an integer. We use this property to bound the denominators that can occur in the coefficients µ_{i,j} and the orthogonalized vectors b*_i. Let D_i = det(L(B_i))² and notice that

    D_{i−1} b*_i = D_{i−1} b_i + Σ_{j<i} (D_{i−1} ν_{i,j}) b_j

is an integer combination of integer vectors. So, all denominators that occur in the vector b*_i are factors of D_{i−1}. Let us now compute the coefficients:

    µ_{i,j} = ⟨b_i, b*_j⟩ / ⟨b*_j, b*_j⟩ = ⟨b_i, D_{j−1} b*_j⟩ / (D_{j−1} ⟨b*_j, b*_j⟩) = ⟨b_i, D_{j−1} b*_j⟩ / D_j ∈ Z / D_j,

so the denominators in the µ_{i,j} must divide D_j. This proves that all numbers involved in the µ_{i,j} and b*_i have denominators at most max_k D_k ≤ Π_k ‖b_k‖². Finally, the magnitude of the numbers is also polynomial, because ‖b*_i‖ ≤ ‖b_i‖, so all entries in b*_i are at most ‖b_i‖ in absolute value. This proves that the Gram-Schmidt orthogonalization procedure runs in polynomial time.

Theorem 9 There exists a polynomial time algorithm that, on input a matrix B, computes the Gram-Schmidt orthogonalization B*.
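As a sanity check of the definitions in this lecture (not part of the original notes), here is a sketch of Gram-Schmidt over exact rationals together with the determinant of Definition 5, computed squared so that all arithmetic stays in Q; names and input conventions are my own:

```python
from fractions import Fraction

def gram_schmidt(B):
    """Classical Gram-Schmidt over exact rationals.
    B is a list of basis vectors (each a list of integers); returns B*."""
    Bstar = []
    for b in B:
        b = [Fraction(v) for v in b]
        bs = list(b)
        for prev in Bstar:
            # mu_{i,j} = <b_i, b*_j> / <b*_j, b*_j>
            mu = (sum(x * y for x, y in zip(b, prev))
                  / sum(y * y for y in prev))
            bs = [x - mu * y for x, y in zip(bs, prev)]
        Bstar.append(bs)
    return Bstar

def det_squared(B):
    """det(L(B))^2 as the product of the squared Gram-Schmidt lengths
    (Definition 5 squared, so all arithmetic stays rational)."""
    result = Fraction(1)
    for bs in gram_schmidt(B):
        result *= sum(v * v for v in bs)
    return result

B = [[2, 0], [1, 2]]          # the basis from Section 1: b1 = (2,0), b2 = (1,2)
print(gram_schmidt(B))        # b1* = (2,0), b2* = (0,2)
print(det_squared(B))         # 16, i.e. det(L(B)) = 4 = |det(B)|
```

On this basis the two computations of the determinant agree, as Theorem 6 guarantees, and Hadamard's inequality holds: det(L(B)) = 4 ≤ ‖b1‖ · ‖b2‖ = 2√5. The exact rational arithmetic is exactly what the denominator bounds of Section 4 justify.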