Sampling short lattice vectors and the closest lattice vector problem

Miklos Ajtai    Ravi Kumar    D. Sivakumar
IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120.
{ajtai, ravi, siva}@almaden.ibm.com

Abstract

We present a 2^{O(n)}-time Turing reduction from the closest lattice vector problem (CVP) to the shortest lattice vector problem (SVP). Our reduction assumes access to a subroutine that solves SVP exactly and a subroutine that samples short vectors from a lattice, and it computes a (1 + ε)-approximation to CVP. As a consequence, using the SVP algorithm from [1], we obtain a randomized 2^{O((1+ε^{-1})n)}-time algorithm that computes a (1 + ε)-approximation to the closest lattice vector problem in n dimensions. This improves the existing time bound of O(n!) for CVP (achieved by a deterministic algorithm in [2]).

Given an n-dimensional lattice L and a point x ∈ R^n, the closest lattice vector problem (CVP) is to find a v ∈ L such that the Euclidean norm ||x − v|| is minimized. CVP is one of the most fundamental problems concerning lattices and has many applications. The homogeneous version of CVP is the shortest lattice vector problem (SVP), where x = 0 and v is required to be non-zero. In the approximate version of CVP with factor μ, it is required to find a v' ∈ L such that ||x − v'|| ≤ μ ||x − v|| for every v ∈ L.

In this paper we give a Turing reduction from CVP to SVP. Our reduction assumes access to two subroutines for variants of SVP: one that solves SVP exactly, and one that can sample short vectors from a lattice (with very weak uniformity guarantees). The reduction solves the (1 + ε)-approximate version of CVP. Using the SVP algorithm from [1] in place of the subroutines, we obtain a randomized 2^{O((1+ε^{-1})n)}-time algorithm that computes a (1 + ε)-approximation to CVP in n dimensions.

CVP is a well-studied problem from many points of view. For the problem of computing the closest vector exactly, Kannan obtained an n^{O(n)}-time deterministic algorithm [10], and the constant in the exponent was improved by Helfrich [9]. Recently, Blömer obtained an O(n!)-time deterministic algorithm to compute the closest vector exactly [2]. For the problem of approximating the closest vector, using the LLL algorithm [12], Babai obtained a (3/√2)^n-approximation algorithm that runs in polynomial time [3]. Using a 2^{O(n)} algorithm for SVP

and the polynomial-time Turing reduction from approximate CVP to SVP given in [10], the present authors obtained a √(n/2)-approximation algorithm that runs in 2^{O(n)} time and a 2^{n log log n / log n}-approximation algorithm that runs in polynomial time [1] (see also [11] for a special case of CVP). From the hardness point of view, CVP was shown to be NP-hard by van Emde Boas [6], with a simpler proof by Kannan [10]. It was recently shown by Dinur et al. [5] to be NP-hard to approximate to within 2^{log^{1−ε} n}. Goldreich and Goldwasser showed that CVP is unlikely to be NP-hard to approximate within √(n / log n) [7]. Cai [4] showed a worst-case to average-case reduction for certain approximate versions of CVP. In general, CVP seems to be a harder problem than SVP; for example, it was shown by Goldreich et al. [8] that if one can approximate CVP, then one can approximate SVP to within the same factor in essentially the same time.

A few words of comparison between our method and that of Kannan [10]. Kannan also presents a deterministic polynomial-time Turing reduction from approximate CVP to the decision version of SVP, and obtains an approximation factor of O(√n); as remarked earlier, together with the 2^{O(n)}-time SVP algorithm of [1], this gives a 2^{O(n)}-time randomized algorithm that achieves an O(√n) approximation factor for CVP. Our reduction in this paper is similar to Kannan's (both use a higher-dimensional extension of the given lattice); however, we obtain the better approximation factor by reducing CVP to the problem of sampling short vectors in a lattice. It turns out that the algorithm of [1] can be adapted to perform the required sampling in 2^{O(n)} time; this yields an approximation factor of 1 + ε. Usually, in complexity theory, "counting" and "sampling" of "witnesses" are considered much harder (#P) than the corresponding search or decision problems; it is plausible that our stronger approximation factor results from the reduction to sampling instead of the search version of SVP, and from the fact that our reduction runs in 2^{O(n)} time instead of polynomial time.

Definitions. For an n-dimensional lattice L and a point x ∈ R^n, let D_x = d(x, L) denote the Euclidean distance from x to the closest lattice vector v ∈ L. Let B(y, r) denote the n-dimensional open L_2 ball of radius r around y. Let sh(L) denote the length of the shortest non-zero vector of L. Let bl(L) denote the length of the best basis of L, that is, the length of the longest vector in a basis of L, minimized over all bases of L.

Outline of the reduction. Given L and a point x ∈ R^n, we first assume that we know D_x to within a factor of (1 + ε); this assumption will be relaxed by "guessing" values for D_x in increasing powers of (1 + ε). Note that D_x is polynomially bounded in terms of the given basis length and n; furthermore, since the given basis length is at most simply exponential in the input length, the number of guesses needed for D_x is bounded polynomially in the input length. (In fact, we will argue later that at most O(log n + log(1/ε)) guesses suffice.) Assuming a specific guess (1 + ε)^{k−1} ≤ D_x < (1 + ε)^k, the algorithm works by an embedding method.
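The guessing of D_x described above can be sketched as follows; this is an illustrative sketch in floating point with our own function names, not the paper's exact procedure:

```python
import math

def bracketing_guess(D_x: float, eps: float) -> int:
    """Smallest k with D_x < (1+eps)^k, so that the guess
    (1+eps)^(k-1) <= D_x < (1+eps)^k brackets the true distance."""
    return math.floor(math.log(D_x, 1.0 + eps)) + 1

def guess_schedule(eps: float, k0: int, k1: int) -> list:
    """All candidate guesses (1+eps)^k for k0 <= k <= k1; the text
    argues that k1 - k0 = O(log n + log(1/eps)) guesses suffice."""
    return [(1.0 + eps) ** k for k in range(k0, k1 + 1)]
```

For example, with ε = 0.5 and D_x = 2, the bracketing guess is k = 2, since 1.5 ≤ 2 < 2.25.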

We construct an (n+1)-dimensional lattice L' generated by (v, 0), v ∈ L, and (x, α_k), where α_k = (1 + ε)^{k+1}/√3; by finding several short vectors in L', we hope to extract information about the vector in L closest to x.

Some niceness assumptions. Without loss of generality, we may assume that sh(L) = 1; this is because, with one call to a subroutine that computes a shortest non-zero vector, we can find the quantity sh(L) and rescale the lattice and the target point suitably. As remarked above, the algorithm will attempt polynomially many guesses of the form (1 + ε)^k, k = 0, ±1, ±2, ..., for the (approximate) value of D_x. For each k, define the lattice L' = L'(k) to be generated by the vectors (u, 0), u ∈ L, and (x, α_k), where α_k = (1 + ε)^{k+1}/√3. Also, for each k, let z_k ∈ L be the lattice point (if any) discovered by the following procedure, using the lattice L' = L'(k), such that ||z_k − x|| ≤ (1 + ε)^{k+1}. The output of the algorithm (the reduction together with the SVP sampling subroutine) will be the z_k for the smallest such k.

For each value of k, we first compute a shortest non-zero vector of L' = L'(k). We now argue that for every ε > 0, there exists a k = k_0 < 0 such that if D_x < (1 + ε)^k, then a shortest non-zero vector of L'(k) already helps us discover the closest lattice point to x. Let z ∈ L be a closest lattice point to x, that is, ||z − x|| = D_x, and consider the point of L'(k) obtained as (z, 0) − (x, α_k). We have

||(z, 0) − (x, α_k)||^2 = ||z − x||^2 + α_k^2 = D_x^2 + (1 + ε)^{2(k+1)}/3 < (1 + ε)^{2k} (1 + (1 + ε)^2/3) < 1

for sufficiently small k < 0. Let k_0 be the largest k for which this holds. On the other hand, no vector in L'(k) of the form (u, 0), u ∈ L, can have length less than 1 (since L has been scaled to have sh(L) = 1). Furthermore, no vector in L'(k) of the form (u, 0) − a(x, α_k), where u ∈ L and a > 1 is an integer, can be a shortest vector in L'(k). To see this, again let z ∈ L be a closest lattice point to x, that is, ||z − x|| = D_x, and note that

||(u, 0) − a(x, α_k)||^2 = ||u − ax||^2 + a^2 α_k^2 ≥ a^2 α_k^2 ≥ 4α_k^2 = 3α_k^2 + α_k^2 = (1 + ε)^{2k+2} + α_k^2 > D_x^2 + α_k^2 = ||(z, 0) − (x, α_k)||^2.

Thus the shortest vectors of L'(k) are precisely ±((z, 0) − (x, α_k)), and so the case D_x < (1 + ε)^{k_0} can be identified and the closest vector to x can be recovered. For k ≥ k_0, note also that sh(L'(k)) ≥ min{1, α_{k_0}}, a constant that depends only on ε and not on n. In the sequel we assume that k ≥ k_0.

Next we argue that, w.l.o.g., we may assume that D_x ≤ n^2/(2ε). Indeed, suppose that D_x > n^2/(2ε). Recall that we have scaled the lattice so that sh(L) = 1. By applying a subroutine for SVP, find a vector u ∈ L with ||u|| = 1. Let L̂ be the projection of L on the subspace orthogonal to u, let x̂ be the projection of x on this subspace, and let ε̂ = ε(1 − 1/n^2). Recursively apply the reduction from CVP to the shortest vector problems for the lattice L̂ and the target point x̂ to obtain a point ẑ ∈ L̂ such that ||ẑ − x̂|| ≤ (1 + ε̂) d(x̂, L̂). Next find a point z ∈ L such that the projection of z on the subspace orthogonal to u equals ẑ and the projection of z along u has length at most 1/2. Now ||z − x||^2 ≤ 1/4 + ||ẑ − x̂||^2 ≤ 1/4 + (1 + ε̂)^2 d(x̂, L̂)^2 ≤ 1/4 + (1 + ε̂)^2 D_x^2, which, by the choice of ε̂ and the assumption D_x > n^2/(2ε), is at most (1 + ε)^2 D_x^2.
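The embedding construction can be sketched directly from the definitions; a minimal sketch with our own names, using plain lists for vectors:

```python
import math

def embedded_basis(basis, x, eps, k):
    """Basis of the (n+1)-dimensional lattice L'(k): one row (b_i, 0)
    for each basis vector b_i of L, plus the row (x, alpha_k), where
    alpha_k = (1+eps)^(k+1) / sqrt(3)."""
    alpha_k = (1.0 + eps) ** (k + 1) / math.sqrt(3.0)
    rows = [list(b) + [0.0] for b in basis]
    rows.append(list(x) + [alpha_k])
    return rows, alpha_k
```

If z ∈ L is closest to x, the vector (z, 0) − (x, α_k) of L'(k) has squared length D_x^2 + α_k^2, exactly as in the computation above.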

Remark. Actually, we can also show that it is possible to assume that L' has a basis of length at most poly(n) D_x = poly(n, 1/ε). To do this, we argue that either L has a basis of length poly(n) D_x, or the problem may be reduced to a lower-dimensional CVP instance. Suppose that L has a basis of length at most n^a D_x; let z ∈ L be obtained by rounding x with respect to the basis at hand. Clearly, ||z − x|| ≤ n^{a+1} D_x and, furthermore, any vector generated by the basis together with the vector (x, α_k) can also be generated by the basis together with the vector (z − x, α_k). Let L* be the dual of L. Suppose that the shortest non-zero vector u ∈ L* is shorter than (3(1 + ε) D_x)^{-1}. Let H be the (n−1)-dimensional sub-lattice of L orthogonal to u. The distances between the cosets of H in L are greater than 3(1 + ε) D_x, so there is a unique coset that is (1 + ε)D_x-close to x. We can find u, and hence this coset, by solving the shortest vector problem for L*. Within the coset, we find a (1 + ε)-approximate closest lattice point to x by solving a (1 + ε)-approximate closest vector problem for an (n−1)-dimensional lattice. If the length of the shortest non-zero vector u ∈ L* is larger than (3(1 + ε) D_x)^{-1}, then there is a basis of L whose length is at most 3n(1 + ε) D_x; this follows from the transference theorem 1 ≤ sh(L*) · bl(L) ≤ n. End Remark.

Summary of assumptions. We now have that (1 + ε)^{k_0} ≤ D_x ≤ (1 + ε)^{k_1}, where k_0 < 0 depends only on ε, and k_1 = O(log n + log(1/ε)). Assume that k_0 ≤ k ≤ k_1; our reduction will attempt to produce a close lattice point to x using each of the lattices L'(k) in the following procedure. For the "right" value of k, namely when (1 + ε)^{k−1} ≤ D_x < (1 + ε)^k, we will argue that the procedure produces a lattice point z ∈ L such that ||z − x|| ≤ (1 + ε)^{k+1} ≤ (1 + ε)^2 D_x. For the other values, the procedure may fail to produce any lattice point in L, or may produce one that is too far; the latter case can be easily checked. For the rest of the discussion, we assume that we are working with the right value of k. For readability, we will write L' for L'(k) and α for α_k.

The main reduction steps. Recall that α = (1 + ε)^{k+1}/√3 and L' is the (n+1)-dimensional lattice generated by the vectors (u, 0), u ∈ L, and (x, α). We define three sets of vectors:

B  = {(u, 0) ∈ L' : ||u|| < 2α}
B' = {(u, 0) ∈ L' : ||u|| < εD_x}
G  = {(u, v) ∈ L' : v = ±α, ||(u, v)|| < 2α}

Though the definition of B' involves D_x, the reduction never needs to know this value explicitly; B' is used only in the analysis of the correctness of the reduction. Note that B' ⊆ B, since εD_x < ε√3 α/(1 + ε) ≤ 2α; here we use the assumption that D_x < (1 + ε)^k. Also, it is not hard to see that G ∪ B = L' ∩ B(0, 2α).
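The three sets can be read off from the last coordinate of a short vector of L', since that coordinate is always an integer multiple of α, and any multiple aα with |a| ≥ 2 already has length at least 2α. A sketch (our own helper, using a floating-point tolerance):

```python
import math

def classify_short_vectors(points, alpha, tol=1e-9):
    """Split the points of L' lying in the open ball B(0, 2*alpha)
    into B (last coordinate 0) and G (last coordinate +/- alpha)."""
    B, G = [], []
    for p in points:
        if math.sqrt(sum(c * c for c in p)) >= 2 * alpha:
            continue  # outside B(0, 2*alpha)
        last = p[-1]
        if abs(last) < tol:
            B.append(p)
        elif abs(abs(last) - alpha) < tol:
            G.append(p)
    return B, G
```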

First we will argue that |G|/|B| ≥ 2^{−c_1 n} for some constant c_1 ≥ 0 that depends only on ε (Lemma 5). This is accomplished in two steps: |B'| ≥ 2^{−c_1 n} |B| (Corollary 4) and |G| ≥ |B'| (Lemma 2). Then we will employ (a version of) the probabilistic algorithm from [1] to sample points from L' ∩ B(0, 2α). The weak uniformity of this sampling procedure (Lemma 8) will be sufficient to prove that the probability of producing a point from G is at least 2^{−c_3 n} for some constant c_3 > 0 (Lemma 7). Finally, we show that it suffices to produce a point in G, i.e., we show that given a point (u, v) ∈ G, we can compute a z ∈ L so that ||z − x|| ≤ (1 + ε)^2 ||z* − x||, where z* is the point in L closest to x. We prove this final step first, and then prove the lemmata in turn.

Lemma 1. Given (u, v) ∈ G, we can compute (in polynomial time) a point z ∈ L such that ||z − x|| ≤ (1 + ε)^2 ||z* − x||, where z* ∈ L satisfies ||z* − x|| = D_x.

Proof. Without loss of generality, let v = −α. Given (u, v) ∈ G with v = −α, write (u, v) as (z − x, −α) = (z, 0) − (x, α), and note that z ∈ L and ||z − x||^2 = ||(u, v)||^2 − α^2 ≤ 3α^2 = (1 + ε)^{2k+2} ≤ (1 + ε)^4 D_x^2, using the assumption (1 + ε)^{k−1} ≤ D_x; so ||z − x|| ≤ (1 + ε)^2 D_x.

We now turn to the precise statements and proofs of the other lemmata.

Lemma 2. |G| ≥ |B'|.

Proof. We will injectively map the set B' into G; namely, we will show that to every u ∈ L such that ||u|| ≤ εD_x, we may associate a unique u' ∈ G. Let z be a closest lattice point to x. Given u ∈ L such that ||u|| ≤ εD_x, define u' = (z + u − x, −α) = (z + u, 0) − (x, α), so u' ∈ L'. Also, ||u'||^2 = ||(z + u − x, −α)||^2 = ||z + u − x||^2 + α^2 ≤ (||u|| + ||z − x||)^2 + α^2 ≤ (εD_x + D_x)^2 + α^2 = (1 + ε)^2 D_x^2 + α^2 < 4α^2, using the assumption D_x < (1 + ε)^k; so ||u'|| < 2α.

Lemma 3. For every pair of constants a > b > 0, there exists a constant c = lg(4a/b) > 0 such that for any n-dimensional lattice Λ and R > 0, |Λ ∩ B(0, aR)| ≤ 2^{cn} |Λ ∩ B(0, bR)|.
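Lemma 3's counting bound can be checked numerically; the sketch below brute-forces |Z^2 ∩ B(0, r)| for the integer lattice (our own toy check, not part of the reduction):

```python
import math

def lattice_ball_count(r):
    """Number of points of Z^2 in the open ball B(0, r), by brute force."""
    m = int(math.floor(r))
    return sum(1 for i in range(-m, m + 1) for j in range(-m, m + 1)
               if i * i + j * j < r * r)

# Lemma 3 with n = 2, a = 2, b = 1, R = 5: c = lg(4a/b) = 3, so the
# bound reads |Z^2 ∩ B(0, 10)| <= 2^{cn} * |Z^2 ∩ B(0, 5)| = 64 * |Z^2 ∩ B(0, 5)|.
assert lattice_ball_count(10.0) <= 2 ** 6 * lattice_ball_count(5.0)
```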

Proof. First we note that, given a and b, there exists an absolute constant c = c(a, b) = lg(4a/b) such that for any R > 0, B(0, aR) can be covered by at most 2^{cn} balls of radius bR/2. To see this, take a packing of B(0, aR) by balls of radius bR/4; a straightforward volume bound implies that the number of balls we need is at most

(aR)^n / (bR/4)^n = (4a/b)^n = 2^{cn}.

By doubling the radius of each ball in the packing, we obtain a covering. Thus |Λ ∩ B(0, aR)| is at most 2^{cn} times the maximum number of points of Λ in any ball of radius bR/2 in the covering. Consider the ball B(y, bR/2) in the covering that has the largest number of lattice points, and let z ∈ Λ be a closest lattice point to y. Note that we may assume that ||y − z|| ≤ bR/2, for otherwise we would have Λ ∩ B(y, bR/2) = ∅, and since this is the ball with the maximum number of lattice points, we would have Λ ∩ B(0, aR) = ∅, which is a contradiction (0 is a lattice point). The map x ↦ x − z sends B(y, bR/2) to B(y − z, bR/2), and injectively maps every lattice point in B(y, bR/2) to a lattice point in B(y − z, bR/2). Since ||y − z|| ≤ bR/2, we have B(y − z, bR/2) ⊆ B(0, bR), which implies that |Λ ∩ B(y − z, bR/2)| ≤ |Λ ∩ B(0, bR)|. Therefore, |Λ ∩ B(0, aR)| ≤ 2^{cn} |Λ ∩ B(0, bR)|.

Corollary 4. There exists a constant c_1 = O(ε^{-1}) such that |B'| ≥ 2^{−c_1 n} |B|.

Proof. Recall that |B'| = |L ∩ B(0, εD_x)|; since D_x ≥ (1 + ε)^{k−1} = √3 α/(1 + ε)^2, we have |B'| ≥ |L ∩ B(0, bα)|, where b = ε√3/(1 + ε)^2. On the other hand, |B| = |L ∩ B(0, 2α)|; applying Lemma 3 with Λ = L, R = α, and a = 2, let c_1 = c_1(ε) = c(a, b) = O(ε^{-1}) be the constant given by the Lemma.

Combining Lemma 2 and Corollary 4, we obtain:

Lemma 5. There exists c_1 = O(ε^{-1}) ≥ 0 such that |G|/|B| ≥ 2^{−c_1 n}.

We now formally state the properties required of the procedure that samples short vectors from L'.

Assumption 6. There exists a constant c_2 = c_2(ε) > 0 such that, given an n-dimensional lattice Λ with sh(Λ) = Θ(1) and a parameter R (at most 2^{O(n)}) as input, the subroutine picks a point from Λ ∩ B(0, R) so that, if it picks x with probability p_x, then max{p_x / p_y : x, y ∈ Λ ∩ B(0, R)} ≤ 2^{c_2 n}.

Lemma 7. There is a c_3 > 0 and a probabilistic algorithm that picks a point from G with probability at least 2^{−c_3 n}.

Proof. This lemma follows from Assumption 6, which guarantees that the probabilities are sufficiently (weakly) uniform inside L' ∩ B(0, 2α), and from Lemma 5, which shows that G is not too small. More precisely, let p_G = Σ_{x∈G} p_x and p_B = Σ_{x∈B} p_x = 1 − p_G. Consider an x ∈ B such that p_x ≥ p_B/|B|. Now,

p_G = Σ_{y∈G} p_y ≥ 2^{−c_2 n} |G| p_x        (from Assumption 6)
    ≥ 2^{−c_2 n} p_B |G|/|B|
    ≥ 2^{−(c_1+c_2) n} p_B                    (from Lemma 5)
    = 2^{−(c_1+c_2) n} (1 − p_G),

so p_G ≥ 2^{−(c_1+c_2) n} / (1 + 2^{−(c_1+c_2) n}) ≥ 2^{−c_3 n} for some absolute constant c_3 = c_3(ε) > 0.

To wrap up, note that if we repeat the algorithm implied by Lemma 7 roughly 2^{cn} times, for some constant c much larger than c_3, then with probability exponentially close to 1 we will succeed in picking a point of G. By Lemma 1, this suffices to solve approximate CVP.

A 2^{O((1+ε^{-1})n)}-time algorithm for approximate CVP. We now point out some properties of the probabilistic SVP algorithm of [1] when it is used to sample points of a lattice inside a ball of given radius R. Together with the SVP algorithm of [1] and the reduction described above, this implies the algorithm for approximate CVP with the claimed bound on the running time.

Lemma 8. There exist a constant c_2 = c_2(ε) > 0 and a probabilistic algorithm which, given an n-dimensional lattice Λ with sh(Λ) = Θ(1) and a parameter R (at most 2^{O(n)}) as input, picks a point from Λ ∩ B(0, R) so that, if it picks x with probability p_x, then max{p_x / p_y : x, y ∈ Λ ∩ B(0, R)} ≤ 2^{c_2 n}. The algorithm runs in time 2^{O(n)}.

Proof. The proof of this lemma uses the techniques in [1]. Recall that 2^{O(n)} lattice points are sampled from a 2^{O(n)}-large parallelepiped P, and an iterative sieve is applied to successively reduce the length of the sampled lattice vectors. To ensure that this process outputs a non-zero lattice vector at the end, the original sample points are perturbed. In [1], the perturbations are chosen from the normal distribution N(0, √(1/(Kn))) with mean 0 and variance 1/(Kn), for an absolute constant K.
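The rescaling of the perturbations can be sketched as follows; the constant `K` and the interface are stand-ins of ours (the actual sieve of [1] is far more involved):

```python
import math
import random

def perturbation(n, R, K=4):
    """Draw one perturbation vector: each coordinate is N(0, sigma)
    with sigma = R / sqrt(K * n). R = 1 recovers the choice in [1];
    general R scales the perturbation to the target ball B(0, R)."""
    sigma = R / math.sqrt(K * n)
    return [random.gauss(0.0, sigma) for _ in range(n)]
```

The expected squared length of such a perturbation is n * sigma^2 = R^2/K, so its typical magnitude is O(R), which is what the enlarged-ball sampling needs.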
An alternate way of looking at the algorithm in [1] is the following: given a lattice whose shortest non-zero vector has constant length, the algorithm of [1] is a sampling procedure that samples lattice points from a ball C of constant

radius, such that the probability q_x of outputting x ∈ C satisfies max{q_x / q_y : x, y ∈ C} ≤ 2^{O(n)}. In our case, we need a similar algorithm, but for the target ball B(0, R). This suggests enlarging the perturbations: now the perturbations are chosen with distribution N(0, R/√(Kn)) for an absolute constant K. The following issues arise when we work with such a perturbation. We present only an informal treatment of these issues in this extended abstract.

(1) We need to ensure that all the sample points are well within the parallelepiped, i.e., that the distance of each sample point to the boundary of P is more than the magnitude R' = O(R) of the perturbation. Let P' be the interior of P that is at least R' away from the boundary of P. If the sides of P are exponentially large compared to this magnitude, the measure vol(P∖P') is vanishing compared to vol(P'). Hence, by choosing sufficiently large constants, we can ensure that all the sample points fall in P'.

(2) Since any two lattice points in Λ ∩ B(0, R) are at distance at most 2R, by our choice of the variance of the perturbation we can show, as in [1], that p_x ≥ 2^{−c_2 n} p_y for a constant c_2 > 0.

Acknowledgments. Thanks to Jin-Yi Cai for many comments on an earlier draft.

References

[1] M. Ajtai, R. Kumar, and D. Sivakumar. A sieve algorithm for the shortest lattice vector problem. Proc. 33rd ACM Symposium on Theory of Computing, pp. 601-610, 2001.

[2] J. Blömer. Closest vectors, successive minima, and dual HKZ-bases of lattices. Proc. 27th International Colloquium on Automata, Languages, and Programming, pp. 248-259, 2000.

[3] L. Babai. On Lovász' lattice reduction and the nearest lattice point problem. Combinatorica, 6(1):1-13, 1986.

[4] J. Cai. On the average-case hardness of CVP. Proc. 42nd IEEE Symposium on Foundations of Computer Science, 2001.

[5] I. Dinur, G. Kindler, and S. Safra. Approximating CVP to within almost polynomial factors is NP-hard. Proc. 39th IEEE Symposium on Foundations of Computer Science, pp. 99-109, 1998.

[6] P. van Emde Boas. Another NP-complete partition problem and the complexity of computing short vectors in lattices. Mathematics Department, University of Amsterdam, TR 81-04, 1981.

[7] O. Goldreich and S. Goldwasser. On the limits of non-approximability of lattice problems. Journal of Computer and System Sciences, 60(3):540-563, 2000.

[8] O. Goldreich, D. Micciancio, S. Safra, and J.-P. Seifert. Approximating shortest lattice vectors is not harder than approximating closest lattice vectors. Information Processing Letters, 71:55-61, 1999.

[9] B. Helfrich. Algorithms to construct Minkowski reduced and Hermite reduced bases. Theoretical Computer Science, 41:125-139, 1985.

[10] R. Kannan. Minkowski's convex body theorem and integer programming. Mathematics of Operations Research, 12:415-440, 1987.

[11] P. Klein. Finding the closest lattice vector when it's unusually close. Proc. 11th ACM-SIAM Symposium on Discrete Algorithms, pp. 937-941, 2000.

[12] A. K. Lenstra, H. W. Lenstra, and L. Lovász. Factoring polynomials with rational coefficients. Mathematische Annalen, 261:515-534, 1982.