Improved Nguyen-Vidick Heuristic Sieve Algorithm for Shortest Vector Problem

Xiaoyun Wang, Mingjie Liu, Chengliang Tian and Jingguo Bi

Institute for Advanced Study, Tsinghua University, Beijing 100084, China
xiaoyunwang@mail.tsinghua.edu.cn, liu-mj7@mails.tsinghua.edu.cn
Key Laboratory of Cryptologic Technology and Information Security, Ministry of Education, Shandong University, Jinan 250100, China
{chengliangtian,jguobi}@mail.sdu.edu.cn

Abstract. In this paper, we present an improvement of the Nguyen-Vidick heuristic sieve algorithm for the shortest vector problem in general lattices. Its time complexity is 2^{0.3836n} polynomial-time computations, and its space complexity is 2^{0.2557n}. In the new algorithm, we introduce a two-level sieve technique in place of the previous one-level sieve, and we complete the complexity estimation by calculating the irregular spherical cap covering.

Keywords: lattice, shortest vector, sieve, heuristic, sphere covering

1 Introduction

An n-dimensional lattice Λ is generated by a basis B = {b1, b2, ..., bn} ⊂ R^m consisting of n linearly independent vectors:

Λ = L(B) = { Bz = Σ_{i=1}^{n} z_i b_i : z ∈ Z^n }.

The minimum distance λ1(Λ) of a lattice Λ is the length of its shortest nonzero vector: λ1(Λ) = min_{x ∈ Λ\{0}} ‖x‖, where ‖x‖ is the Euclidean norm of the vector x. The problem of finding a lattice point v of norm λ1(Λ) is called the Shortest Vector Problem (SVP). The γ-approximation of SVP, denoted SVP_γ, is the problem of finding a lattice point v of length ‖v‖ ≤ γλ1(Λ). SVP is a classical mathematical problem originating in the geometry of numbers [5, 5], and it is NP-hard (under randomized reductions) in computational complexity theory [4]. Over the past thirty years, SVP has been widely used in public-key cryptanalysis and lattice-based cryptography. On one hand, fast algorithms for SVP or SVP_γ [3, 33] are a fundamental tool in public-key cryptanalysis and in the cryptanalysis of lattice-based schemes [6].
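To make the definition of λ1 concrete, here is a small illustrative script, not part of the paper's algorithms, that computes λ1(Λ) by exhaustive search over bounded coefficient vectors; the basis and the coefficient bound are arbitrary choices for the example.

```python
import itertools
import math

def shortest_vector(basis, bound=10):
    """Brute-force lambda_1(L): scan all nonzero integer combinations z with
    |z_i| <= bound. Exponential in the dimension -- this only illustrates the
    definition of SVP, not the sieve algorithms discussed in the paper."""
    n = len(basis)
    m = len(basis[0])
    best, best_norm = None, float("inf")
    for z in itertools.product(range(-bound, bound + 1), repeat=n):
        if all(zi == 0 for zi in z):
            continue  # lambda_1 ranges over nonzero lattice vectors only
        v = [sum(z[i] * basis[i][j] for i in range(n)) for j in range(m)]
        nv = math.sqrt(sum(x * x for x in v))
        if nv < best_norm:
            best, best_norm = v, nv
    return best, best_norm

# The basis vectors (5,3) and (4,4) are fairly long, yet the lattice they
# generate contains the much shorter vector b1 - b2 = (1,-1).
v, lam1 = shortest_vector([[5, 3], [4, 4]])
```

The example also shows why basis reduction is nontrivial: a short lattice vector need not be close to any single basis vector.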
The most successful polynomial-time algorithm for finding a short vector, with approximation factor 2^{(n-1)/2}, is the LLL basis reduction algorithm [], which has been successfully used to break most knapsack encryption schemes [,, 3] and has yielded various weak-key attacks on RSA-like cryptosystems [7, 9]. On the other hand, many cryptographic functions corresponding to SVP variants have been proposed as trapdoor one-way functions, so that various lattice-based cryptographic schemes

Supported by the National 973 Program of China (Grant No.7CB879) and the National Natural Science Foundation of China (Grant No.693644).

are easy to construct [3, 6,, 3]. Two popular cryptographic functions are derived from the SIS (small integer solution) problem and the LWE (learning with errors) problem respectively, and it is known that both SIS and LWE can be reduced to the SVP variant SIVP_γ (Shortest Independent Vectors Problem). Today, fast shortest-vector search has become the central focus of both security assessment and cryptanalysis in lattice-based cryptography. Because SVP is NP-hard, exact SVP algorithms cannot be expected to run in polynomial time. So far, there are essentially two types of algorithms for exact SVP: deterministic algorithms and randomized sieve algorithms. The first deterministic algorithm for SVP originates in the work of Pohst [36] and Kannan [9] and is known as deterministic enumeration. Its main idea is to enumerate all lattice vectors shorter than a fixed bound A ≥ λ1(Λ), with the help of the Gram-Schmidt orthogonalization of the given lattice basis. Given an LLL-reduced basis as input, the algorithm of Fincke and Pohst [] runs in time 2^{O(n^2)}, while the worst-case complexity of Kannan's algorithm is n^{n/(2e)+o(n)} [6]. See the survey paper [] for more details. Among the enumeration algorithms, the Schnorr-Euchner enumeration strategy [34] is the most important one used in practice; its running time is 2^{O(n^2)} polynomial-time operations when the input basis is LLL-reduced or BKZ-reduced. Recently, Gama, Nguyen and Regev proposed a new technique called extreme pruning, which achieves exponential speedups in enumeration both in theory and in practice [4]. All the enumeration algorithms mentioned above require only polynomial space. A completely different deterministic algorithm for SVP is based on Voronoi cell computation, which was originally aimed at solving the Closest Vector Problem (CVP) [35].
Recently, Micciancio and Voulgaris [4] proposed an improved algorithm of this type that applies to most lattice problems, including SVP, CVP and SIVP. Its running time is Õ(2^{2n+o(n)}) polynomial-time operations, where f = Õ(g) means f(n) ≤ log^c(g(n)) · g(n) for some constant c and all sufficiently large n. This is so far the best known deterministic result in lattice computational complexity. The other type of exact SVP algorithm is the randomized sieve, first proposed in 2001 by Ajtai, Kumar and Sivakumar [5] (the AKS sieve). The sieve method reduces the time upper bound to 2^{O(n)}, at the cost of 2^{O(n)} space. In [3], Regev gave the first concrete constants, with time 2^{16n+o(n)} and space 2^{8n+o(n)}; these were decreased to time 2^{5.9n+o(n)} and space 2^{2.95n+o(n)} by Nguyen and Vidick [8]. Micciancio and Voulgaris used sphere packing bounds [7] to improve the time and space complexity to 2^{3.4n+o(n)} and 2^{1.97n+o(n)} respectively [3], and further reduced them to time 2^{3.199n+o(n)} and space 2^{1.325n+o(n)} by combining with the ListSieve technique. By applying a birthday attack to the sieved short vectors in a small ball of radius of order λ1(Λ), Pujol and Stehlé gave a sieve algorithm that solves SVP with time complexity 2^{2.465n+o(n)} [9]. Besides the above algorithms, there is a more practical search algorithm which is heuristic under a natural randomness assumption. Nguyen and Vidick [8] presented the first heuristic variant of the AKS sieve [5], with time 2^{0.415n} and space 2^{0.2075n}; it has so far been the fastest randomized sieve algorithm. It should be remarked that Micciancio and Voulgaris [3] also described a heuristic variant of ListSieve, called the Gauss Sieve, which performs well in practice, but no upper bound on its time complexity is known. In this paper, we present an improved heuristic randomized algorithm which solves SVP with time 2^{0.3836n} and space 2^{0.2557n}. The main idea of the algorithm is to collect shorter
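The NV constants quoted here can be reproduced by a short computation: in the NV analysis the number of sieve centers grows like (1/sin θ)^n, where the cap half-angle θ = arccos(1 − γ^2/2) tends to π/3 as the sieve factor γ tends to 1; this gives the space exponent log2(2/√3) ≈ 0.2075, and the pairwise comparisons of the sieve double it for the time exponent. The derivation is Nguyen and Vidick's; the script below is only a numeric check.

```python
import math

gamma = 1.0                                  # limiting sieve factor
theta = math.acos(1 - gamma ** 2 / 2)        # cap half-angle -> pi/3
space_exp = math.log2(1 / math.sin(theta))   # log2(2/sqrt(3)) ~ 0.2075
time_exp = 2 * space_exp                     # pairwise comparisons ~ 0.415
print(round(space_exp, 4), round(time_exp, 3))
```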

vectors by a two-level sieve. The complexity estimation is based on the computation of the irregular spherical cap covering that arises as the intersection of a spherical surface with two balls. This paper is organized as follows: Section 2 gives some notations and preliminaries. The new algorithm is introduced in Section 3. We present a proof of the algorithm's complexity in Section 4. Conclusions are given in Section 5.

2 Notations and Preliminaries

ω(f(n)) represents a function growing faster than c·f(n) for any c > 0. Θ(f(n)) is a function of the same order as f(n) as n → ∞. Let S_n = {x ∈ R^n : ‖x‖ = 1} be the unit sphere in R^n. B_n(x, r) denotes the n-dimensional ball centered at x with radius r; it is written B_n(r) when the center is the origin. κ_n is the volume of the unit Euclidean n-dimensional ball. C_n(γ) = {x ∈ R^n : γ ≤ ‖x‖ ≤ 1} is a spherical shell in the ball B_n(1). B(ϕ, x) = {y ∈ S_n : ⟨x, y⟩ > cos ϕ} is the spherical cap with angle ϕ in S_n, ϕ ∈ (0, π/2). B(ϕ, x, γ) = {y ∈ C_n(γ) : ⟨x, y⟩ > ‖x‖ ‖y‖ cos ϕ} is the spherical cap with angle ϕ in C_n(γ), ϕ ∈ (0, π/2). |A| denotes the volume of A if A is a geometric body, and its cardinality if A is a finite set. Note that |S_n| = nκ_n. It is well known that

κ_n = π^{n/2} / Γ(n/2 + 1) = { π^k / k!  if n = 2k;  2^{k+1} k! π^k / (2k+1)!  if n = 2k+1 },

where Γ(z) = ∫_0^∞ t^{z-1} e^{-t} dt is the gamma function.

Lemma 1. [8] Let ϕ ∈ (0, π/2) and x ∈ S_n. If ϕ ≥ arccos(1/√n), then

(κ_{n-1} / (3 cos ϕ)) (sin ϕ)^{n-1} < |B(ϕ, x)| < (κ_{n-1} / cos ϕ) (sin ϕ)^{n-1}.

Define Ω_n(ϕ) = |B(ϕ, x)| / |S_n| = |B(ϕ, x, γ)| / |C_n(γ)|. From the fact that √(n/(2π)) < κ_{n-1}/κ_n < √((n+1)/(2π)), the following corollary holds.

Corollary 1. [8] Let ϕ ∈ (0, π/2). If ϕ ≥ arccos(1/√n), then

(1/(3√(2πn))) (sin ϕ)^{n-1} / cos ϕ < Ω_n(ϕ) < (1/√(2π(n-1))) (sin ϕ)^{n-1} / cos ϕ.

For any real s > 0, the Gaussian function on R^n centered at c with parameter s is given by

ρ_{s,c}(x) = e^{-π ‖(x-c)/s‖^2} for x ∈ R^n.

The subscripts s and c are taken to be 1 and 0, respectively, when omitted. For any c ∈ R^n, real s > 0, and n-dimensional lattice Λ, the discrete Gaussian distribution over Λ is defined as

D_{Λ,s,c}(x) = ρ_{s,c}(x) / ρ_{s,c}(Λ) for x ∈ Λ,

where ρ_{s,c}(A) = Σ_{x ∈ A} ρ_{s,c}(x) for any countable set A. A function ε(n) is negligible if ε(n) < 1/n^c for every c > 0 and all sufficiently large n. The statistical distance between two distributions X and Y over a countable domain D is defined as (1/2) Σ_{d ∈ D} |X(d) − Y(d)|. We say two distributions (indexed by n) are statistically close if their statistical distance is negligible in n.

Lemma 2. [] There is a probabilistic polynomial-time algorithm that, given a basis B of an n-dimensional lattice Λ = L(B), a parameter s ≥ ‖B̃‖ · ω(√(log n)) (where B̃ denotes the Gram-Schmidt orthogonalization of B), and a center c ∈ R^n, outputs a sample from a distribution that is statistically close to D_{Λ,s,c}.

As in the NV heuristic algorithm, our algorithm requires that at every stage the lattice points be distributed uniformly in the relevant spherical shell. We select the samples by applying Klein's randomized variant of the nearest plane algorithm [8], so that the initial samples are indistinguishable from a discrete Gaussian distribution. The distribution proof for Klein's algorithm can be found in [].

3 Algorithm

Section 3.1 gives a brief description of the Nguyen-Vidick heuristic sieve algorithm (NV algorithm). Then, in Section 3.2, under the same natural heuristic assumption as the NV algorithm, we present a new algorithm with a two-level sieve, which outputs the shortest vector in time 2^{0.3836n}.

3.1 Nguyen and Vidick's heuristic sieve algorithm

We start with the randomized algorithm proposed by Ajtai, Kumar and Sivakumar (the AKS sieve) [5]. Its main idea is as follows: sample 2^{O(n)} lattice vectors in a ball B_n(R) with R = 2^{O(n)} λ1, then apply a partition and a sieve to find enough shorter vectors within B_n(γR), for γ < 1, without losing too many vectors. R is updated in every iteration and comes close to λ1 after polynomially many iterations, while enough short vectors remain to recover the shortest vector. To date, the perturbation technique in the sampling procedure has been necessary to prove the success probability of finding the shortest vector in all provable randomized sieve algorithms.
But the effect of the perturbations in practice is unclear. In [8], Nguyen and Vidick presented a fast heuristic algorithm (the NV algorithm) which collects short vectors by sieving the chosen lattice vectors directly, instead of sieving lattice vectors derived from perturbed points. The NV algorithm has 2^{0.415n} time and 2^{0.2075n} space complexity. In every sieve iteration of the NV algorithm, the input is a set S of lattice vectors of maximal norm R which can be regarded as randomly distributed in the shell C_n(γ) (norms rescaled so that R = 1). The main purpose of each iteration is to select at random a subset C of S as center points, located in C_n(γ). The set C contains enough points that for any vector a ∈ S there is at least one point c ∈ C with ‖a − c‖ ≤ γR; such a difference a − c is an output shorter vector of the iteration. In every iteration, the sieve thus captures a new set of vectors within the ball B_n(γR) without losing many vectors, by choosing a suitable γ and size of C; that is, the upper bound on the norms of the set shrinks by the factor γ. After polynomially many iterations, the shortest vector will be among the sieved short vectors and can be found by direct search. The core of the NV algorithm is the sieve in Algorithm 1.
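One pass of this sieve can be sketched in a few lines. In the demo below, random points of a thin shell stand in for the heuristically uniform lattice samples, and the dimension, set size and sieve factor are illustrative choices, not the paper's parameters.

```python
import math
import random

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def sub(u, w):
    return tuple(a - b for a, b in zip(u, w))

def nv_sieve_step(S, gamma):
    """One pass of the NV sieve: every output vector has norm <= gamma * R,
    obtained either directly or as a difference v - c with a center c."""
    R = max(norm(v) for v in S)
    centers, out = [], []
    for v in S:
        if norm(v) <= gamma * R:
            out.append(v)
            continue
        for c in centers:
            if norm(sub(v, c)) <= gamma * R:
                out.append(sub(v, c))
                break
        else:
            centers.append(v)  # v is consumed as a new center
    return out, centers

random.seed(1)
n, gamma = 10, 0.97
S = []
for _ in range(2000):
    v = tuple(random.gauss(0.0, 1.0) for _ in range(n))
    r = norm(v)
    S.append(tuple(x / r * random.uniform(gamma, 1.0) for x in v))
R = max(norm(v) for v in S)
S2, centers = nv_sieve_step(S, gamma)
```

Every input vector either survives (possibly reduced) or is absorbed as a center, which is exactly how the set S shrinks across iterations.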

The dominant part of the data complexity is determined by the maximal size of the set of center points C, which must guarantee that after a polynomial number of iterations the set S is not empty. The estimation of |C| is based on a natural assumption, and experiments show that the assumption is reasonable.

Algorithm 1 The NV sieve
Input: A subset S ⊆ B_n(R) of vectors in a lattice L, sieve factor 2/3 < γ < 1.
Output: A subset S' ⊆ B_n(γR) ∩ L.
1: R ← max_{v ∈ S} ‖v‖.
2: C ← ∅, S' ← ∅
3: for v ∈ S do
4:   if ‖v‖ ≤ γR then
5:     S' ← S' ∪ {v}.
6:   else
7:     if ∃ c ∈ C such that ‖v − c‖ ≤ γR then
8:       S' ← S' ∪ {v − c}.
9:     else
10:      C ← C ∪ {v}
11:    end if
12:  end if
13: end for

Heuristic Assumption: At any stage of Algorithm 1, the vectors in S ∩ C_n(γR) are uniformly distributed in C_n(γR) = {x ∈ R^n : γR ≤ ‖x‖ ≤ R}.

3.2 New Sieve Algorithm

In the NV algorithm, the time complexity is the square of the space complexity. In order to achieve some balance between time and space, we try a new sieve method which in fact uses a two-level sieve (Algorithm 3). Our algorithm also consists of polynomially many iterations, and each iteration consists of a two-level sieve. At the first level, we partition the lattice points in the spherical shell into big balls, rather than the small ones of Algorithm 1 (see Fig. 1). At the second level, we cover every spherical cap (the intersection of a big ball with the shell) by small balls, each centered at a lattice point in the same spherical cap. By comparing all the lattice points in a spherical cap with the center of every small ball, we obtain some shorter vectors. Merging all the short vectors calculated in every spherical cap, we obtain the short lattice vectors required for the next iteration. Clearly, the first-level sieve needs a smaller number of big balls, which saves comparison time. At the second level, each shorter vector is obtained by a pairwise difference of lattice points within one spherical cap. This means that more data is required in order to get enough shorter vectors.
In particular, the Heuristic Assumption of the NV sieve, which guarantees the uniform distribution of S ∩ C_n(γ), also supports our algorithm. The frame of our algorithm is given in Algorithm 2. Instead of the single shrinking factor γ of the NV sieve, we use two pivotal input parameters γ1 and γ2. These parameters will be used to determine N and to estimate the complexity and efficiency. In steps 1-5, we generate N lattice vectors of suitable length, as in the NV sieve. Steps 6-11 are the key part of our algorithm: they reduce the norms of the lattice vectors in S by a factor γ2 without significantly decreasing

[Fig. 1]

the size of S. This new sieve differs from previously known sieves in that we first place the longer lattice vectors into different big balls, and then perform the sieve again separately inside each big ball. The details are given in Algorithm 3. The main loop repeats until the set S is empty, and the shortest vector found is returned. The size of S decreases in two ways: first, in Algorithm 3, each vector used as a center is removed from S; second, in step 10, the appearance of zero vectors removes some vectors. We will provide a detailed analysis estimating the number of vectors discarded by this process.

Algorithm 2 Finding short lattice vectors based on sieving
Input: An LLL-reduced basis B = [b1, ..., bn] of a lattice L, sieve factors γ1, γ2 such that 2/3 < γ2 < 1 < γ1 < √2 γ2, and a number N.
Output: A short non-zero vector of L.
1: S ← ∅.
2: for j = 1 to N do
3:   S ← S ∪ {sampling(B)} using Klein's algorithm
4: end for
5: Remove zero vectors from S.
6: S0 ← S
7: repeat
8:   S ← S0
9:   S0 ← latticesieve(S, γ1, γ2) using Algorithm 3.
10:  Remove zero vectors from S0.
11: until S0 = ∅
12: Compute v0 ∈ S such that ‖v0‖ = min{‖v‖ : v ∈ S}
13: return v0

Under the heuristic assumption, the collisions in step 10 of Algorithm 2 are negligible until S covers almost all of B_n(R) ∩ L, which means the upper bound of the norm has come very close to the shortest length of the lattice. Two levels of center points are used in our sieve. Let C be the set of centers of the big balls of radius γ1 at the first level, where γ1 > 1. Since we cannot obtain the short vectors within a

Algorithm 3 The lattice sieve
Input: A subset S ⊆ B_n(R) of vectors in a lattice L, sieve factors 2/3 < γ2 < 1 < γ1 < √2 γ2.
Output: A subset S' ⊆ B_n(γ2 R) ∩ L.
1: R ← max_{v ∈ S} ‖v‖.
2: C ← ∅, S' ← ∅
3: for v ∈ S do
4:   if ‖v‖ ≤ γ2 R then
5:     S' ← S' ∪ {v}.
6:   else
7:     if ∃ c1 ∈ C such that ‖v − c1‖ ≤ γ1 R then
8:       if ∃ c2 ∈ C_{c1} such that ‖c2 − v‖ ≤ γ2 R then   (C_{c1} is initialized as the empty set)
9:         S' ← S' ∪ {v − c2}.
10:      else
11:        C_{c1} ← C_{c1} ∪ {v}
12:      end if
13:    else
14:      C ← C ∪ {v}, C_v ← {v}
15:    end if
16:  end if
17: end for

big ball by subtracting its center directly, a second-level covering is needed. C_c consists of the centers of the small balls at the second level that cover the big ball with center c. Clearly, C_c is selected within the regular spherical cap C_n(γ2) ∩ B_n(c, γ1). All the C_c merge into one set C', i.e., C' = ∪_{c ∈ C} C_c. Denote the expected number of lattice vectors in C by N_C, and the expected size of each C_c by N_{C_c}. To estimate N_C, we have to calculate the fraction of C_n(γ2) occupied by the spherical cap C_n(γ2) ∩ B_n(c, γ1). The right part of Fig. 1 illustrates the first-level covering. N_{C_c} is the number of small balls with radius γ2, centered at the points of C_c, which cover the spherical cap C_n(γ2) ∩ B_n(c, γ1) with probability close to 1. The region C_n(γ2) ∩ B_n(c, γ1) ∩ B_n(c', γ2), c' ∈ C_c, is a regular or irregular spherical cap whose volume determines N_{C_c}. Fig. 2 shows the second covering; O_b denotes a first-level big-ball center c, and O_s a second-level small-ball center c'. We give the estimations for N_C and N_{C_c} in the following theorems. The purpose of every iteration is to compress a large number of lattice points in B_n(R) into B_n(γ2 R) without losing many points, so we need only consider the covering of the spherical shell C_n(γ2). Using an LLL-reduced input basis and Klein's sampling algorithm with suitable parameters, we can choose the initial R smaller than 2^{O(n)} λ1. After every two-level sieve iteration, the upper bound on the norm shrinks by the factor γ2.
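The control flow of Algorithm 3 can be sketched compactly. As before, random shell points stand in for the lattice samples, and the dimension and the sample values γ1 = 1.09, γ2 = 0.97 (satisfying γ2 < 1 < γ1) are illustrative, not the paper's optimized parameters.

```python
import math
import random

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def sub(u, w):
    return tuple(a - b for a, b in zip(u, w))

def two_level_sieve_step(S, gamma1, gamma2):
    """One pass of the two-level sieve: first-level centers carve the shell
    into big caps of radius gamma1*R; second-level centers inside each cap
    reduce the remaining vectors to norm <= gamma2*R."""
    R = max(norm(v) for v in S)
    big = {}   # first-level center -> list of its second-level centers
    out = []
    for v in S:
        if norm(v) <= gamma2 * R:
            out.append(v)
            continue
        for c1, small in big.items():
            if norm(sub(v, c1)) <= gamma1 * R:    # v lies in c1's big ball
                for c2 in small:
                    if norm(sub(v, c2)) <= gamma2 * R:
                        out.append(sub(v, c2))    # reduced vector
                        break
                else:
                    small.append(v)               # new second-level center
                break
        else:
            big[v] = [v]  # v opens a new big ball and seeds its center list
    return out, big

random.seed(2)
n, gamma1, gamma2 = 10, 1.09, 0.97
S = []
for _ in range(2000):
    v = tuple(random.gauss(0.0, 1.0) for _ in range(n))
    r = norm(v)
    S.append(tuple(x / r * random.uniform(gamma2, 1.0) for x in v))
R = max(norm(v) for v in S)
S2, big = two_level_sieve_step(S, gamma1, gamma2)
```

The point of the two levels is visible in the loop structure: a vector is compared against the few big-ball centers first, and only then against the second-level centers of a single cap, instead of against one large flat list.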
If the number of sampled vectors is not less than poly(n) · N_C · N_{C_c}, then the shortest vector is expected to survive after polynomially many iterations. Upper bounds on |C| and |C_c| are given in Theorem 1 and Theorem 2, respectively.

Theorem 1. Let n be a non-negative integer and 2/3 < γ2 < 1 < γ1 < √2 γ2, and let N_C = 3√(2πn) · n · c_H^{−n}, where c_H = γ1 √(4 − γ1^2) / 2. S is a subset of C_n(γ2) of cardinality N whose points are picked independently at random with uniform distribution. If N_C < N < 2^n, then for any subset

Fig. 2. Second-Level Covering in the New Algorithm

C ⊆ S of size at least N_C whose points are picked independently at random with uniform distribution, with overwhelming probability, for all v ∈ S there exists a c ∈ C such that ‖v − c‖ ≤ γ1.

Theorem 2. Let n be a non-negative integer and 2/3 < γ2 < 1 < γ1 < √2 γ2, with γ2 very close to 1, and let N_{C_c} = c · n^3 · (c_H/d_min)^n, where c_H = γ1 √(4γ2^2 − γ1^2) / (2γ2^2), d_min is the explicit constant (a function of γ1 and γ2 only) determined in Lemma 4, and c is a positive constant unrelated to n. S is a subset of {x ∈ C_n(γ2) : ‖x − c‖ ≤ γ1} of cardinality N whose points are picked independently at random with uniform distribution. If N_{C_c} < N < 2^n, then for any subset C' ⊆ S of size at least N_{C_c} whose points are picked independently at random with uniform distribution, with overwhelming probability, for all v ∈ S there exists a c' ∈ C' such that ‖v − c'‖ ≤ γ2.

4 Proof of the Complexity

This section contains the proofs of Theorems 1 and 2, and the complexity estimation of our algorithm. Since R has no effect on the conclusions of Theorem 1 and Theorem 2, we prove the following lemma for the unit ball. Let Ω_n(γ1) be the fraction of C_n(γ2) that is covered by a ball of radius γ1 centered at a point of C_n(γ2).

Lemma 3. For 2/3 < γ2 < 1 < γ1 < √2 γ2,

(1/(3√(2πn))) (sin θ1)^{n−1} / cos θ1 < Ω_n(γ1) < (1/√(2π(n−1))) (sin θ2)^{n−1} / cos θ2,

where θ1 = arccos(1 − γ1^2/2) and θ2 = arccos(1 − γ1^2/(2γ2^2)).

Fig. 3. (the points y1, y2 at distance γ1 from x, and the angles θ, θ' at the origin O)

Proof. Let x ∈ C_n(γ2) with ‖x‖ = α, where γ2 ≤ α ≤ 1. Let y1 and y2 be two points in the spherical shell C_n(γ2) at distance γ1 from x, with ‖y1‖ = γ2 and ‖y2‖ = 1. Denote the angle at the vertex O between x and y1 by θ, and the angle at O between x and y2 by θ'. By the law of cosines,

cos θ = (α^2 + γ2^2 − γ1^2)/(2αγ2),  cos θ' = (α^2 + 1 − γ1^2)/(2α).

From γ2 < 1 and α ≤ 1 < γ1 we know that cos θ < cos θ', which implies θ > θ'. Then

B(θ', x, γ2) ⊆ B_n(x, γ1) ∩ C_n(γ2) ⊆ B(θ, x, γ2),

and by Corollary 1 the cap fractions at these angles bound Ω_n(γ1). Furthermore, both cos θ and cos θ' increase with α, so the lower bound is given by α = 1, where θ' = θ1 = arccos(1 − γ1^2/2), and the upper bound by α = γ2, where θ = θ2 = arccos(1 − γ1^2/(2γ2^2)).

Remark 1. Lemma 3 is similar to Lemma 4.1 in [8]. The difference is that we generalize the formula to reflect the exact expressions of the angles θ1, θ2 in terms of the parameters γ1 and γ2, which are important in the main complexity estimates of Theorem 1 and Theorem 2.

Now we are ready to prove Theorem 1.

Proof. By Lemma 3, we have

Ω_n(γ1) > (1/(3√(2πn))) (sin θ1)^{n−1} / cos θ1 > (1/(3√(2πn))) (sin θ1)^{n−1} > (1/(3√(2πn))) c_H^n,

with c_H = sin θ1 = γ1 √(4 − γ1^2) / 2. The expected proportion of C_n(γ2) that is not covered by N_C randomly chosen points of C_n(γ2) is (1 − Ω_n(γ1))^{N_C}. So, for balls of radius γ1 centered at the N_C = 3√(2πn) · n · c_H^{−n} chosen points,

N_C · Ω_n(γ1) > n, which implies (1 − Ω_n(γ1))^{N_C} < e^{−n} < 1/N.

Therefore, the expected number of uncovered points is smaller than N e^{−n} < 1. In other words, any point in C_n(γ2) is covered by a ball of radius γ1 with probability at least 1 − e^{−n}.
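The tail bound at the heart of this proof can be checked numerically; the parameter values below are illustrative, not the paper's.

```python
import math

# If each random center covers a fraction Omega of the shell, then with
# N_C = ceil(n / Omega) centers a fixed point stays uncovered with
# probability (1 - Omega)^{N_C} <= e^{-n}, and e^{-n} < 1/N for any N < 2^n.
n = 50
Omega = 2.0 ** (-0.13 * n)        # an exponentially small cap fraction
N_C = math.ceil(n / Omega)
p_uncovered = (1.0 - Omega) ** N_C
assert p_uncovered <= math.exp(-n)
assert math.exp(-n) < 2.0 ** (-n)  # since e > 2, so a union bound over N < 2^n points works
```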

Without loss of generality, denote the center of one big ball centered in C_n(γ2) by c = (α1, 0, ..., 0), where γ2 ≤ α1 ≤ 1. The region of the regular spherical cap B_n(c, γ1) ∩ C_n(γ2) is denoted M, i.e.,

M = {(x1, x2, x3, ..., xn) ∈ C_n(γ2) : (x1 − α1)^2 + x2^2 + ... + xn^2 < γ1^2}.

To discuss the covering of M by the small balls B_n(c', γ2), c' ∈ C_c, we need to calculate the minimum fraction Ω'_n(γ1, γ2) of the spherical cap B_n(c', γ2) ∩ B_n(c, γ1) ∩ C_n(γ2) as c' ranges over C_c. We denote B_n(c', γ2) ∩ B_n(c, γ1) ∩ C_n(γ2) by H. Before estimating the proportion of the spherical cap H in M, we need to clarify its location. From Fig. 2 we know that when the small ball falls completely inside the big ball, H is a regular spherical cap; otherwise it is an irregular spherical cap. In particular, if c' slides along the boundary sphere of the big ball (see the right part of Fig. 2), the fraction of H in M is minimal. We note that only a small part of H is regular, and most centers c' are close to the surface of the big ball. So we only compute the volume of the minimal H, i.e., with c' located on the boundary sphere of the big ball B_n(c, γ1).

Lemma 4. Let 2/3 < γ2 < 1 < γ1 < √2 γ2, with γ2 very close to 1. Then

Ω'_n(γ1, γ2) ≥ c · d_min^n / √(2πn),

where d_min is an explicit constant depending only on γ1 and γ2 (the minimal value of the cap radius d computed in the proof), and c is a positive constant.

Fig. 4. (the centers c = (α1, 0, ..., 0) and c' = (x', y', 0, ..., 0))

Proof. Since γ2 is selected close to 1 in our algorithm, to estimate the covering of the regular spherical cap M by irregular spherical caps we consider only the proportions of the sphere covering rather than of the shell covering. Without loss of generality, write the center of B_n(c, γ1) as c = (α1, 0, ..., 0) and the center of B_n(c', γ2) as c' = (x', y', 0, ..., 0), where x' > 0, y' > 0. By the above description, the irregular spherical cap B_n(c', γ2) ∩ B_n(c, γ1) ∩ C_n(γ2) of minimum volume is expressed as

x1^2 + x2^2 + ... + xn^2 = 1,
(x1 − α1)^2 + x2^2 + ... + xn^2 < γ1^2,
(x1 − x')^2 + (x2 − y')^2 + x3^2 + ... + xn^2 < γ2^2,

where γ α, (x α ) + y = γ, and γ x + y. In order to calculate this surface integral we project the target region to the hyperplane orthogonal to x, then this integral is changed to multiple integral. To simplify the expression, denote A = x + y and B = (A + γ )/. Let { D = (x, x 3,..., x n ) n ( x α + x 3 +... + x n < γ + ) }. α { D = (x, x 3,..., x n ) A x (x By A ) + x 3 +... + x n < B x ( y A ), x < B }, y D = {(x, x 3,..., x n ) } x + x 3 +... + x n <, x By, D = D D. ( ) α Let = γ, ) + α = ( B y x A. The integral region is denoted as D which is the intersection of D and D. By the equation x + x +... + x n =, we have x = ± (x +... + x n), x x i = x i (x +... + x n). Now we can calculate the volume of the target region by computing ( ) ( ) x x Q =... + +... + dx dx 3... dx n D x x n =... (x + x 3 +... + x n) dx dx 3... dx n. D We analysis the region D and first compute x to simplify the above multiple integral. The upper bound and lower bound of x is (x 3 +... + x n) and By A x A (x 3 +... + x n) respectively, while the region of (x 3,..., x n ) is a ball of dimension n with radius ( ( ( α d = B x γ + ))). y α Therefore Q is expressed as, Q = n n i=3 x i <d = n i=3 x i <d i=3 x i By A x A n i=3 x i arcsin n i=3 x i n i=3 x i dx dx 3... dx n n i=3 x i x By A arcsin x A n i=3 x i dx 3... dx n. n i=3 x i Let x 3 = t cos ϕ x 4 = t sin ϕ cos ϕ. x n = t sin ϕ... sin ϕ n 4 cos ϕ n 3 x n = t sin ϕ... sin ϕ n 4 sin ϕ n 3,

where 0 ≤ t ≤ d, 0 ≤ ϕ_k ≤ π for k = 1, ..., n−4, and 0 ≤ ϕ_{n−3} ≤ 2π. Furthermore, the Jacobian is

∂(x3, x4, ..., xn)/∂(t, ϕ1, ..., ϕ_{n−3}) = t^{n−3} (sin ϕ1)^{n−4} (sin ϕ2)^{n−5} ··· sin ϕ_{n−4}.

Using ∫_0^π sin^k ϕ dϕ = 2 ∫_0^{π/2} sin^k ϕ dϕ = √π Γ((k+1)/2) / Γ(k/2 + 1), we obtain

Ω'_n(γ1, γ2) = Q / |S_n| = ((n−2) κ_{n−2} / (n κ_n)) ∫_0^d t^{n−3} f(t) dt.

When t ∈ [0, d], the difference f(t) of the two inverse-trigonometric terms is bounded and positive, independently of n; more precisely, the function f(t) is decreasing on [0, d]. We have

Ω'_n(γ1, γ2) ≥ ((n−2) κ_{n−2} / (n κ_n)) ∫_0^{d−ε} t^{n−3} f(t) dt ≥ c' d^n (1 − ε/d)^n f(d − ε).

And since the omitted part of the integral, from d − ε to d, is negligible compared with the part from 0 to d − ε, our estimate is tight. Let ε = d/n; using a Taylor expansion to estimate f(d − ε), we have f(d − ε) = Θ(1/√n), and (1 − 1/n)^n tends to e^{−1} when n is sufficiently large. Based on the above discussion,

Ω'_n(γ1, γ2) ≥ c d^n / √(2πn).

Next, given γ1 and γ2, we compute the minimum of d over the variables α1, x', y'. Because x' and y' satisfy the equation (x' − α1)^2 + y'^2 = γ1^2, let α2 = √(x'^2 + y'^2); then x' = (α1^2 + α2^2 − γ1^2)/(2α1) and y' = √(α2^2 − x'^2).
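The substitution just introduced can be sanity-checked numerically: by construction, the point c' = (x', y') must lie at distance γ1 from (α1, 0) and at distance α2 from the origin. The sample values below are illustrative, not the paper's optimum.

```python
import math

alpha1, alpha2, gamma1 = 1.0, 0.99, 1.09
x = (alpha1 ** 2 + alpha2 ** 2 - gamma1 ** 2) / (2 * alpha1)  # law of cosines
y = math.sqrt(alpha2 ** 2 - x ** 2)
assert math.isclose(x ** 2 + y ** 2, alpha2 ** 2)                    # |c'| = alpha2
assert math.isclose((x - alpha1) ** 2 + y ** 2, gamma1 ** 2)         # |c' - c| = gamma1
```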

So d can be regarded as a function of the two variables α1 and α2, with γ2 ≤ α1 ≤ 1 and γ2 ≤ α2 ≤ 1. By calculating the partial derivative ∂d(α1, α2)/∂α1, it can be proven from 2/3 < γ2 ≤ α1 ≤ 1 < γ1 < √2 γ2 that d is a decreasing function with respect to α1. Setting α1 = 1 leaves d an explicit decreasing function of α2 alone. Setting α2 = 1, we achieve the minimum value d_min, an explicit constant depending only on γ1 and γ2. The proof of Lemma 4 is completed.

Now we prove Theorem 2.

Proof. Combining Lemma 3 and Lemma 4, we get

Ω'_n(γ1, γ2) / Ω_n(γ1) ≥ (c/√(2πn)) (d_min/c_H)^n,

which reflects the fraction of M covered by a small ball of radius γ2 centered in M. As in Theorem 1, it follows easily that the number of second-level center points C_c in each big ball is at most c · n^3 · (c_H/d_min)^n.

Theorem 3. The time complexity of our algorithm is N_C^2 N_{C_c} + N_C N_{C_c}^2 polynomial-time computations, while the space complexity is N_C N_{C_c}. With γ2 = 0.97 and γ1 chosen so that N_C = N_{C_c}, we get the optimal time complexity 2^{0.3836n} and the space complexity 2^{0.2557n}.

Proof. The total number of center points in C' is about N_C N_{C_c}. If we sample poly(n) · N_C · N_{C_c} vectors, then after polynomially many iterations the vectors left are expected to suffice to include the shortest vector. So the space complexity is poly(n) · N_C · N_{C_c}, and the initial sample size |S| is poly(n) · N_C · N_{C_c}. In each iteration, steps 3-17 of Algorithm 3 repeat about N_C N_{C_c} times, and each repetition needs at most N_C + N_{C_c} comparisons. So the total time complexity is N_C^2 N_{C_c} + N_C N_{C_c}^2 polynomial-time computations. Because N_C depends only on γ1 while N_{C_c} decreases with γ2, we obtain the minimum time complexity by selecting the parameters so that N_C = N_{C_c}, which leads to γ2 = 0.97 and N_C = N_{C_c} = 2^{0.1278n}.

5 Conclusion

In this paper, we describe a new heuristic sieve algorithm for solving the shortest vector problem with 2^{0.3836n} polynomial-time operations and 2^{0.2557n} space. Although our algorithm decreases the exponent of the time complexity from 0.415 to 0.3836, the polynomial factor of the time complexity increases to n^{4.5}, from n^3 in the NV algorithm.
So our algorithm performs better than the NV algorithm for sufficiently large $n$.

Acknowledgments

We thank Guangwu Xu for revising the paper during his stay at Tsinghua University.

REFERENCES
[1] E. Agrell, T. Eriksson, A. Vardy, and K. Zeger. Closest point search in lattices. IEEE Transactions on Information Theory, 48(8):2201-2214, August 2002.
[2] L. M. Adleman. On breaking generalized knapsack public key cryptosystems. In the 15th Annual ACM Symposium on Theory of Computing Proceedings, pages 402-412. ACM, April 1983.
[3] M. Ajtai and C. Dwork. A public-key cryptosystem with worst-case/average-case equivalence. In the 29th Annual ACM Symposium on Theory of Computing Proceedings, pages 284-293. ACM, May 1997.
[4] M. Ajtai. The shortest vector problem in L2 is NP-hard for randomized reductions (extended abstract). In the 30th Annual ACM Symposium on Theory of Computing Proceedings, pages 10-19. ACM, May 1998.
[5] M. Ajtai, R. Kumar, and D. Sivakumar. A sieve algorithm for the shortest lattice vector problem. In the 33rd Annual ACM Symposium on Theory of Computing Proceedings, pages 601-610. ACM, July 2001.
[6] M. Ajtai. Generating hard instances of lattice problems. Complexity of Computations and Proofs, Quaderni di Matematica, 13:1-32, 2004. Preliminary version in STOC 1996.
[7] D. Boneh and G. Durfee. Cryptanalysis of RSA with private key d less than N^0.292. In Advances in Cryptology - EUROCRYPT 1999 Proceedings, pages 1-11. Springer, May 1999.
[8] K. Böröczky and G. Wintsche. Covering the sphere by equal spherical balls. In Discrete and Computational Geometry, The Goodman-Pollack Festschrift, pages 237-253, 2003.
[9] D. Coppersmith. Finding a small root of a univariate modular equation. In Advances in Cryptology - EUROCRYPT 1996 Proceedings, pages 155-165. Springer, May 1996.
[10] H. Cohen. A Course in Computational Algebraic Number Theory, 2nd edition. Springer-Verlag, 1995.
[11] U. Fincke and M. Pohst. Improved methods for calculating vectors of short length in a lattice, including a complexity analysis. Mathematics of Computation, 44(170):463-471, 1985.
[12] C. Gentry, C. Peikert, and V. Vaikuntanathan. Trapdoors for hard lattices and new cryptographic constructions. In the 40th Annual ACM Symposium on Theory of Computing Proceedings, pages 197-206. ACM, May 2008.
[13] N. Gama and P. Q. Nguyen. Finding short lattice vectors within Mordell's inequality. In the 40th Annual ACM Symposium on Theory of Computing Proceedings, pages 207-216. ACM, May 2008.
[14] N. Gama, P. Q. Nguyen, and O. Regev. Lattice enumeration using extreme pruning. In Advances in Cryptology - EUROCRYPT 2010 Proceedings, pages 257-278. Springer, May 2010.
[15] C. Hermite. Extraits de lettres de M. Hermite à M. Jacobi sur différents objets de la théorie des nombres, deuxième lettre. Journal für die reine und angewandte Mathematik, 40:279-290, 1850. Also available in the first volume of Hermite's complete works, published by Gauthier-Villars.
[16] G. Hanrot and D. Stehlé. Improved analysis of Kannan's shortest lattice vector algorithm. In Advances in Cryptology - CRYPTO 2007 Proceedings, pages 170-186. Springer, August 2007.
[17] G. Kabatiansky and V. Levenshtein. Bounds for packings on a sphere and in space. Problemy Peredachi Informatsii, 14(1):3-25, 1978.
[18] P. Klein. Finding the closest lattice vector when it's unusually close. In the 11th Annual ACM-SIAM Symposium on Discrete Algorithms Proceedings, pages 937-941. SIAM, January 2000.
[19] R. Kannan. Improved algorithms for integer programming and related lattice problems. In the 15th Annual ACM Symposium on Theory of Computing Proceedings, pages 193-206. ACM, April 1983.
[20] A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovász. Factoring polynomials with rational coefficients. Mathematische Annalen, 261:515-534, 1982.
[21] J. C. Lagarias and A. M. Odlyzko. Solving low-density subset sum problems. Journal of the ACM, 32(1):229-246, 1985.
[22] D. Micciancio and O. Regev. Post-Quantum Cryptography, chapter Lattice-based Cryptography. Springer-Verlag, 2009.
[23] D. Micciancio and P. Voulgaris. Faster exponential time algorithms for the shortest vector problem. In the 21st Annual ACM-SIAM Symposium on Discrete Algorithms Proceedings, pages 1468-1480. SIAM, January 2010.
[24] D. Micciancio and P. Voulgaris. A deterministic single exponential time algorithm for most lattice problems based on Voronoi cell computations. In the 42nd Annual ACM Symposium on Theory of Computing Proceedings, pages 351-358. ACM, June 2010.
[25] H. Minkowski. Geometrie der Zahlen. Teubner, Leipzig, 1896.

[26] P. Q. Nguyen. Cryptanalysis of the Goldreich-Goldwasser-Halevi cryptosystem from Crypto '97. In Advances in Cryptology - CRYPTO 1999 Proceedings, pages 288-304. Springer, August 1999.
[27] P. Q. Nguyen and J. Stern. The two faces of lattices in cryptology. In Cryptography and Lattices Conference, pages 146-180. Springer, March 2001.
[28] P. Q. Nguyen and T. Vidick. Sieve algorithms for the shortest vector problem are practical. Journal of Mathematical Cryptology, 2(2):181-207, July 2008.
[29] X. Pujol and D. Stehlé. Solving the shortest lattice vector problem in time 2^2.465n. Cryptology ePrint Archive, Report 2009/605, 2009.
[30] O. Regev. Lecture notes on lattices in computer science, 2004. Available at http://www.cs.tau.ac.il/~odedr/teaching/lattices_fall_2004/index.html.
[31] O. Regev. On lattices, learning with errors, random linear codes, and cryptography. In the 37th Annual ACM Symposium on Theory of Computing Proceedings, pages 84-93. ACM, May 2005.
[32] A. Shamir. A polynomial time algorithm for breaking the basic Merkle-Hellman cryptosystem. In the 23rd IEEE Symposium on Foundations of Computer Science Proceedings, pages 145-152. IEEE, 1982.
[33] C. P. Schnorr. A hierarchy of polynomial time lattice basis reduction algorithms. Theoretical Computer Science, 53:201-224, 1987.
[34] C. P. Schnorr and M. Euchner. Lattice basis reduction: improved practical algorithms and solving subset sum problems. Mathematical Programming, 66:181-199, 1994.
[35] N. Sommer, M. Feder, and O. Shalvi. Finding the closest lattice point by iterative slicing. SIAM Journal on Discrete Mathematics, 23(2):715-731, April 2009.
[36] M. Pohst. On the computation of lattice vectors of minimal length, successive minima and reduced bases with applications. ACM SIGSAM Bulletin, 15(1):37-44, 1981.