Solving Closest Vector Instances Using an Approximate Shortest Independent Vectors Oracle

Tian CL, Wei W, Lin DD. Solving closest vector instances using an approximate shortest independent vectors oracle. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 30(6): 1370-1377 Nov. 2015. DOI 10.1007/s11390-015-1604-4

Solving Closest Vector Instances Using an Approximate Shortest Independent Vectors Oracle

Cheng-Liang Tian 1, Wei Wei 2, and Dong-Dai Lin 1, Senior Member, CCF

1 State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China
2 Institute for Advanced Study, Tsinghua University, Beijing 100084, China

E-mail: tianchengliang@iie.ac.cn; wei-wei08@mails.tsinghua.edu.cn; ddlin@iie.ac.cn

Received April 1, 2014; revised July 9, 2015.

Abstract  Given an n-dimensional lattice L and some target vector, this paper studies algorithms for the approximate closest vector problem (CVP_γ) that use an approximate shortest independent vectors problem oracle (SIVP_γ). More precisely, if the distance between the target vector and the lattice is no larger than (c/(γn))·λ_1(L) for an arbitrarily large but finite constant c > 0, we give randomized and deterministic polynomial time algorithms that find a closest vector, while previous reductions were only known for distance at most λ_1(L)/(2γn). Moreover, if the distance between the target vector and the lattice is larger than some quantity with respect to λ_n(L), then using an SIVP_γ oracle and Babai's nearest plane algorithm, we can solve CVP_{2γ√n} in deterministic polynomial time. In particular, if the approximation factor of the SIVP_γ oracle satisfies γ ∈ (1, 2), we obtain a better reduction factor for CVP.

Keywords  lattice, closest vector problem, shortest independent vectors problem, reduction

1 Introduction

Lattices are discrete subgroups of R^n. They are powerful mathematical objects that have been used to efficiently solve many important problems in computer science, most notably in the areas of cryptography and combinatorial optimization. In lattice theory, the most important and widely studied computational problems are the shortest vector problem (SVP) and the closest vector problem (CVP). Given a lattice L ⊆ R^n, SVP_γ is the problem of finding a non-zero lattice vector of length at most γλ_1(L), where λ_1(L) denotes the length of a shortest non-zero lattice vector. Given a lattice L ⊆ R^n and a target vector t ∈ R^n, CVP_γ is the problem of finding a v ∈ L such that ||v − t|| ≤ γ·dist(t, L), where dist(t, L) = min{||u − t|| : u ∈ L} denotes the distance between t and L. In 1999, Goldreich et al. [1] first studied the relationship between these two problems and gave a deterministic polynomial-time rank-preserving reduction from SVP_γ to CVP_γ for any approximation factor γ ≥ 1, which implies that SVP_γ is not harder than CVP_γ. It is natural to ask whether CVP_γ is strictly harder than SVP_γ. In terms of known computational complexity results, the answer may be yes: for any constant c and approximation factor γ = n^{c/log log n}, CVP_γ is NP-hard under deterministic reductions [2], while the proof that SVP_γ is NP-hard for the same approximation factor is randomized and relies on a strong complexity assumption [3]. A possible way to derandomize it is to give a deterministic reduction from CVP_γ to SVP_γ. Using an exact SVP oracle, Kannan [4] presented a deterministic polynomial time algorithm for solving the approximate closest vector problem CVP_{√n}. Ajtai et al. [5] generalized Kannan's reduction technique and proposed a 2^{O((1+1/ε)n)} time algorithm for solving CVP_{1+ε} by sampling short vectors.
In another survey paper [6], using the dual lattice and the transference theorem in the geometry of numbers [7], Kannan proved that CVP_{γ²n^{3/2}} can be reduced to SVP_γ in deterministic polynomial time.

(Regular Paper. This work is partially supported by the National Basic Research 973 Program of China under Grant No. 2011CB302400, the National Natural Science Foundation of China under Grant Nos. 61379139 and 61133013, and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDA06010701. ©2015 Springer Science + Business Media, LLC & Science Press, China.)

Recently, combining Kannan's lattice-embedding technique [4] with the reduction from BDD_{1/(2γ)} to uSVP_γ given by Lyubashevsky and Micciancio [8], Dubey and Holenstein [9] improved Kannan's result [6] and obtained a deterministic polynomial-time rank-preserving reduction from CVP_{γ²√n} to SVP_γ.

Ajtai's groundbreaking work [10], which connects the worst-case and the average-case complexity of certain computational problems on lattices, opened the door to cryptography based on worst-case hardness. Regev's results [11] further broadened the foundation of lattice-based cryptography. Their studies show that the security of all the cryptographic protocols based on SIS (Small Integer Solution) and LWE (Learning with Errors) depends on the worst-case hardness of SIVP_γ (the definition will be given in Section 2). Therefore it is essential to compare the hardness of SIVP_γ, SVP_γ and CVP_γ. In order to study the hardness of SIVP_γ, Blömer and Seifert [12] first gave a deterministic polynomial time reduction from the exact CVP to the exact SIVP, but the reduction did not preserve the rank of lattices. Combining the lattice-embedding technique with the relationship between primal and dual lattices, Micciancio [13] improved their result [12] and obtained a deterministic polynomial-time rank-preserving reduction. Furthermore, by skillfully constructing a sublattice, [13] also gave a deterministic polynomial-time rank-preserving reduction from SIVP_γ to CVP_γ for any approximation factor γ ≥ 1, which implies that the exact CVP and the exact SIVP are equivalent and that SIVP_γ is not harder than CVP_γ. Naturally, we also want to know whether CVP_γ is strictly harder than SIVP_γ. At SODA 2008, Micciancio [13] proposed the following open problem.

Open Problem. Is there a deterministic polynomial time reduction from CVP_γ to SIVP_γ that preserves the rank of the lattice and the approximation factor?

Our Results. Stemming from the efforts to solve the open problem, we give a helpful exploration of the relationships between SIVP_γ and some special CVP_γ instances. More precisely, if the distance between the target vector and the lattice is less than some quantity with respect to λ_1(L), we give randomized and deterministic polynomial time reductions from BDD_{c/(γn)} to SIVP_γ for any constant c > 0, which improves the known result by a factor of 2c. Moreover, if the distance between the target vector and the lattice is larger than some quantity with respect to λ_n(L), then using an SIVP_γ oracle and Babai's nearest plane algorithm [14], we can solve CVP_{2γ√n} in deterministic polynomial time, and for a uniformly chosen target vector, its distance from the lattice satisfies this constraint with probability not less than 1/2. In particular, if the approximation factor γ ∈ (1, 2) in the SIVP_γ oracle, we obtain a better result.

Road Map. In Section 2, we review necessary concepts and notation, and then give some useful lemmas for our proofs. Our main results are stated and proved in Section 3 and Section 4. Using the SIVP_γ oracle, two algorithms for finding a closest vector when the target is close to the lattice are presented in Section 3. Section 4 gives polynomial time algorithms to approximate a closest vector when the target is far from the lattice. Finally, we conclude the paper in Section 5.
2 Preliminaries

In this section, we give some necessary concepts on lattices and some useful lemmas for our proofs. First, we fix some notation. For any real x, ⌊x⌋ denotes the largest integer not larger than x and ⌈x⌉ denotes the smallest integer not smaller than x. The n-dimensional Euclidean space is denoted by R^n, and ||·|| denotes the Euclidean norm. We use bold lower case letters (e.g., x) to denote vectors, and bold upper case letters (e.g., M) to denote matrices. The i-th coordinate of x is denoted by x_i. For a set S ⊆ R^n and r ∈ R, rS = {ry : y ∈ S} denotes the scaling of S by r.

2.1 Lattices and Lattice Problems

Lattices. A lattice consists of all linear combinations with integer coefficients of some set of linearly independent vectors in Euclidean space. If b_1, ..., b_n ∈ R^m are linearly independent, then the lattice spanned by these vectors is

L = L(B) = { Σ_{i=1}^n z_i b_i : z_i ∈ Z },

where the matrix B = (b_1, ..., b_n) ∈ R^{m×n} is called a basis of the lattice. The basis of a lattice L is usually not unique. The number m is called the dimension of the lattice L and n is called the rank of the lattice L. If m = n, the lattice is called full rank. In Euclidean space, every non-full-rank lattice is isomorphic to a full-rank lattice, so without loss of generality we assume throughout the rest of the paper that all lattices are full rank.
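As a concrete illustration (added here, not part of the original paper), the following short Python/NumPy sketch enumerates a few points of the lattice generated by a small 2x2 integer basis directly from the definition L(B) = {Bz : z ∈ Z^n}; the basis matrix is an arbitrary example.

import numpy as np
from itertools import product

# An arbitrary example basis; its columns b_1, b_2 are the basis vectors.
B = np.array([[2, 1],
              [0, 3]])

# Lattice points B*z for all integer coefficient vectors z in a small box.
points = sorted((B @ np.array(z)).tolist() for z in product(range(-2, 3), repeat=2))
print(points)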

The fundamental parallelepiped of B is defined as

P(B) = { Σ_{i=1}^n x_i b_i : x_i ∈ [0, 1) }.

We denote the volume of the fundamental parallelepiped by det(L); it is independent of the choice of the basis.

Minkowski's Minima. For any 1 ≤ i ≤ n, the i-th successive minimum of a lattice L is defined as

λ_i(L) = inf{ r > 0 : dim(span(L ∩ rB(0,1))) ≥ i },

where B(0,1) denotes the open unit ball in the Euclidean norm. In particular, λ_1(L) = min{ ||v|| : v ∈ L, v ≠ 0 } is the length of a shortest non-zero lattice vector.

Covering Radius. The covering radius of a lattice L is defined as ρ(L) = max_{t ∈ R^n} min_{v ∈ L} ||v − t||.

Gram-Schmidt Orthogonalization. Let b_1, ..., b_n ∈ R^n be linearly independent vectors, and let π_i denote the projection onto the orthogonal complement of the linear span of b_1, ..., b_{i−1}. The Gram-Schmidt orthogonalization (GSO) is the family (b̃_1, ..., b̃_n) defined by b̃_1 = b_1 and, for i ≥ 2, b̃_i = π_i(b_i). Equivalently,

b̃_i = b_i − Σ_{j=1}^{i−1} μ_{i,j} b̃_j, where μ_{i,j} = ⟨b_i, b̃_j⟩ / ||b̃_j||² for 1 ≤ j < i ≤ n.

Duality. Given a lattice L = L(B), the dual lattice of L is the lattice

L* = { w ∈ span(L) : ⟨w, v⟩ ∈ Z for all v ∈ L }.

It is easy to verify that (B^T)^{−1} is a basis of L*, which is called the dual basis of B.
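To make the GSO and duality definitions concrete, the following Python/NumPy sketch (an illustration added here, not part of the paper; the basis is an arbitrary example) computes the Gram-Schmidt orthogonalization of a small integer basis and the dual basis (B^T)^{-1}, and checks two standard facts: the GSO vectors are pairwise orthogonal with |det(B)| equal to the product of their lengths, and every dual basis vector has integer inner products with lattice vectors.

import numpy as np

def gso(B):
    """Gram-Schmidt orthogonalization (no normalization) of the columns of B."""
    G = B.astype(float).copy()
    for i in range(B.shape[1]):
        for j in range(i):
            G[:, i] -= (np.dot(G[:, i], G[:, j]) / np.dot(G[:, j], G[:, j])) * G[:, j]
    return G

B = np.array([[2, 1, 0],
              [1, 3, 1],
              [0, 1, 4]])                  # arbitrary example basis (columns b_1, b_2, b_3)
G = gso(B)

# GSO vectors are pairwise orthogonal and their lengths multiply to det(L) = |det(B)|.
assert np.allclose(G.T @ G, np.diag(np.diag(G.T @ G)))
assert np.isclose(np.prod(np.linalg.norm(G, axis=0)), abs(np.linalg.det(B)))

# The columns of (B^T)^{-1} form the dual basis: integer inner products with lattice vectors.
W = np.linalg.inv(B.T)
v = B @ np.array([3, -2, 5])               # a lattice vector
assert np.allclose(np.round(W.T @ v), W.T @ v)
print("GSO and dual-basis checks passed")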
Lattice Problems. For computational purposes, it is usually assumed that all lattice vectors have integer entries, namely, the lattice basis is given by an integer matrix B ∈ Z^{n×n}. There are several important computational problems in lattice theory. We give their precise definitions as follows.

Definition 1 (Shortest Vector Problem (SVP_γ)). Given a basis B ∈ Z^{n×n} for a lattice L = L(B), find a non-zero lattice vector v ∈ L such that ||v|| ≤ γλ_1(L).

Definition 2 (Closest Vector Problem (CVP_γ)). Given a basis B ∈ Z^{n×n} for a lattice L = L(B) and some vector t ∈ R^n (generally not in L), find a lattice vector v ∈ L such that ||v − t|| ≤ γ·dist(t, L), where dist(t, L) = min_{u ∈ L} ||u − t|| denotes the distance between t and L.

Definition 3 (Bounded Distance Decoding (BDD_γ)). Given a basis B ∈ Z^{n×n} for a lattice L = L(B) and a target point t ∈ R^n such that dist(t, L) ≤ γλ_1(L), output a lattice vector v ∈ L(B) such that ||v − t|| = dist(t, L).

Definition 4 (Shortest Independent Vectors Problem (SIVP_γ)). Given a basis B ∈ Z^{n×n} for a lattice L = L(B), find n linearly independent lattice vectors v_1, ..., v_n ∈ L such that max_i ||v_i|| ≤ γλ_n(L).

2.2 Useful Lemmas

In this subsection, we give some useful lemmas for our reductions. Since we study lattices from a computational point of view, without loss of generality we assume that lattices are represented by a basis with integer coordinates. By the definition of the Gram-Schmidt orthogonalization, the following lemma bounds the bit size of the representation of any Gram-Schmidt orthogonalization vector.

Lemma 1 [15]. For a sequence of n linearly independent vectors b_1, ..., b_n, let b̃_1, ..., b̃_n be their Gram-Schmidt orthogonalization. Then the representation of any vector b̃_i as a vector of quotients of natural numbers takes at most poly(M) bits, where M = max{n, log(max_i ||b_i||)}.

Clearly, a set of n linearly independent lattice vectors is not necessarily a lattice basis. The following useful lemma says that any full-rank set of vectors in a lattice can be efficiently converted into a basis of the lattice without increasing the length of the Gram-Schmidt vectors.

Lemma 2 [15]. There is a deterministic polynomial time algorithm ConverttoBasis(B, S) that, on input a lattice basis B and linearly independent lattice vectors S = {s_1, ..., s_n} ⊆ L(B) with ||s_1|| ≤ ||s_2|| ≤ ... ≤ ||s_n||, outputs a basis R of L(B) such that ||r_k|| ≤ max{ (√k/2)||s_k||, ||s_k|| } for all k = 1, ..., n. Moreover, the new basis satisfies span(r_1, ..., r_k) = span(s_1, ..., s_k), and the Gram-Schmidt orthogonalization vectors satisfy ||r̃_k|| ≤ ||s̃_k|| for all k = 1, ..., n.

Two important results relate a primal lattice to its dual. Lemma 3 shows that, taken in the appropriate (reverse) order, the Gram-Schmidt orthogonalization vectors of the dual basis point in the same directions as the Gram-Schmidt orthogonalization vectors of the primal basis. Lemma 4 is the well-known transference theorem; it relates the successive minima of a lattice and its dual.

Lemma 3 (see, e.g., Regev's lecture notes on the dual lattice, http://www.cims.nyu.edu/~regev/teaching/lattices_fall_2009/, April 2014). Let b_1, ..., b_n be a basis of L and b̃_1, ..., b̃_n be its Gram-Schmidt orthogonalization. Let d_1, ..., d_n be the dual basis of b_1, ..., b_n and let d̃_n, ..., d̃_1 be its Gram-Schmidt orthogonalization in reverse order; in other words, d̃_n = d_n and

d̃_i = d_i − Σ_{j>i} ν_{i,j} d̃_j, where ν_{i,j} = ⟨d_i, d̃_j⟩ / ⟨d̃_j, d̃_j⟩ for 1 ≤ i < j ≤ n.

Then, for all 1 ≤ i ≤ n, d̃_i = b̃_i / ||b̃_i||².

Lemma 4 [7]. For any n-dimensional lattice L, λ_1(L)·λ_n(L*) ≤ n.

At SODA 2000, Klein [16] proposed a randomized algorithm that finds the closest lattice vector when the target vector is unusually close to the lattice. It is in fact a randomized version of Babai's algorithm [14]: it samples lattice points from a Gaussian-like distribution and outputs the closest point among all the samples.

Lemma 5 [16]. There is a randomized algorithm Klein(B, t) that, given an n-dimensional lattice L generated by basis vectors b_1, ..., b_n and a target t ∈ R^n at distance D from L, finds the closest lattice vector to t in time n^{D²/min_i ||b̃_i||²}, where b̃_1, ..., b̃_n are the Gram-Schmidt orthogonalization vectors of b_1, ..., b_n.
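The relation in Lemma 3 is the key fact behind the reductions in Section 3, so a quick numerical sanity check may be helpful. The following Python/NumPy sketch (added for illustration, not part of the paper; it uses an arbitrary random basis and a small self-contained gso helper) computes the dual basis (B^T)^{-1}, the GSO of the primal basis, and the reverse-order GSO of the dual basis, and checks that d̃_i = b̃_i/||b̃_i||².

import numpy as np

def gso(B):
    """Gram-Schmidt orthogonalization (no normalization) of the columns of B."""
    G = B.astype(float).copy()
    for i in range(B.shape[1]):
        for j in range(i):
            G[:, i] -= (np.dot(G[:, i], G[:, j]) / np.dot(G[:, j], G[:, j])) * G[:, j]
    return G

rng = np.random.default_rng(0)
n = 4
B = rng.integers(-5, 6, size=(n, n))
while abs(np.linalg.det(B)) < 0.5:           # make sure the example basis is non-singular
    B = rng.integers(-5, 6, size=(n, n))

D = np.linalg.inv(B.T)                       # dual basis: columns d_1, ..., d_n
Bt = gso(B)                                  # primal GSO b~_1, ..., b~_n
Dt = gso(D[:, ::-1])[:, ::-1]                # GSO of d_n, ..., d_1, stored back as d_1, ..., d_n

# Lemma 3: the reverse-order GSO of the dual basis satisfies d~_i = b~_i / ||b~_i||^2.
for i in range(n):
    assert np.allclose(Dt[:, i], Bt[:, i] / np.dot(Bt[:, i], Bt[:, i]))
print("Lemma 3 holds on this example")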
3 Finding a Closest Lattice Vector When the Target Is Close to the Lattice

In this section, we study algorithms for the special CVP instances captured by the BDD_γ problem, given an SIVP_γ oracle. We improve the previously known result with two different algorithms, one randomized and one deterministic. First, we review some previous work.

Lemma 6 [8]. For any γ ≥ 1, there is a polynomial time Cook-reduction from BDD_{1/(2γ)} to uSVP_γ.

Lemma 7 [17]. For any γ ≥ 1, there is a probabilistic polynomial time reduction from uSVP_{γn} to SIVP_γ.

Combining the above two lemmas, we have the following result, which is also shown in [18].

Lemma 8. For any γ ≥ 1, there is a probabilistic polynomial time reduction from BDD_{1/(2γn)} to SIVP_γ.

Combining Klein's algorithm [16] with the relationship between primal and dual lattices, we first improve Lemma 8 using a randomized reduction algorithm. Namely, we prove the following result.

Theorem 1. For any γ ≥ 1 and any constant c > 0, there exists a randomized polynomial time reduction from BDD_{c/(γn)} to SIVP_γ.

Proof. Given an SIVP_γ oracle and any constant c > 0, we only need to show that Algorithm 1 outputs a lattice vector v ∈ L such that ||v − t|| = dist(t, L) in poly(n) time. In step 2, for any 1 ≤ i ≤ n, ||s̃_i|| ≤ ||s_i|| ≤ γλ_n(L*). In step 3, by Lemma 2, the n linearly independent vectors s_1, ..., s_n can be converted into a basis d_1, ..., d_n of the dual lattice L* satisfying ||d_i|| ≤ max{ (√i/2)||s_i||, ||s_i|| } and ||d̃_i|| ≤ ||s̃_i|| for 1 ≤ i ≤ n, where d̃_1, ..., d̃_n and s̃_1, ..., s̃_n are the Gram-Schmidt orthogonalization vectors of d_1, ..., d_n and s_1, ..., s_n, respectively.

Algorithm 1. BDD Algorithm: BDD(B, t)
Input: a lattice basis B ∈ Z^{n×n}, a target vector t such that dist(t, L) < (c/(γn))·λ_1(L), and an SIVP_γ oracle O, where 1 < γ ≤ poly(n) and c > 0 is any constant.
Output: a lattice vector v ∈ L such that dist(t, L) = ||v − t||.
1: Compute the dual basis of B: W = (w_1, ..., w_n) = (B^T)^{−1}, which is a basis of L*.
2: Invoke the SIVP_γ oracle on the lattice L*, obtaining S = (s_1, ..., s_n) ← SIVP_γ(L*).
3: Compute a basis of L*: D = (d_1, ..., d_n) ← ConverttoBasis(W, S).
4: Compute a basis of the original lattice L: R = (r_1, ..., r_n) = (D^T)^{−1}.
5: Return v ← Klein(R, t).

Let r̃_n, r̃_{n−1}, ..., r̃_1 be the Gram-Schmidt orthogonalization of r_1, r_2, ..., r_n in reverse order. Then, by Lemma 3 and Lemma 4, for all 1 ≤ i ≤ n,

r̃_i = d̃_i / ||d̃_i||² and ||r̃_i|| = 1/||d̃_i|| ≥ 1/||s̃_i|| ≥ 1/(γλ_n(L*)) ≥ λ_1(L)/(γn).

Combining this with Lemma 5, we can find the closest lattice vector to t in time n^{D²/min_i ||r̃_i||²} = O(n^{c²}).

Furthermore, we can improve the above algorithm in a deterministic way.

Theorem 2. For any γ ≥ 1 and any constant c > 0, there exists a deterministic polynomial time reduction from BDD_{c/(γn)} to SIVP_γ.

Proof. We give our algorithm in two steps. First, we show how to reduce BDD_{1/(2γn)} to SIVP_γ, which is in fact a derandomization of Lemma 8. Second, for an arbitrary but finite constant c > 1, we give a self-reduction from BDD_{c/(γn)} to BDD_{√(c²−1/4)/(γn)} with an SIVP_γ oracle.

Step 1. Reducing BDD_{1/(2γn)} to SIVP_γ. Our reduction is shown in Algorithm 2. Clearly, using Gaussian elimination, Algorithm 2 outputs a lattice vector efficiently; we only need to prove its correctness. Let (L(B), t) be an instance of BDD_{1/(2γn)} with dist(t, L) < λ_1(L)/(2γn), and let v be a lattice vector in L such that ||t − v|| = dist(t, L). For 1 ≤ i ≤ n, since ||s_i|| ≤ γλ_n(L*) and ⟨v, s_i⟩ ∈ Z, then, by Lemma 4,

|⟨v, s_i⟩ − ⟨t, s_i⟩| = |⟨v − t, s_i⟩| ≤ ||v − t||·||s_i|| < (λ_1(L)/(2γn))·γλ_n(L*) ≤ 1/2.

It implies that ⟨v, s_i⟩ ∈ (⟨t, s_i⟩ − 1/2, ⟨t, s_i⟩ + 1/2). Since there exists at most one integer in this interval, the lattice vector v satisfies the system of linear equations ⟨v, s_i⟩ = c_i for 1 ≤ i ≤ n, where c_i denotes the integer nearest to ⟨t, s_i⟩.

Algorithm 2. BDD_{1/(2γn)}(B, t)
Input: a lattice basis B ∈ Z^{n×n}, a target vector t such that dist(t, L) < λ_1(L)/(2γn), and an SIVP_γ oracle, where 1 < γ ≤ poly(n).
Output: a lattice vector v ∈ L such that dist(t, L) = ||v − t||.
1: Invoke the SIVP_γ oracle on the lattice L*, obtaining S = (s_1, ..., s_n) ← SIVP_γ(L*).
2: Solve the linear equations ⟨v, s_i⟩ = c_i for 1 ≤ i ≤ n, where c_i is the integer nearest to ⟨t, s_i⟩, and output v.

Step 2. Solving BDD_{c/(γn)} instances using BDD_{√(c²−1/4)/(γn)} and SIVP_γ oracles. The algorithm is described in Algorithm 3. First, we prove the correctness of Algorithm 3. Let (L(B), t) be an instance of BDD_{c/(γn)} with dist(t, L) < cλ_1(L)/(γn), and let v be a lattice vector in L such that ||t − v|| = dist(t, L). Invoke the SIVP_γ oracle on the dual lattice L* to obtain n linearly independent lattice vectors {s_1, ..., s_n} ⊆ L* such that ||s_i|| ≤ γλ_n(L*) and ⟨v, s_i⟩ ∈ Z for 1 ≤ i ≤ n. Then, for any 1 ≤ i ≤ n,

|⟨v, s_i⟩ − ⟨t, s_i⟩| = |⟨v − t, s_i⟩| ≤ ||v − t||·||s_i|| < (cλ_1(L)/(γn))·γλ_n(L*) ≤ c.

It implies that ⟨v, s_i⟩ ∈ (⟨t, s_i⟩ − c, ⟨t, s_i⟩ + c). Since there are at most 2c integers in this interval, the integer ⟨v, s_i⟩ is one of these adjacent integers. Each vector s_i ∈ L* (1 ≤ i ≤ n) partitions L into the subsets L ∩ H_{i,j} (j ∈ Z), where H_{i,j} denotes the (n−1)-dimensional hyperplane H_{i,j} = {x ∈ R^n : ⟨x, s_i⟩ = j}. Clearly, the distance between any two adjacent hyperplanes H_{i,j} and H_{i,j+1} is 1/||s_i||. The above analysis shows that, for each partition induced by s_i, the closest vector v must be located on one of the at most 2c hyperplanes nearest to t. We discuss the following cases.

Algorithm 3. BDD_{c/(γn)}(B, t)
Input: a lattice basis B ∈ Z^{n×n} and some constant c > 1, a target vector t such that dist(t, L) < (c/(γn))·λ_1(L), and BDD_{√(c²−1/4)/(γn)} and SIVP_γ oracles, where 1 < γ ≤ poly(n).
Output: a lattice vector v ∈ L such that dist(t, L) = ||v − t||.
1: Invoke the SIVP_γ oracle on the lattice L*, obtaining S = (s_1, ..., s_n) ← SIVP_γ(L*).
2: Solve the linear equations ⟨v_0, s_i⟩ = c_i for 1 ≤ i ≤ n, where c_i is the integer nearest to ⟨t, s_i⟩, and output v_0.
3: for i = 1, 2, ..., n do
4:   for j = ⌈⟨t, s_i⟩ − c⌉, ..., ⌊⟨t, s_i⟩ + c⌋ do
5:     Compute a vector w_{i,j} ∈ L ∩ H_{i,j}.
6:     Compute the projection t_{i,j} of t on H_{i,j}.
7:     L_{i,j} ← (L ∩ H_{i,j}) − w_{i,j}, t_{i,j} ← t_{i,j} − w_{i,j}.
8:     v_{i,j} ← BDD_{√(c²−1/4)/(γn)}(L_{i,j}, t_{i,j}).
9:     v_{i,j} ← v_{i,j} + w_{i,j}.
10:   end for
11: end for
12: Output the closest point to t among all the points v_{i,j} and v_0.

Case 1. Suppose that, for every 1 ≤ i ≤ n, v lies on the hyperplane H_{i,j} whose index j is the integer nearest to ⟨t, s_i⟩. Then solving the linear equations of step 2 immediately recovers v = v_0.

Case 2. Suppose that v lies on H_{i,j} for some 1 ≤ i ≤ n and some j that is not the integer nearest to ⟨t, s_i⟩.
Then, by Lemma 4, we obtain the following two results:

||t − t_{i,j}|| ≥ 1/(2||s_i||) ≥ 1/(2γλ_n(L*)) ≥ λ_1(L)/(2γn),

dist(t_{i,j}, L_{i,j}) = dist(t_{i,j}, L ∩ H_{i,j}) ≤ (dist(t, L)² − ||t − t_{i,j}||²)^{1/2} < (c²λ_1(L)²/(γ²n²) − λ_1(L)²/(4γ²n²))^{1/2} = (√(c²−1/4)/(γn))·λ_1(L) ≤ (√(c²−1/4)/(γn))·λ_1(L_{i,j}).

It is easy to verify that L_{i,j} is an (n−1)-dimensional sublattice of L. Therefore, the recovery of v is converted into a BDD_{√(c²−1/4)/(γn)} instance (L_{i,j}, t_{i,j}).

Now, we analyze the efficiency of Algorithm 3. In step 2 of Algorithm 3, the vector v_0 can be found efficiently by Gaussian elimination. Using the Euclidean algorithm, we can find w_{i,j} efficiently in step 5, and, for step 7, Micciancio [13] presented an efficient deterministic algorithm to find a basis of L_{i,j}. Therefore, invoking the BDD_{√(c²−1/4)/(γn)} oracle at most 2cn times, we can find a closest vector v ∈ L to t in deterministic polynomial time in n.

For an arbitrary but finite constant c > 0, given an SIVP_γ oracle, BDD_{c/(γn)} can thus be solved by invoking the BDD_{√(c²−1/4)/(γn)} oracle O(cn) times. Recursively, the BDD_{c/(γn)} problem reduces to BDD_{√(c²−m/4)/(γn)} after (2cn)^m recursions. Setting √(c²−m/4)/(γn) ≤ 1/(2γn), we get m ≥ 4c² − 1. This implies that, combining Algorithm 2 and Algorithm 3 and invoking the SIVP_γ oracle at most (2cn)^{4c²−1} times, we can solve a BDD_{c/(γn)} instance in deterministic polynomial time.
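To make the linear-algebra step at the heart of Algorithm 2 concrete, here is a small Python/NumPy sketch (added for illustration, not part of the paper). It plants a lattice point v, perturbs it by a tiny error to obtain t, and recovers v by rounding the inner products ⟨t, s_i⟩ and solving the resulting linear system. For illustration the short dual vectors s_i are simply taken to be the columns of the dual basis (B^T)^{−1} rather than the output of an SIVP_γ oracle, and the error is chosen small enough that every |⟨t − v, s_i⟩| < 1/2.

import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.integers(-4, 5, size=(n, n))
while abs(np.linalg.det(B)) < 0.5:            # keep the example basis non-singular
    B = rng.integers(-4, 5, size=(n, n))

S = np.linalg.inv(B.T)                        # columns s_1, ..., s_n lie in the dual lattice L*
v = B @ rng.integers(-10, 11, size=n)         # the planted closest lattice vector
e = rng.standard_normal(n)
e *= 1e-6 / np.linalg.norm(e)                 # tiny error, so every |<e, s_i>| < 1/2
t = v + e

c = np.rint(S.T @ t)                          # c_i = integer nearest to <t, s_i>
v_rec = np.linalg.solve(S.T, c)               # solve the system <v, s_i> = c_i for v
assert np.allclose(v_rec, v)
print("recovered the planted closest vector")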

4 Approximating a Closest Lattice Vector When the Target Is Far from the Lattice

First, we review some previously known results about the distance between a uniformly chosen target and a lattice.

Lemma 9 [19]. Given an n-dimensional lattice L(B) and a vector t chosen uniformly from P(B), we have Pr_t[ dist(t, L(B)) ≥ ρ(L)/2 ] ≥ 1/2, where ρ(L) denotes the covering radius of L.

Lemma 10 [15]. For any n-dimensional lattice L(B), λ_n(L)/2 ≤ ρ(L) ≤ (√n/2)·λ_n(L).

By Lemma 9 and Lemma 10, for any uniformly chosen target vector t,

Pr_t[ dist(t, L(B)) ≥ λ_n(L)/4 ] ≥ Pr_t[ dist(t, L(B)) ≥ ρ(L)/2 ] ≥ 1/2.

Let L = L(B) be a lattice and t ∈ R^n a target vector, and suppose we have n linearly independent lattice vectors s_1, ..., s_n satisfying ||s_i|| ≤ γλ_n(L) for all 1 ≤ i ≤ n. Computing their Gram-Schmidt orthogonalization vectors s̃_1, ..., s̃_n and using Babai's nearest plane algorithm [14], we can find a vector v ∈ L such that

dist(v, t) ≤ (1/2)·( Σ_{i=1}^n ||s̃_i||² )^{1/2} ≤ (√n/2)·max_i ||s̃_i|| ≤ (γ/2)·√n·λ_n(L).

If dist(t, L(B)) ≥ λ_n(L)/4, then using an SIVP_γ oracle we can therefore find a vector v ∈ L such that

dist(v, t) ≤ (γ/2)·√n·λ_n(L) ≤ 2γ√n·dist(t, L).

In summary, the above analysis gives the following result.

Corollary 1. Given an n-dimensional lattice L = L(B) and a target vector t ∈ R^n, if dist(t, L) ≥ λ_n(L)/4, then CVP_{2γ√n} can be reduced to SIVP_γ in deterministic polynomial time. In particular, for a uniformly chosen target vector, the reduction algorithm is correct with probability not less than 1/2.
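The bound dist(v, t) ≤ (1/2)(Σ_i ||s̃_i||²)^{1/2} is exactly Babai's nearest-plane guarantee. The following Python/NumPy sketch (added for illustration, not part of the paper) implements the nearest-plane procedure for an arbitrary random basis and checks this bound on a random target; in the reduction above, the basis would instead be obtained from the SIVP_γ oracle output via Lemma 2.

import numpy as np

def gso(B):
    """Gram-Schmidt orthogonalization (no normalization) of the columns of B."""
    G = B.astype(float).copy()
    for i in range(B.shape[1]):
        for j in range(i):
            G[:, i] -= (np.dot(G[:, i], G[:, j]) / np.dot(G[:, j], G[:, j])) * G[:, j]
    return G

def nearest_plane(B, t):
    """Babai's nearest plane: returns v in L(B) with ||v - t||^2 <= (1/4) * sum_i ||b~_i||^2."""
    G = gso(B)
    r = t.astype(float).copy()
    v = np.zeros_like(r)
    for i in reversed(range(B.shape[1])):
        c = round(np.dot(r, G[:, i]) / np.dot(G[:, i], G[:, i]))
        r -= c * B[:, i]
        v += c * B[:, i]
    return v

rng = np.random.default_rng(2)
n = 6
B = rng.integers(-3, 4, size=(n, n))
while abs(np.linalg.det(B)) < 0.5:            # keep the example basis non-singular
    B = rng.integers(-3, 4, size=(n, n))

t = rng.uniform(-20, 20, size=n)
v = nearest_plane(B, t)
bound = 0.5 * np.sqrt(np.sum(np.linalg.norm(gso(B), axis=0) ** 2))
print(np.linalg.norm(v - t), "<=", bound)
assert np.linalg.norm(v - t) <= bound + 1e-9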

Furthermore, if 1 < γ < 2, using the lattice-embedding technique, we can get a better result.

Theorem 3. Given an n-dimensional lattice L = L(B) and a target vector t ∈ R^n, for any real k > √3/3 and 1 < γ < 2k/√(1+k²), if dist(t, L) = min_{v∈L} ||v − t|| > (γ/(2k))·λ_n(L), then there exists a Cook reduction from CVP_{√3·k(1+1/n)} to SIVP_γ.

Proof. Let µ = dist(t, L). Using Babai's nearest plane algorithm, we can compute a real d satisfying µ ≤ d < 2^n·µ, namely, µ ∈ (d/2^n, d]. Divide the interval (d/2^n, d] into poly(n) small intervals ((d/2^n)(1+1/n)^i, (d/2^n)(1+1/n)^{i+1}]. For each i_0 = 0, 1, ..., ⌈n/log_2(1+1/n)⌉, guess

µ ∈ ((d/2^n)(1+1/n)^{i_0}, (d/2^n)(1+1/n)^{i_0+1}].

Let µ_0 = (d/2^n)(1+1/n)^{i_0+1}; then µ ≤ µ_0 < µ(1+1/n). Let B̄ = (B, t; 0, kµ_0), i.e., the columns of B̄ are d_i = (b_i, 0) for 1 ≤ i ≤ n and d_{n+1} = (t, kµ_0). The reduction algorithm is given as Algorithm 4; the Cook reduction runs it once for every candidate µ_0 and outputs the returned lattice vector closest to t.

Algorithm 4. Lattice-Embedding(B, t)
Input: a lattice basis B ∈ Z^{n×n}, parameters k > √3/3, µ_0, and 1 < γ < 2k/√(1+k²), and an SIVP_γ oracle.
Output: a lattice vector v ∈ L.
1: Construct a new lattice L̄ = L(B̄).
2: Invoke the SIVP_γ oracle on L̄: v̄_1, v̄_2, ..., v̄_{n+1} ← SIVP_γ(L̄).
3: Express each v̄_i = Σ_{j=1}^{n+1} z_{i,j} d_j.
4: Return v = −z_{i*,n+1}·Σ_{j=1}^n z_{i*,j} b_j, where i* is any index such that z_{i*,n+1} ≠ 0.

Now we prove the correctness of our algorithm in two cases.

Case 1: λ_{n+1}(L̄)² ≤ µ² + (kµ_0)². Every vector v̄_i can be represented as an integer linear combination of d_1, ..., d_{n+1}, and since the v̄_i are linearly independent, at least one of them has a non-zero coefficient on d_{n+1}. Without loss of generality, assume that

v̄_{n+1} = Σ_{i=1}^n z_i d_i + z_{n+1} d_{n+1} = ( Σ_{i=1}^n z_i b_i + z_{n+1} t, z_{n+1} kµ_0 ), z_{n+1} ≠ 0.

We now show that |z_{n+1}| = 1. In fact, if |z_{n+1}| ≥ 2, then ||v̄_{n+1}||² ≥ 4(kµ_0)². On the other hand, by step 2,

||v̄_{n+1}||² = || Σ_{i=1}^n z_i b_i + z_{n+1} t ||² + (z_{n+1} kµ_0)² ≤ γ²λ_{n+1}(L̄)² ≤ γ²(µ² + (kµ_0)²) ≤ γ²(µ_0² + (kµ_0)²),

which implies that 4(kµ_0)² ≤ γ²(µ_0² + (kµ_0)²), i.e., 4k² ≤ γ²(1 + k²). This contradicts the condition γ < 2k/√(1+k²) in Theorem 3. Therefore |z_{n+1}| = 1, and, replacing v̄_{n+1} by −v̄_{n+1} if necessary, we may assume z_{n+1} = −1. Let v = Σ_{i=1}^n z_i b_i. Then

||v − t||² = ||v̄_{n+1}||² − (z_{n+1} kµ_0)² ≤ γ²µ² + (γ² − 1)(kµ_0)² ≤ (γ²(1 + k²) − k²)µ_0² < 3k²µ_0²,

so ||v − t|| < √3·kµ_0 ≤ √3·k(1 + 1/n)·µ.

Case 2: λ_{n+1}(L̄)² > µ² + (kµ_0)². In this case, by the definitions of λ_{n+1}(L̄) and λ_n(L), we have µ² + (kµ_0)² < λ_{n+1}(L̄)² ≤ λ_n(L)². Similarly, we show that |z_{n+1}| = 1. In fact, if |z_{n+1}| ≥ 2, then ||v̄_{n+1}||² ≥ 4(kµ_0)², while by step 2,

||v̄_{n+1}||² ≤ γ²λ_{n+1}(L̄)² ≤ γ²λ_n(L)².

Hence 4(kµ_0)² ≤ γ²λ_n(L)², i.e., µ_0 ≤ (γ/(2k))·λ_n(L), and thus µ ≤ µ_0 ≤ (γ/(2k))·λ_n(L). This contradicts the condition dist(t, L) > (γ/(2k))·λ_n(L) in Theorem 3. Therefore |z_{n+1}| = 1, and again we may assume z_{n+1} = −1. Let v = Σ_{i=1}^n z_i b_i. Then

||v − t||² = ||v̄_{n+1}||² − (z_{n+1} kµ_0)² ≤ γ²λ_n(L)² − (kµ_0)² < 4k²µ² − k²µ_0² ≤ 4k²µ² − k²µ² = 3k²µ²,

so ||v − t|| < √3·kµ.

Combining the above two cases, we complete the proof of Theorem 3.

Remark 1. In fact, setting k = 2γ in Theorem 3, we immediately obtain that if dist(t, L) > (1/4)·λ_n(L), then CVP_{2√3·γ(1+1/n)} can be reduced to SIVP_γ for 1 < γ < √15/2. This reduction factor for CVP is much better than that in Corollary 1 for n ≥ 5. If instead we fix the reduction factor and let 2γ√n = √3·k(1+1/n), then γ/(2k) = √3(1+1/n)/(4√n) < 1/4 for n ≥ 5, and the condition 1 < γ < 2k/√(1+k²) of Theorem 3 becomes 1 < γ < √(4 − 3(1+1/n)²/(4n)). This implies that, for n ≥ 5 and 1 < γ < √(4 − 3(1+1/n)²/(4n)), the reduction in Theorem 3 is valid for many more target vectors than that in Corollary 1.

5 Conclusions

Motivated by the open problem posed by Micciancio at SODA 2008, this paper studied the relationships between CVP and SIVP. Given a lattice and some target vector, intuitively, the hardness differs as the distance between the target vector and the lattice varies. Along this line, we gave some preliminary results about the relations between SIVP and some special CVP instances, which may be helpful for a full and final solution of the open problem. Solving this problem would have a great impact on computational complexity theory and the security of lattice-based cryptosystems, and it is the direction of our future work.

References

[1] Goldreich O, Micciancio D, Safra S, Seifert J P. Approximating shortest lattice vectors is not harder than approximating closest lattice vectors. Information Processing Letters, 1999, 71(2): 55-61.
[2] Dinur I, Kindler G, Raz R, Safra S. Approximating CVP to within almost-polynomial factors is NP-hard. Combinatorica, 2003, 23(2): 205-243.
[3] Haviv I, Regev O. Tensor-based hardness of the shortest vector problem to within almost polynomial factors. In Proc. the 39th Annual ACM Symp. Theory of Computing, June 2007, pp.469-477.
[4] Kannan R. Minkowski's convex body theorem and integer programming. Mathematics of Operations Research, 1987, 12(3): 415-440.
[5] Ajtai M, Kumar R, Sivakumar D. Sampling short lattice vectors and the closest lattice vector problem. In Proc. the 17th IEEE Annual Conf. Computational Complexity, May 2002, pp.41-45.
[6] Kannan R. Algorithmic geometry of numbers. Annual Review of Computer Science, 1987, 2: 231-267.
[7] Banaszczyk W. New bounds in some transference theorems in the geometry of numbers. Mathematische Annalen, 1993, 296(1): 625-635.
[8] Lyubashevsky V, Micciancio D. On bounded distance decoding, unique shortest vectors, and the minimum distance problem. In Lecture Notes in Computer Science 5677, Halevi S (ed.), Springer Berlin Heidelberg, 2009, pp.577-594.
[9] Dubey C, Holenstein T. Approximating the closest vector problem using an approximate shortest vector oracle. In Lecture Notes in Computer Science 6845, Goldberg L A, Jansen K, Ravi R, Rolim J D P (eds.), Springer Berlin Heidelberg, 2011, pp.184-193.
[10] Ajtai M. Generating hard instances of lattice problems (extended abstract). In Proc. the 28th Annual ACM Symp. Theory of Computing, May 1996, pp.99-108.
[11] Regev O. On lattices, learning with errors, random linear codes, and cryptography. In Proc. the 37th Annual ACM Symp. Theory of Computing, May 2005, pp.84-93.
[12] Blömer J, Seifert J P. On the complexity of computing short linearly independent vectors and short bases in a lattice. In Proc. the 31st Annual ACM Symp. Theory of Computing, May 1999, pp.711-720.
[13] Micciancio D. Efficient reductions among lattice problems. In Proc. the 19th Annual ACM-SIAM Symp. Discrete Algorithms, January 2008, pp.84-93.
[14] Babai L. On Lovász' lattice reduction and the nearest lattice point problem. Combinatorica, 1986, 6(1): 1-13.
[15] Micciancio D, Goldwasser S. Complexity of Lattice Problems: A Cryptographic Perspective. Kluwer Academic Publishers, 2002.
[16] Klein P. Finding the closest lattice vector when it's unusually close. In Proc. the 11th Annual ACM-SIAM Symp. Discrete Algorithms, January 2000, pp.937-941.
[17] Cai J. A new transference theorem in the geometry of numbers and new bounds for Ajtai's connection factor. Discrete Applied Mathematics, 2003, 126(1): 9-31.
[18] Micciancio D. The geometry of lattice cryptography. In Lecture Notes in Computer Science 6858, Aldini A, Gorrieri R (eds.), Springer Berlin Heidelberg, 2011, pp.185-210.
[19] Guruswami V, Micciancio D, Regev O. The complexity of the covering radius problem. Computational Complexity, 2005, 14(2): 90-121.

Cheng-Liang Tian received his B.S. and M.S. degrees in mathematics from Northwest University, Xi'an, in 2006 and 2009 respectively, and his Ph.D. degree in information security from Shandong University, Jinan, in 2013. He is a post-doctor in the State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing. His research interest is lattice-based cryptography.
Wei Wei received her B.S. degree in mathematics from Shandong University, Jinan, in 2008. She is currently a Ph.D. candidate at Tsinghua University, Beijing. Her current research interest is lattice-based cryptography.

Dong-Dai Lin received his B.S. degree in mathematics from Shandong University, Jinan, and his M.S. and Ph.D. degrees in fundamental mathematics from the Institute of Systems Science, Chinese Academy of Sciences, Beijing. He is now the director of the State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences. His research interests include cryptology, security protocols, symbolic computation and software development, and he is currently working on multivariate public key cryptography, sequences and stream ciphers, zero knowledge proofs, and network-based cryptographic computation.