Low-Dimensional Lattice Basis Reduction Revisited (Extended Abstract)


Algorithmic Number Theory — Proceedings of ANTS-VI (June 13–18, 2004, Burlington, U.S.A.), D. Buell (Ed.), vol. ???? of Lecture Notes in Computer Science, pages ??????, © Springer-Verlag.

Low-Dimensional Lattice Basis Reduction Revisited (Extended Abstract)

Phong Q. Nguyen¹ and Damien Stehlé²

¹ CNRS/École normale supérieure, Département d'informatique, 45 rue d'Ulm, Paris Cedex 05, France. pnguyen@di.ens.fr
² LORIA/INRIA Lorraine, 615 rue du J. botanique, Villers-lès-Nancy, France. stehle@loria.fr

Abstract. Most of the interesting algorithmic problems in the geometry of numbers are NP-hard as the lattice dimension increases. This article deals with the low-dimensional case. We study a greedy lattice basis reduction algorithm for the Euclidean norm, which is arguably the most natural lattice basis reduction algorithm, because it is a straightforward generalization of the well-known two-dimensional Gaussian algorithm. Our results are two-fold. From a mathematical point of view, we show that up to dimension four, the output of the greedy algorithm is optimal: the output basis reaches all the successive minima of the lattice. However, as soon as the lattice dimension is strictly higher than four, the output basis may not even reach the first minimum. More importantly, from a computational point of view, we show that up to dimension four, the bit-complexity of the greedy algorithm is quadratic without fast integer arithmetic: this allows us to solve various lattice problems (e.g. computing a Minkowski-reduced basis and a closest vector) in quadratic time, without fast integer arithmetic, up to dimension four, while all other algorithms known for such problems have a bit-complexity which is at least cubic. This was already proved by Semaev up to dimension three using rather technical means, but it was previously unknown whether or not the algorithm was still polynomial in dimension four.
Our analysis, based on geometric properties of low-dimensional lattices and in particular Voronoï cells, arguably simplifies Semaev's analysis in dimensions two and three, unifies the cases of dimensions two, three and four, but breaks down in dimension five.

1 Introduction

A lattice is a discrete subgroup of R^n. Any lattice L has a lattice basis, i.e. a set {b_1, ..., b_d} of linearly independent vectors such that the lattice is the set of all integer linear combinations of the b_i's: L[b_1, ..., b_d] = {Σ_{i=1}^d x_i b_i : x_i ∈ Z}. A lattice basis is usually not unique, but all the bases have the same number of elements, called the dimension or rank of the lattice. In dimension higher than one, there are infinitely many bases, but some are more interesting than others: they are called reduced. Roughly speaking, a reduced basis is a basis made of reasonably short vectors which are almost orthogonal. Finding good reduced bases has proved invaluable in many fields of computer science and mathematics, particularly in cryptology (see for instance the survey [18]); and the computational complexity of lattice problems has attracted considerable attention in the past few years (see for instance the book [16]), following Ajtai's discovery [1] of a connection between the worst-case and average-case hardness of certain lattice problems. There exist many different notions of reduction, such as those of Hermite [8], Minkowski [17], Korkine-Zolotarev (KZ) [9, 11], Venkov [19], Lenstra-Lenstra-Lovász [13], etc. Among these, the most intuitive one is perhaps Minkowski's reduction; and up to dimension four, it is arguably optimal compared to all other known reductions, because it reaches all the so-called successive minima. However, finding a Minkowski-reduced basis or a KZ-reduced basis is NP-hard under randomized reductions as the dimension increases, because such bases contain a shortest lattice vector, and the shortest vector problem is NP-hard under randomized reductions [2, 15]. In order to better understand lattice reduction, it is tempting to study the low-dimensional case. Improvements in low-dimensional lattice reduction may lead to significant running-time improvements in high-dimensional lattice reduction, as the best lattice reduction algorithms known in theory and in practice for high-dimensional lattices, namely Schnorr's blockwise KZ-reduction [20] and its heuristic variants [21, 22], are based on a repeated use of low-dimensional KZ-reduction. The classical Gaussian algorithm [5] computes in quadratic time (without fast integer arithmetic [23]) a Minkowski-reduced basis of any two-dimensional lattice.
This algorithm was extended to dimension three by Vallée [27] in 1986 and Semaev [24] in 2001: Semaev's algorithm is quadratic without fast integer arithmetic, whereas Vallée's algorithm has cubic complexity. More generally, Helfrich [7] showed in 1986 by means of the LLL algorithm [13] how to compute in cubic time a Minkowski-reduced basis of any lattice of fixed (arbitrary) dimension, but the hidden complexity constant grows very fast with the dimension. In this paper, we generalize the Gaussian algorithm to arbitrary dimension. Although the obtained greedy algorithm is arguably the simplest lattice basis reduction algorithm known, its analysis becomes remarkably more and more complex as the dimension increases. Semaev [24] was the first to prove that the algorithm was still polynomial-time in dimension three, but the polynomial-time complexity remained open for higher dimension. We show that up to dimension four, the greedy algorithm computes a Minkowski-reduced basis in quadratic time without fast arithmetic. This implies that a shortest vector and a KZ-reduced basis can be computed in quadratic time up to dimension four. Independently of the running-time improvement, we hope our analysis will help to design new lattice reduction algorithms. The main novelty of our approach compared to previous work is that we use geometrical properties of low-dimensional lattices. In dimension two, the method is very close to the argument given by Semaev in [24], which is itself very different from previous analyses of the Gaussian algorithm [12, 28, 10]. In dimension three, Semaev's analysis [24] is based on a rather exhaustive analysis of all the possible behaviors of the algorithm, which involves quite a few computations and makes it difficult to extend to higher dimension. We replace the main technical arguments by geometrical considerations on two-dimensional lattices. This makes it possible to extend the analysis to dimension four, by carefully studying geometrical properties of three-dimensional lattices, although a few additional difficulties appear. However, it is still unknown whether or not the greedy algorithm remains polynomial-time beyond dimension four. Besides, we show that the output basis may not even reach the shortest vector beyond dimension four.

The paper is organized as follows. In Section 2, we recall useful facts about lattices. In Section 3, we recall Gauss' algorithm and describe its natural greedy generalization. In Section 4, we analyze Gauss' algorithm and extend the analysis to dimensions three and four, using geometrical results. We explain why our analysis breaks down in dimension five. In Section 5, we prove geometrical results on low-dimensional lattices which are useful to prove the so-called gap lemmata, an essential ingredient of the complexity analysis of Section 4.

Important remark: Due to lack of space, this extended abstract contains few proofs, and we only show that the algorithm is polynomial-time. The proof of the quadratic complexity will appear in the full version of the paper.

Notations: ‖·‖ and ⟨·,·⟩ denote respectively the Euclidean norm and inner product of R^n; variables in bold are vectors; whenever the notation [b_1, ..., b_d] is used, we have ‖b_1‖ ≤ ... ≤ ‖b_d‖ and in that case we say that the b_i's are ordered. Besides, the complexity model we use is the RAM model, and the computational cost is measured in elementary operations on bits.
In any complexity statement, we assume that the underlying lattice L is integral (L ⊆ Z^n). If x ∈ R, then ⌊x⌉ denotes a nearest integer to x.

2 Preliminaries

We assume the reader is familiar with the geometry of numbers (see [4, 6, 14, 25]).

Gram-Schmidt orthogonalization. Let b_1, ..., b_d be vectors. The Gram matrix G(b_1, ..., b_d) of b_1, ..., b_d is the matrix (⟨b_i, b_j⟩)_{1 ≤ i,j ≤ d} formed by all the inner products. b_1, ..., b_d are linearly independent if and only if the determinant of G(b_1, ..., b_d) is not zero. The volume vol(L) of a lattice L is the square root of the determinant of the Gram matrix of any basis of L. The orthogonality-defect of a basis (b_1, ..., b_d) of L is defined as (Π_{i=1}^d ‖b_i‖)/vol(L): it is always at least 1, with equality if and only if the basis is orthogonal. Let (b_1, ..., b_d) be linearly independent vectors. The Gram-Schmidt orthogonalization (b*_1, ..., b*_d) is defined as follows: b*_i is the component of b_i orthogonal to the subspace spanned by b_1, ..., b_{i−1}.

Successive minima and Minkowski reduction. Let L be a d-dimensional lattice in R^n. For 1 ≤ i ≤ d, the i-th minimum λ_i(L) is the radius of the smallest closed ball centered at the origin containing at least i linearly independent lattice vectors. The most famous lattice problem is the shortest vector problem (SVP): given a basis of a lattice L, find a lattice vector of norm λ_1(L). There always exist linearly independent lattice vectors v_i's such that ‖v_i‖ = λ_i(L) for all i. Surprisingly, as soon as d ≥ 4, such vectors do not necessarily form a lattice basis, and when d ≥ 5, there may not even exist a lattice basis reaching all the minima. A basis [b_1, ..., b_d] of L is Minkowski-reduced if for all 1 ≤ i ≤ d, b_i has minimal norm among all lattice vectors v such that [b_1, ..., b_{i−1}, v] can be extended to a basis of L. Surprisingly, up to dimension six, one can easily decide if a given basis is Minkowski-reduced or not, by checking a small number of explicit norm inequalities, known as Minkowski's conditions. A basis reaching all the minima must be Minkowski-reduced, but a Minkowski-reduced basis may not reach all the minima, except the first four ones (see [29]): if [b_1, ..., b_d] is a Minkowski-reduced basis of L, then for all 1 ≤ i ≤ min(d, 4), ‖b_i‖ = λ_i(L). Thus, a Minkowski-reduced basis is optimal in a natural sense up to dimension four. A classical result (see [29]) states that the orthogonality-defect of a Minkowski-reduced basis can be upper-bounded by a constant which only depends on the lattice dimension.

Voronoï cell and Voronoï vectors. The Voronoï cell [30] of L = L[b_1, ..., b_d], denoted by Vor(b_1, ..., b_d), is the set of vectors x in the linear span of L which are closer to 0 than to any other lattice vector: for all v ∈ L, ‖x − v‖ ≥ ‖x‖, that is ‖v‖² ≥ 2⟨v, x⟩. The Voronoï cell is a finite polytope which tiles the linear span of L by translations by lattice vectors. We extend the notation Vor(b_1, ..., b_d) to the case where the first vectors may be zero (the remaining vectors being linearly independent): Vor(b_1, ..., b_d) then denotes the Voronoï cell of the lattice spanned by the non-zero b_i's.
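The definitions above translate directly into code. The following Python sketch (not part of the paper) computes the Gram matrix, the Gram-Schmidt orthogonalization, and the orthogonality-defect; it uses the fact that vol(L) equals the product of the norms of the Gram-Schmidt vectors.

```python
from math import sqrt

def inner(a, b):
    """Inner product <a, b>."""
    return sum(x * y for x, y in zip(a, b))

def gram_matrix(basis):
    """Gram matrix G(b_1, ..., b_d) = (<b_i, b_j>)_{i,j}."""
    return [[inner(bi, bj) for bj in basis] for bi in basis]

def gram_schmidt(basis):
    """b*_i = component of b_i orthogonal to span(b_1, ..., b_{i-1})."""
    ortho = []
    for b in basis:
        b_star = list(b)
        for o in ortho:
            mu = inner(b, o) / inner(o, o)
            b_star = [x - mu * y for x, y in zip(b_star, o)]
        ortho.append(b_star)
    return ortho

def orthogonality_defect(basis):
    """(prod ||b_i||) / vol(L), with vol(L) = prod ||b*_i||; always >= 1."""
    num = 1.0
    for b in basis:
        num *= sqrt(inner(b, b))
    den = 1.0
    for b_star in gram_schmidt(basis):
        den *= sqrt(inner(b_star, b_star))
    return num / den
```

For instance, for the basis [(3,0), (1,2)] of a sublattice of Z², the volume is 6 and the orthogonality-defect is 3·√5/6 ≈ 1.118.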
A lattice vector v ∈ L is called a Voronoï vector if v/2 belongs to the Voronoï cell (in which case v/2 will be on the boundary of the Voronoï cell). v ∈ L is a strict Voronoï vector if v/2 is contained in the interior of a (d−1)-dimensional face of the Voronoï cell. A classical result states that Voronoï vectors correspond to the minima of the cosets of L/2L. We say that (x_1, ..., x_d) ∈ Z^d is a possible Voronoï coord if there exists a Minkowski-reduced basis [b_1, ..., b_d] such that x_1 b_1 + ... + x_d b_d is a Voronoï vector. In some parts of the article, we will deal with Voronoï coordinates with respect to other types of reduced bases: the kind of reduction considered will be clear from the context. The covering radius ρ(L) of a lattice L is half of the diameter of the Voronoï cell. The closest vector problem (CVP) is a non-homogeneous version of SVP: given a basis of a lattice and an arbitrary vector x of R^n, find a lattice vector v minimizing the distance ‖v − x‖; in other words, if y denotes the orthogonal projection of x over the linear span of L, find v ∈ L such that v − y belongs to the Voronoï cell of L.

3 A Greedy Generalization of Gauss' Algorithm

In dimension two, there is a simple and efficient lattice basis reduction algorithm due to Gauss. We view Gauss' algorithm as a greedy algorithm based on the one-dimensional CVP, which suggests a natural generalization to arbitrary dimension that we call the greedy reduction algorithm. We study properties of the bases output by the greedy algorithm by defining a new type of reduction and comparing it to Minkowski reduction.

3.1 Gauss' Algorithm

Gauss' algorithm, described in Figure 1, can be seen as a two-dimensional generalization of the centered Euclidean algorithm [28].

Input: A basis [u, v] with its Gram matrix.
Output: A reduced basis of L[u, v], together with its Gram matrix.
1. Repeat
2.   r := v − xu where x := ⌊⟨u, v⟩/‖u‖²⌉,
3.   v := u,
4.   u := r,
5.   Update the Gram matrix of (u, v),
6. Until ‖u‖ ≥ ‖v‖.
7. Return [v, u] and its Gram matrix.
Fig. 1. Gauss' algorithm.

At Step 2 of each loop iteration, u is shorter than v, and one would like to shorten v rather than u, while preserving the fact that [u, v] is a basis of L. This can be achieved by subtracting from v a multiple xu of u, because such a transformation is unimodular. The optimal choice is when xu is the closest vector to v in the one-dimensional lattice spanned by u, which gives rise to x := ⌊⟨u, v⟩/‖u‖²⌉. The values ⟨u, v⟩ and ‖u‖² are extracted from G(u, v), which is updated at Step 5 of each loop iteration. The complexity of Gauss' algorithm is given by the following classical result:

Theorem 1. Given as input a basis [u, v] of a lattice L, Gauss' algorithm outputs a Minkowski-reduced basis of L in time O(log‖v‖ · [1 + log‖v‖ − log λ_1(L)]).

Note that this result is not trivial to prove. It is not even clear a priori why Gauss' algorithm outputs a Minkowski-reduced basis.

3.2 The Greedy Reduction Algorithm

Gauss' algorithm suggests the general greedy algorithm described in Figure 2, which uses reduction and closest vectors in dimension d−1 to reduce bases in dimension d. We make a few remarks on the description of the algorithm. If the Gram matrix is not given, we may compute it in time O(log² ‖b_d‖) for fixed d. The algorithm updates the Gram matrix each time the basis changes.
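A minimal sketch of the loop of Figure 1 follows; unlike the paper's version, it operates on the vectors themselves rather than maintaining the Gram matrix, which is enough to illustrate the control flow.

```python
def gauss_reduce(u, v):
    """Gauss' algorithm (Fig. 1) on a two-dimensional lattice basis [u, v]
    of nonzero integer (or float) vectors; returns an ordered reduced basis."""
    def n2(w):
        return sum(x * x for x in w)
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    if n2(u) > n2(v):
        u, v = v, u                            # ensure ||u|| <= ||v||
    while True:
        x = round(dot(u, v) / n2(u))           # x := nearest integer to <u,v>/||u||^2
        r = [a - x * b for a, b in zip(v, u)]  # r := v - x u
        v, u = u, r                            # v := u, u := r
        if n2(u) >= n2(v):                     # until ||u|| >= ||v||
            return v, u                        # return the ordered basis [v, u]
```

For example, the basis [(4,1), (7,2)] spans Z² (its determinant is 1), and the algorithm recovers a pair of unit vectors from it.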

Name: Greedy(b_1, ..., b_d).
Input: A basis [b_1, ..., b_d] together with its Gram matrix.
Output: An ordered greedy-reduced basis of L[b_1, ..., b_d] with its Gram matrix.
1. If d = 1, return b_1.
2. Repeat
3.   Order (b_1, ..., b_d) by increasing lengths and update the Gram matrix,
4.   [b_1, ..., b_{d−1}] := Greedy(b_1, ..., b_{d−1}),
5.   Compute a vector c closest to b_d in L[b_1, ..., b_{d−1}],
6.   b_d := b_d − c and update the Gram matrix,
7. Until ‖b_d‖ ≥ ‖b_{d−1}‖.
8. Return [b_1, ..., b_d] and its Gram matrix.
Fig. 2. The greedy lattice basis reduction algorithm in dimension d.

Step 3 is easy: if this is the first iteration of the loop, the basis is already ordered; otherwise, [b_1, ..., b_{d−1}] is already ordered, and only b_d has to be inserted among b_1, ..., b_{d−1}. At Step 4, the greedy algorithm calls itself recursively in dimension d−1: G(b_1, ..., b_{d−1}) does not need to be computed before calling the algorithm, since G(b_1, ..., b_d) is already known. At this point, we do not explain how Step 5 (the computation of closest vectors) is performed: this issue is postponed to Subsection 3.4. Note that for d = 2, the greedy algorithm is exactly Gauss' algorithm. From a geometrical point of view, the goal of Steps 5 and 6 is to make sure that the orthogonal projection of b_d over the linear span of [b_1, ..., b_{d−1}] lies in the Voronoï cell of that lattice. An easy proof by induction on d shows that the algorithm terminates. Indeed, the new vector b_d of Step 6 is strictly shorter than b_{d−1} if the loop does not end at Step 7. Thus the product of the norms of the b_i's decreases strictly at each iteration of the loop which is not the last one. But for all B, the number of lattice vectors of norm less than B is finite, which completes the proof. Although the description of the greedy algorithm is fairly simple, analyzing its bit complexity seems very difficult. Even the two-dimensional case of the Gaussian algorithm is not trivial.

3.3 Greedy Reduction

In this subsection, we study properties of the bases output by the greedy algorithm.
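As an aside, the recursive structure of Figure 2 can be sketched as follows. This is a toy transcription, not the paper's implementation: it works on vectors instead of Gram matrices, and Step 5 is replaced by a brute-force CVP over bounded coefficients (the `bound` parameter is a heuristic that suffices for small reduced bases, standing in for the efficient procedure of Theorem 5).

```python
from itertools import product

def n2(w):
    """Squared Euclidean norm."""
    return sum(x * x for x in w)

def closest_vector(basis, t, bound=3):
    """Brute-force CVP in L[basis]: stand-in for Step 5 of Fig. 2."""
    dim = len(t)
    best = None
    for ks in product(range(-bound, bound + 1), repeat=len(basis)):
        c = [sum(k * b[i] for k, b in zip(ks, basis)) for i in range(dim)]
        dist = n2([a - b for a, b in zip(t, c)])
        if best is None or dist < best[0]:
            best = (dist, c)
    return best[1]

def greedy_reduce(basis):
    """The greedy algorithm of Fig. 2, on vectors instead of Gram matrices."""
    if len(basis) == 1:
        return basis
    while True:
        basis = sorted(basis, key=n2)                # Step 3: order by length
        head = greedy_reduce(basis[:-1])             # Step 4: recursive call
        c = closest_vector(head, basis[-1])          # Step 5
        b_d = [x - y for x, y in zip(basis[-1], c)]  # Step 6: b_d := b_d - c
        basis = head + [b_d]
        if n2(basis[-1]) >= n2(basis[-2]):           # Step 7: exit test
            return basis
```

On the three-dimensional basis [(1,1,5), (0,1,0), (1,0,0)], for instance, the last vector is shortened to (0,0,5) in a single pass.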
As previously mentioned, it is not clear why Gauss' algorithm outputs a Minkowski-reduced basis. But it is obvious that the output basis [u, v] satisfies: ‖u‖ ≤ ‖v‖ ≤ ‖v − xu‖ for all x ∈ Z. This suggests the following definition:

Definition 2. An ordered basis [b_1, ..., b_d] is greedy-reduced if for all 2 ≤ i ≤ d and for all x_1, ..., x_{i−1} ∈ Z: ‖b_i‖ ≤ ‖b_i + x_1 b_1 + ... + x_{i−1} b_{i−1}‖.

In other words, we have the following recursive definition: a one-dimensional basis is always greedy-reduced, and an ordered basis [b_1, ..., b_d] is greedy-reduced if and only if [b_1, ..., b_{d−1}] is greedy-reduced and the projection of b_d over the linear span of b_1, ..., b_{d−1} lies in the Voronoï cell Vor(b_1, ..., b_{d−1}). The greedy algorithm outputs a greedy-reduced basis, and if the input basis is greedy-reduced, the output basis will be equal to the input basis. The fact that Gauss' algorithm outputs Minkowski-reduced bases is a particular case of the following result, which compares greedy-reduction with Minkowski reduction:

Lemma 3. The following statements hold:
1. Any Minkowski-reduced basis is greedy-reduced.
2. A basis of d ≤ 4 vectors is Minkowski-reduced if and only if it is greedy-reduced.
3. If d ≥ 5, there exists a basis of d vectors which is greedy-reduced, but not Minkowski-reduced.

As a consequence, the greedy algorithm outputs a Minkowski-reduced basis up to dimension four, thus reaching all the successive minima of the lattice; but beyond dimension four, the greedy algorithm outputs a greedy-reduced basis which may not be Minkowski-reduced. The following lemma shows that greedy-reduced bases may considerably differ from Minkowski-reduced bases beyond dimension four:

Lemma 4. Let d ≥ 5. For all ε > 0, there exist a lattice L and a greedy-reduced basis [b_1, ..., b_d] of L such that: λ_1(L)/‖b_1‖ ≤ ε and vol(L)/Π_{i=1}^d ‖b_i‖ ≤ ε.

Such properties do not hold for Minkowski-reduced bases. The first phenomenon shows that greedy-reduced bases may be arbitrarily far from the first minimum, while the second one shows that a greedy-reduced basis may be far from being orthogonal.

3.4 Computing Closest Vectors From Minkowski-Reduced Bases

We now explain how Step 5 of the greedy algorithm can be implemented efficiently up to d = 5. Step 5 is trivial only when d ≤ 2. Otherwise, note that after Step 4, the (d−1)-dimensional basis [b_1, ..., b_{d−1}] is greedy-reduced, and therefore Minkowski-reduced as long as d ≤ 5. And we know the Gram matrix of [b_1, ..., b_{d−1}, b_d].

Theorem 5. Let d ≥ 1 be an integer.
There exists an algorithm which, given as input a Minkowski-reduced basis [b_1, ..., b_{d−1}], a target vector t longer than all the b_i's, and the Gram matrix of [b_1, ..., b_{d−1}, t], outputs a closest lattice vector c to t (in the lattice spanned by the b_i's), and the Gram matrix of (b_1, ..., b_{d−1}, t − c), in time O(log‖t‖ · [1 + log‖t‖ − log‖b_α‖]), where 1 ≤ α ≤ d is any integer such that [b_1, ..., b_{α−1}, t] is Minkowski-reduced.

Intuitively, the algorithm works as follows: an approximation of the coordinates (with respect to the b_i's) of the closest vector is computed using linear algebra, and the approximation is then corrected by a suitable exhaustive search.
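This round-then-correct strategy can be sketched in a few lines of Python. The code below is an illustration under simplifying assumptions, not the paper's exact procedure: it solves the Gram system in floating point by Gaussian elimination, rounds the real coordinates, and corrects by a small exhaustive search; the `radius` parameter plays the role of the O(1) search, and radius 1 is enough only for sufficiently reduced bases.

```python
from itertools import product

def cvp_round_and_search(basis, t, radius=1):
    """Approximate the real coordinates y of the projection of t by solving
    G y = (<b_i, t>), then search exhaustively around round(y_i)."""
    d, dim = len(basis), len(t)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Augmented system [G | <b_i, t>], solved by Gauss-Jordan elimination.
    M = [[dot(bi, bj) for bj in basis] + [dot(bi, t)] for bi in basis]
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(d):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    y = [M[i][d] / M[i][i] for i in range(d)]
    # Exhaustive correction around the rounded coordinates.
    best = None
    for deltas in product(range(-radius, radius + 1), repeat=d):
        ks = [round(yi) + dl for yi, dl in zip(y, deltas)]
        c = [sum(k * b[i] for k, b in zip(ks, basis)) for i in range(dim)]
        dist = sum((a - b) ** 2 for a, b in zip(t, c))
        if best is None or dist < best[0]:
            best = (dist, c)
    return best[1]
```

For the orthogonal basis [(2,0), (0,3)] and target (5,4), the real coordinates are (2.5, 4/3) and the search returns a lattice vector at squared distance 2 from the target.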

Let h be the orthogonal projection of t over the linear span of b_1, ..., b_{d−1}. There exist y_1, ..., y_{d−1} ∈ R such that h = Σ_{i=1}^{d−1} y_i b_i. If c = Σ_{i=1}^{d−1} x_i b_i is a closest vector to t, then h − c belongs to Vor(b_1, ..., b_{d−1}). Moreover, for any C > 0, the coordinates (with respect to any basis of orthogonality-defect at most C) of any point inside the Voronoï cell can be bounded independently of the lattice (see [26]). It follows that if we know an approximation of the y_i's with sufficient precision, then c can be derived from a O(1) exhaustive search, since the coordinates y_i − x_i of h − c are bounded, and so is the orthogonality-defect of a Minkowski-reduced basis. To approximate the y_i's, we use linear algebra. Let G = G(b_1, ..., b_{d−1}) and H = (⟨b_i, b_j⟩/‖b_i‖²)_{1 ≤ i,j ≤ d−1}. We have:

G · (y_1, ..., y_{d−1})^t = (⟨b_1, t⟩, ..., ⟨b_{d−1}, t⟩)^t, and therefore (y_1, ..., y_{d−1})^t = H^{−1} · (⟨b_1, t⟩/‖b_1‖², ..., ⟨b_{d−1}, t⟩/‖b_{d−1}‖²)^t.

We use the latter formula to compute the y_i's with a one-bit accuracy, in the expected time. Let r = max_i log(|⟨b_i, t⟩|/‖b_i‖²). Notice that r = O(1 + log‖t‖ − log‖b_α‖) by bounding |⟨b_i, t⟩| depending on whether i ≥ α. Notice also that the entries of H are all at most 1 in absolute value (because [b_1, ..., b_{d−1}] is Minkowski-reduced), and det(H) is lower-bounded by some universal constant (because the orthogonality-defect of [b_1, ..., b_{d−1}] is bounded). It follows that one can compute the entries of H^{−1} with a Ω(r)-bit accuracy, in O(r²) binary operations. One eventually derives the y_i's with a one-bit accuracy.

4 Complexity Analysis of the Greedy Algorithm

4.1 A Geometric Analysis of Gauss' Algorithm

We provide yet another proof of the classical result that Gauss' algorithm has quadratic complexity. Compared to other proofs, our method closely resembles the recent one of Semaev [24], itself relatively different from [12, 28, 10, 3]. The analysis is not optimal (as opposed to [28]), but its basic strategy can be extended up to dimension four. Consider the value of x at Step 2:

– If x = 0, this must be the last iteration of the loop.

– If |x| = 1, there are two cases: if ‖v − xu‖ ≥ ‖u‖, then this is the last loop iteration. Otherwise, the inequality can be rewritten as ‖u − xv‖ < ‖u‖, which means that u can be shortened with the help of v; this can only happen if this is the first loop iteration, because of the greedy strategy.

– Otherwise |x| ≥ 2, which implies that xu is not a Voronoï vector of the lattice spanned by u. Intuitively, this means that xu is far away from Vor(u), so that v − xu is considerably shorter than v. More precisely, one can show that ‖v‖² ≥ ‖v − xu‖² + 2‖u‖², so that ‖v‖² > 3‖v − xu‖² if this is not the last loop iteration.

This shows that the product of the basis vectors' norms decreases by a factor at least √3 at every loop iteration except possibly the first and last ones. Thus, the number τ of loop iterations is bounded by: τ ≤ 2 + log_√3 ‖v‖ − log_√3 λ_1(L). It remains to estimate the cost of each Step 2, which is the cost of computing x. Because |⟨u, v⟩| ≤ ‖u‖·‖v‖, one can see that the bit complexity of Step 2 is O(log‖v‖ · [1 + log‖v‖ − log‖u‖]). If we denote by u_i and v_i the values of u and v at the i-th iteration, then v_{i+1} = u_i and we obtain that the bit complexity of Gauss' algorithm is bounded by:

O(Σ_{i=1}^τ log‖v_i‖ · [1 + log‖v_i‖ − log‖u_i‖]) = O(log‖v‖ · Σ_{i=1}^τ [1 + log‖v_i‖ − log‖v_{i+1}‖]) = O(log‖v‖ · [τ + log‖v‖ − log λ_1(L)]).

This completes the proof of Theorem 1.

4.2 A Geometric Analysis Up To Dimension Four

The main result of the paper is the following:

Theorem 6. Let 1 ≤ d ≤ 4. Given as input an ordered basis [b_1, ..., b_d], the greedy algorithm of Figure 2 based on the algorithm of Theorem 5 outputs a Minkowski-reduced basis of L[b_1, ..., b_d], using a number of bit operations bounded by O(log‖b_d‖ · [1 + log‖b_d‖ − log λ_1(L)]).

However, due to lack of space, we only prove the following weaker result in this extended abstract:

Theorem 7. Let 1 ≤ d ≤ 4. Given as input an ordered basis [b_1, ..., b_d], the greedy algorithm of Figure 2 based on the algorithm of Theorem 5 outputs a Minkowski-reduced basis of L[b_1, ..., b_d], using a number of bit operations bounded by a polynomial in log‖b_d‖.

The result asserts the polynomial-time complexity of the greedy algorithm, which is by far the hardest part of Theorem 6. Both theorems are proved iteratively: the case d = 4 is based on the case d = 3, which is itself based on the case d = 2. The analysis of Gauss' algorithm (Section 4.1) was based on the fact that if |x| ≥ 2, then xu is far away from the Voronoï cell of the lattice spanned by u. The proof of Theorem 7 relies on a similar phenomenon in dimensions two and three.
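The iteration bound for the two-dimensional case can be checked experimentally. The snippet below (a toy check, not from the paper) instruments the Gauss loop with a counter and compares it against the bound τ ≤ 2 + log_√3 ‖v‖ − log_√3 λ_1(L).

```python
import math

def gauss_with_count(u, v):
    """Gauss' algorithm instrumented to count loop iterations."""
    n2 = lambda w: sum(x * x for x in w)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    if n2(u) > n2(v):
        u, v = v, u
    count = 0
    while True:
        count += 1
        x = round(dot(u, v) / n2(u))
        v, u = u, [a - x * b for a, b in zip(v, u)]
        if n2(u) >= n2(v):
            return (v, u), count

def iteration_bound(v_norm, lambda1):
    """tau <= 2 + log_sqrt(3)(||v|| / lambda_1(L))."""
    return 2 + (math.log(v_norm) - math.log(lambda1)) / math.log(math.sqrt(3))
```

On the Fibonacci-like input [(34,21), (55,34)] (a basis of Z², so λ_1 = 1), the loop runs a handful of iterations, comfortably within the bound.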
However, the situation is considerably more complex, as the following basic remarks hint:

– For d = 2, we considered the value of x, but if d ≥ 3, there will be several coefficients instead of a single x, and it is not clear which coefficient will be useful in the analysis.

– For d = 2, Step 4 cannot change the basis, as there are only two bases in dimension one. If d ≥ 3, Step 4 may completely change the vectors, and it could be hard to keep track of what is going on.

In order to prove Theorem 7, we introduce a few notations. Consider the i-th loop iteration. Let [a^i_1, ..., a^i_d] denote the basis [b_1, ..., b_d] at the beginning of the i-th loop iteration. The basis [a^i_1, ..., a^i_d] becomes [b^i_1, ..., b^i_{d−1}, a^i_d] with ‖b^i_1‖ ≤ ... ≤ ‖b^i_{d−1}‖ after Step 4, and (b^i_1, ..., b^i_d) after Step 6, where b^i_d = a^i_d − c^i and c^i is the closest vector c found at Step 5. Let p_i be the number of integers 1 ≤ j ≤ d such that ‖b^i_j‖ ≤ ‖b^i_d‖. Let π_i be the rank of b^i_d once (b^i_1, ..., b^i_d) is sorted by length: for example, π_i = 1 if ‖b^i_d‖ < ‖b^i_1‖. Clearly, 1 ≤ π_i ≤ p_i; if p_i = d then the loop terminates, and otherwise ‖a^{i+1}_{π_i}‖ = ‖a^{i+1}_{p_i}‖. Note that π_i may not be equal to p_i because there may be several choices when sorting the vectors by length in case of equalities. Now consider the (i+1)-th loop iteration for some i ≥ 1. Recall that by definition of π_i, we have a^{i+1}_{π_i} = b^i_d = a^i_d − c^i, while [a^{i+1}_j]_{j ≠ π_i} = [b^i_j]_{1 ≤ j ≤ d−1}. The closest vector c^{i+1} belongs to L[b^{i+1}_1, ..., b^{i+1}_{d−1}] = L[a^{i+1}_1, ..., a^{i+1}_{d−1}]: there exist integers x^{i+1}_1, ..., x^{i+1}_{d−1} such that c^{i+1} = Σ_{j=1}^{d−1} x^{i+1}_j a^{i+1}_j. Suppose we know that Theorem 7 is correct in dimension d−1 with 2 ≤ d ≤ 4; we are to prove that it is still valid in dimension d. Because of this induction hypothesis and of Theorem 5, the number of bit operations performed in the i-th loop iteration is bounded by a polynomial in log‖a^i_d‖. Since ‖a^i_d‖ ≤ ‖a^1_d‖ for any i, it is sufficient to prove that the number of loop iterations is bounded by a polynomial in log‖a^1_d‖. Indeed, we show that there exists a universal constant C > 1 such that for any execution of the d-dimensional greedy algorithm, in any d consecutive iterations of the loop, the product of the lengths of the current vectors decreases by a factor at least C:

(‖a^i_1‖ ··· ‖a^i_d‖) / (‖a^{i+d}_1‖ ··· ‖a^{i+d}_d‖) ≥ C.   (1)

This automatically ensures that the number of loop iterations is at most proportional to log‖a^1_d‖, and that the total number of bit operations is bounded by a polynomial in log‖a^1_d‖. We deal with the first difficulty mentioned above: which coefficient will be useful? The trick is to consider the value of x^{i+1}_{π_i}, that is, the coefficient of a^{i+1}_{π_i} = a^i_d − c^i in c^{i+1}, and to use the greedy properties of the algorithm.

Lemma 8. Among d consecutive iterations of the loop of the greedy algorithm of Figure 2, there is at least one iteration of index i+1 such that p_{i+1} ≤ p_i. Moreover, for such a loop iteration, we have |x^{i+1}_{π_i}| ≥ 2.

Proof. The first statement is obvious. Consider one such loop iteration i+1. Suppose we have a small x^{i+1}_{π_i}, that is x^{i+1}_{π_i} = 0 or |x^{i+1}_{π_i}| = 1.

If x^{i+1}_{π_i} = 0, then c^{i+1} ∈ L[a^{i+1}_j]_{j ≠ π_i} = L[b^i_1, ..., b^i_{d−1}]. We claim that the (i+1)-th iteration must be the last one. Because the i-th loop iteration was not terminal, we have a^{i+1}_d = b^i_{d−1}. Moreover, [b^i_1, ..., b^i_{d−1}] is greedy-reduced because of Step 4 of the i-th loop iteration. These two facts imply that c^{i+1} must be zero, and the (i+1)-th loop iteration is the last one.

If |x^{i+1}_{π_i}| = 1, we claim that p_{i+1} > p_i. We have c^{i+1} = Σ_{j=1}^{d−1} x^{i+1}_j a^{i+1}_j where a^{i+1}_{π_i} = a^i_d − c^i and [a^{i+1}_j]_{j ≠ π_i} = [b^i_j]_{1 ≤ j ≤ d−1}. Thus, c^{i+1} can be written as c^{i+1} = ±(a^i_d − c^i) + e where e ∈ L[b^i_1, ..., b^i_{d−1}]. Therefore a^{i+1}_d − c^{i+1} = b^i_{d−1} ∓ (a^i_d − c^i) − e. In other words, a^{i+1}_d − c^{i+1} = ±a^i_d − f for some f ∈ L[b^i_1, ..., b^i_{d−1}]. It follows that p_{i+1} > p_i, which achieves the claim. □

We will see that in dimension three, any such loop iteration i+1 implies that at least one of the basis vectors significantly decreases in the (i+1)-th loop iteration, or had significantly decreased in the i-th loop iteration. This is only almost true in dimension four: fortunately, we will be able to isolate the bad cases, and to show that when a bad case occurs, the number of remaining loop iterations can be bounded by some universal constant.

We now deal with the second difficulty. Recall that c^{i+1} = Σ_{j=1}^{d−1} x^{i+1}_j a^{i+1}_j but the basis [a^{i+1}_1, ..., a^{i+1}_{d−1}] is not necessarily greedy-reduced. We distinguish two cases:

1) The basis [a^{i+1}_1, ..., a^{i+1}_{d−1}] is somehow far from being greedy-reduced. Then b^i_d was significantly shorter than a^i_d. Note that this length decrease concerns the i-th loop iteration and not the (i+1)-th.

2) Otherwise, the basis [a^{i+1}_1, ..., a^{i+1}_{d−1}] is almost greedy-reduced. The fact that |x^{i+1}_{π_i}| ≥ 2 roughly implies that c^{i+1} is somewhat far away from the Voronoï cell Vor(a^{i+1}_1, ..., a^{i+1}_{d−1}): this phenomenon will be precisely captured by the so-called Gap Lemma. When this is the case, the new vector b^{i+1}_d will be significantly shorter than a^{i+1}_d.

To capture the property that a set of vectors is almost greedy-reduced, we introduce the so-called ε-reduction, where ε ≥ 0, which is defined as follows:

Definition 9. A single vector b_1 is always ε-reduced; for d ≥ 2, a d-tuple (b_1, ..., b_d) is ε-reduced if (b_1, ..., b_{d−1}) is ε-reduced, ‖b_{d−1}‖ ≤ ‖b_d‖, and the orthogonal projection of b_d over the linear span of (b_1, ..., b_{d−1}) belongs to (1+ε)·Vor(b_1, ..., b_{d−1}).

With this definition, a greedy-reduced basis is ε-reduced for any ε ≥ 0. In the definition of ε-reduction, we did not assume that the b_i's were nonzero nor linearly independent. This is because the Gap Lemma is essentially based on compactness properties: the set of ε-reduced d-tuples needs to be closed (from a topological point of view), while a limit of bases may not be a basis. We can now give the precise statements of the two cases described above. Lemma 10 corresponds to case 1), and Lemma 11 to case 2).

Lemma 10. Let 2 ≤ d ≤ 4. There exists a constant ε_1 > 0 such that for any ε ≤ ε_1 there exists C_ε > 1 such that the following statement holds. Consider the (i+1)-th loop iteration of an execution of the d-dimensional greedy algorithm. If [a^{i+1}_1, ..., a^{i+1}_{d−1}] is not ε-reduced, then ‖a^i_d‖ ≥ C_ε · ‖b^i_d‖.

Lemma 11. Let 2 ≤ d ≤ 4. There exist two constants ε_2 > 0 and D > 0 such that the following statement holds. Consider the (i+1)-th loop iteration of an execution of the d-dimensional greedy algorithm. Suppose that [a^{i+1}_1, ..., a^{i+1}_{d−1}] is ε_2-reduced, and that ‖a^{i+1}_k‖ ≥ (1 − ε_2)‖a^{i+1}_d‖ for some 1 ≤ k ≤ d−1. Then, if |x^{i+1}_k| ≥ 2 and if we are not in the 211-case, we have:

‖b^{i+1}_d‖² + D·‖b^{i+1}_k‖² ≤ ‖a^{i+1}_d‖²,

where the 211-case is: d = 4, |x^{i+1}_k| = 2 and the other |x^{i+1}_j|'s are all equal to 1.

This last lemma is a direct consequence of Pythagoras' theorem and the Gap Lemma (which is crucial to our analysis, and to which the next section is devoted):

Theorem 12 (Gap Lemma). Let 2 ≤ d ≤ 4. There exist two constants ε > 0 and D > 0 such that the following statement holds. Let [b_1, ..., b_{d−1}] be ε-reduced vectors, u be a vector of Vor(b_1, ..., b_{d−1}) and x_1, ..., x_{d−1} be integers. If ‖b_k‖ ≥ (1−ε)‖b_{d−1}‖ for some k, then:

‖u‖² + D·‖b_k‖² ≤ ‖u + Σ_{j=1}^{d−1} x_j b_j‖²,

where |x_k| ≥ 2, and if d = 4 the other |x_j|'s are not all equal to 1.

This completes the overall description of the proof of Theorem 7. Indeed, choose three constants ε, D > 0 and C > 1 such that we can apply Lemmata 10 and 11. We prove that Equation (1) holds for C = min(C_ε, 1 + D, 1/(1−ε)) > 1. Consider a loop iteration i+1 such that p_{i+1} ≤ p_i. Recall that among any d consecutive iterations of the loop, there is at least one such iteration. For such an iteration, we have |x^{i+1}_{π_i}| ≥ 2. We distinguish four cases:

– [a^{i+1}_1, ..., a^{i+1}_{d−1}] is not ε-reduced: then Lemma 10 gives the result through the i-th loop iteration.

– ‖a^{i+1}_{π_i}‖ < (1−ε)‖a^{i+1}_d‖: because p_{i+1} ≤ p_i, we have the inequalities ‖b^{i+1}_d‖ ≤ ‖a^{i+1}_{p_i}‖ = ‖a^{i+1}_{π_i}‖ < (1−ε)‖a^{i+1}_d‖.

– We are in the 211-case, i.e. d = 4 with |x^{i+1}_{π_i}| = 2 and the other |x^{i+1}_j|'s all equal to 1: we refer to the detailed analysis of Subsection 4.3.

– Otherwise, we apply Lemma 11, which gives the expected result through the (i+1)-th loop iteration.

We have described our strategy to prove that the greedy algorithm is polynomial-time up to dimension four. One can further prove that the bit complexity is in fact quadratic, by carefully assessing the costs of each loop iteration and combining them, but the proof is much more technical than in the two-dimensional case.

13 Concluing in Dimension Four In the previous subsections, we showe that the greey algorithm is polynomialtime in imensions two an three, but we notice that a new ifficulty arose in imension four: the Gap Lemma is useless in the so-calle 211-case. This is because there are three-imensional Minkowski-reuce bases [b 1, b 2, b 3 ] for which 2b i + s 1 b j + s 2 b k with {i, j, k} = {1, 2, 3} an s 1 = s 2 = 1 is a Voronoï vector. Inee consier the lattice spanne by the columns b 1, b 2, b 3 of the following matrix: M = This basis is Minkowski-reuce an b 1 + b 2 + 2b 3 = b 1 + b 2 (2k 1 + 1)b 1 + (2k 2 + 1)b 1 + 2k 3 b 3 for any k 1, k 2, k 3 Z. Therefore, a vector in the Voronoï cell centere in b 1 + b 2 + 2b 3 can avoi being significantly shortene when translate insie the Voronoï cell centere in 0. The Gap Lemma cannot tackle this problem. However, we note that (1, 1, 2) is rarely a Voronoï coorinate (with respect to a Minkowski-reuce basis), an when it is, it cannot be a strict Voronoï coor: we can prove that if (1, 1, 2) is a Voronoï coor, then b 1 +b 2 = b 1 +b 2 +2b 3, which tells us that b 1 +b 2 +2b 3 is not the only vector in its coset of L/2L reaching the minimum. It turns out that the lattice spanne by the columns of M is essentially the only one for which (1, 1, 2) moulo any change of sign an permutation of coorinates can be a Voronoï coor. More precisely, if (1, 1, 2) moulo any change of sign an permutation of coorinates is a Voronoï coor for a lattice basis, then the basis matrix can be written as rum where r is any non-zero real number an U is any orthogonal matrix. Since a basis can be arbitrarily close to one of these without being one of them, we nee to consier a small compact set of normalize bases aroun the annoying ones. 
More precisely, this subset is:

{[b_1, b_2, b_3] ε-reduced / ∃σ ∈ S_3, ‖ (1/‖b_3‖²)·G(b_σ(1), b_σ(2), b_σ(3)) − M^t M ‖ ≤ ε},

for some sufficiently small ε > 0, where ‖M′‖ denotes the maximum of the absolute values of the entries of a matrix M′, and |M′| the matrix of the absolute values of its entries. Now, suppose that we are in the 211-case at some loop iteration i + 1. We distinguish three cases:
– [a_1^{i+1}, a_2^{i+1}, a_3^{i+1}] is outside the compact set. In this case, a variant of the Gap Lemma (Lemma 29), proved in Section 5, is valid, and can be used to show that b_4^{i+1} is significantly shorter than a_4^{i+1}.
– [a_1^{i+1}, a_2^{i+1}, a_3^{i+1}] is inside the compact set, but a_4^{i+1} is far from the Voronoï cell Vor(a_1^{i+1}, a_2^{i+1}, a_3^{i+1}). In this case too, b_4^{i+1} is significantly shorter than a_4^{i+1}.
– Otherwise, the overall geometry of [a_1^{i+1}, a_2^{i+1}, a_3^{i+1}, a_4^{i+1}] is known very precisely, and we can show that at most O(1) loop iterations remain. More precisely, by using Lemma 29, we show that:

Lemma 13. There exist three constants C, ε > 0 and c ∈ Z such that the following holds. Consider an execution of the four-dimensional greedy algorithm, and a loop iteration i + 1 for which p_{i+1} ≤ p_i, [a_1^{i+1}, a_2^{i+1}, a_3^{i+1}] is ε-reduced, ‖a_{p_i}^{i+1}‖ ≥ (1 − ε)‖a_4^{i+1}‖, and (|x_σ(1)|, |x_σ(2)|, |x_σ(3)|) = (1, 1, 2) for some permutation σ of {1, 2, 3}. Then either ‖a_4^{i+1}‖ ≥ (1 + C)‖b_4^{i+1}‖, or:

‖ (1/‖a_4^{i+1}‖²)·G(a_σ(1)^{i+1}, a_σ(2)^{i+1}, a_σ(3)^{i+1}, a_4^{i+1}) − A ‖ ≤ ε,

with A a fixed 4 × 4 Gram matrix. In order to prove this result, we restrict more and more the possible geometry of [a_1^i, a_2^i, a_3^i, a_4^i]. Note that this critical geometry corresponds to the root lattice D_4. We treat this last case by applying the following lemma, which roughly says that if the Gram matrix of a basis is sufficiently close to some invertible matrix, then the number of short vectors generated by the basis remains bounded.

Lemma 14. Let A be an invertible matrix and B > 0. Then there exist ε, N > 0 such that for any basis (b_1, ..., b_d), if ‖G(b_1, ..., b_d) − A‖ ≤ ε, then:

#{(x_1, ..., x_d) ∈ Z^d / ‖x_1 b_1 + ... + x_d b_d‖ ≤ B} ≤ N.

4.4 Failure in Dimension Five

In this subsection, we explain why the analysis of the greedy algorithm breaks down in dimension five. First of all, the basis returned by the algorithm is not necessarily Minkowski-reduced, since greedy and Minkowski reductions differ in dimension five. Consider the lattice spanned by the columns of the following matrix, where 0 < ε < 1. This basis is clearly greedy-reduced, but the vector (0, 0, 0, 0, ε)^t belongs to the lattice. Moreover, for a small ε, this shows that a greedy-reduced basis can be arbitrarily far from the first minimum, and can have an arbitrarily large orthogonality defect: the Gram-Schmidt vector b*_5 is very small compared with b_5. For ε close to 1, this basis shows that the length decrease factor through one loop iteration of the five-dimensional greedy algorithm can be arbitrarily close to 1.
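To see the dimension-five failure concretely, here is a hypothetical basis in the same spirit (our illustration; the 1/2 pattern and the value of ε are our choices): b_i = e_i for i ≤ 4 and b_5 = (1/2, 1/2, 1/2, 1/2, ε). No translate of b_5 by the sublattice L(b_1, ..., b_4) is shorter than b_5 itself, yet the lattice contains a vector of length 2ε:

```python
import itertools, math

# Hypothetical greedy-reduced basis (our illustration): b_i = e_i for
# i <= 4, and b5 = (1/2, 1/2, 1/2, 1/2, eps) with a tiny eps.
eps = 0.01
B = [[1, 0, 0, 0, 0],
     [0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 0, 1, 0],
     [0.5, 0.5, 0.5, 0.5, eps]]

def vec(x):
    # x[0]*b1 + ... + x[4]*b5
    return [sum(x[i] * B[i][j] for i in range(5)) for j in range(5)]

def norm(v):
    return math.sqrt(sum(t * t for t in v))

# The greedy step cannot improve b5: no translate of b5 by the
# sublattice L(b1, ..., b4) is shorter than b5 itself.
best = min(norm(vec((x1, x2, x3, x4, 1)))
           for x1, x2, x3, x4 in itertools.product(range(-2, 3), repeat=4))
assert best >= norm(vec((0, 0, 0, 0, 1))) - 1e-12

# Yet 2*b5 - b1 - b2 - b3 - b4 is a lattice vector of length 2*eps,
# far below what the basis lengths suggest.
tiny = vec((-1, -1, -1, -1, 2))
print([round(t, 3) for t in tiny])
```

Shrinking eps makes the gap between the returned basis and the first minimum arbitrarily large, matching the discussion above.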
Nevertheless, the greedy algorithm is well-defined in dimension five (the four-dimensional greedy algorithm can be used for Step 4, and since it returns a Minkowski-reduced basis, the CVP algorithm of Theorem 5 can be used in Step 5). Despite the fact that the algorithm does not return a Minkowski-reduced basis, one may wonder whether the analysis remains valid, and whether the number of loop iterations of the 5-dimensional greedy algorithm is linear in log ‖b_5‖.

The analysis in dimensions two, three and four essentially relies on the fact that if one of the x_j's found at Step 5 has absolute value at least 2, then b_d^i is significantly shorter than a_d^i. This fact is derived from the so-called Gap Lemma. In dimension four, this was only partly true, but the exception (the 211-case) happened in very few cases and could be dealt with by considering the very specific shape of the lattices for which it could go wrong. Things worsen in dimension five. Indeed, for Minkowski-reduced bases, (1, 1, 1, 2) and (1, 1, 2, 2), modulo any change of signs and permutation of coordinates, are possible Voronoï coordinates. Here is an example of a lattice where (1, 1, 2, 2) is a Voronoï coordinate. The lattice basis given by the columns is Minkowski-reduced, but:

‖b_1 + b_2 + 2b_3 + 2b_4‖² = 2 = ‖b_1 + b_2‖² ≤ ‖(2k_1 + 1)b_1 + (2k_2 + 1)b_2 + 2k_3 b_3 + 2k_4 b_4‖²,

for any k_1, k_2, k_3, k_4 ∈ Z. Note that (1, 1, 2, 2) cannot be a strict Voronoï coordinate: if b_1 + b_2 + 2b_3 + 2b_4 reaches the length minimum of its coset of L/2L, then so does b_1 + b_2. Thus it might be possible to work around the difficulty coming from (1, 1, 2, 2) as in the previous subsection. However, the case (1, 1, 1, 2) would still remain, and this possible Voronoï coordinate can be strict.

5 The Geometry of Low-Dimensional Lattices

In this section, we give some results about Voronoï cells in dimensions two and three, which are crucial to the complexity analysis of the greedy algorithm described in Section 3. More precisely, the analysis is based on the Gap Lemma (Subsection 5.3), which is derived from the study of Voronoï cells in the case of ε-reduced vectors (Subsection 5.2), itself derived from the study of Voronoï cells for Minkowski-reduced bases (Subsection 5.1).

5.1 Voronoï Cells in the Case of Minkowski-Reduced Bases

We give simple bounds on the diameter of the Voronoï cell and on the Gram-Schmidt orthogonalization of a Minkowski-reduced basis:

Lemma 15. Let d ≥ 1. Let [b_1, ..., b_d] be a basis of a lattice L. Then ρ(L) ≤ (√d/2)·max_{1≤i≤d} ‖b_i‖.
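The covering-radius bound of Lemma 15 can be checked experimentally with Babai's nearest-plane procedure, which finds, for any target t, a lattice point within distance (1/2)·√(Σ_i ‖b*_i‖²) ≤ (√d/2)·max_i ‖b_i‖. The sketch below is our illustration, not the paper's proof:

```python
import math, random

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B."""
    Bs = []
    for b in B:
        v = list(b)
        for w in Bs:
            m = sum(x * y for x, y in zip(b, w)) / sum(x * x for x in w)
            v = [x - m * y for x, y in zip(v, w)]
        Bs.append(v)
    return Bs

def dist_to_lattice(B, t):
    """Babai's nearest-plane algorithm: returns the distance from t to
    the lattice point it finds, at most (1/2)*sqrt(sum ||b*_i||^2)."""
    Bs = gram_schmidt(B)
    r = list(t)
    for i in reversed(range(len(B))):
        c = round(sum(x * y for x, y in zip(r, Bs[i])) /
                  sum(x * x for x in Bs[i]))
        r = [x - c * y for x, y in zip(r, B[i])]
    return math.sqrt(sum(x * x for x in r))

# Empirical check of Lemma 15: rho(L) <= (sqrt(d)/2) * max_i ||b_i||,
# on random targets for a sample 3-dimensional basis.
random.seed(0)
B = [[3, 0, 0], [1, 3, 0], [1, 1, 3]]
bound = math.sqrt(3) / 2 * max(math.sqrt(sum(x * x for x in b)) for b in B)
for _ in range(100):
    t = [random.uniform(-10, 10) for _ in range(3)]
    assert dist_to_lattice(B, t) <= bound + 1e-9
print("bound", round(bound, 3), "holds on all samples")
```

The sample basis is an arbitrary choice; any basis satisfies the bound, since every target lies within the stated distance of the nearest-plane output.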
As a consequence, if d ≤ 4 and if [b_1, ..., b_d] is a Minkowski-reduced basis, then ‖b*_d‖ ≥ ‖b_d‖/2. The following lemma provides the possible Voronoï vectors of a two-dimensional lattice given by a Minkowski-reduced basis. Such a basis confines the coordinates of the Voronoï vectors:

Lemma 16. In dimension two, the possible Voronoï coordinates are (1, 0) and (1, 1), modulo any change of signs and permutation of coordinates, i.e., any non-zero (ε_1, ε_2) where |ε_1|, |ε_2| ≤ 1.

The proof relies on a detailed study of the expression ‖(2x_1 + ε_1)b_1 + (2x_2 + ε_2)b_2‖² − ‖ε_1 b_1 + ε_2 b_2‖², where [b_1, b_2] is Minkowski-reduced, ε_1, ε_2 ∈ {0, 1} and x_1, x_2 ∈ Z. Indeed, since the Voronoï coordinates of a lattice L are given by the minima of the non-zero cosets of L/2L, it suffices to show that this expression is strictly positive as soon as one of the coordinates 2x_1 + ε_1, 2x_2 + ε_2 has absolute value larger than 1. We do this by a rather technical study. We generalize this analysis to the three-dimensional case. The underlying ideas of the proof are the same, but because of the increasing number of variables, the analysis becomes more tedious.

Lemma 17. In dimension three, the possible Voronoï coordinates are (1, 0, 0), (1, 1, 0), (1, 1, 1) and (2, 1, 1), modulo any change of signs and permutation of coordinates.

The possible Voronoï coordinate (2, 1, 1) creates difficulties when analyzing the greedy algorithm in dimension four, because it contains a two, which cannot be handled with the greedy argument used for the ones. We tackle this problem as follows: we show that when (2, 1, 1) happens to be a Voronoï coordinate, the lattice has a very specific shape, for which the behavior of the algorithm is well understood.

Lemma 18. Suppose [b_1, b_2, b_3] is a Minkowski-reduced basis.
1. If any (s_1, s_2, 2) is a Voronoï coordinate, with s_i = ±1 for i ∈ {1, 2}, then ‖b_1‖ = ‖b_2‖ = ‖b_3‖, ⟨b_1, b_2⟩ = 0 and ⟨b_i, b_3⟩ = −s_i‖b_1‖²/2 for i = 1, 2.
2. If any (s_1, 2, s_3) is a Voronoï coordinate, with s_i = ±1 for i ∈ {1, 3}, then ‖b_1‖ = ‖b_2‖. Moreover, if ‖b_1‖ = ‖b_2‖ = ‖b_3‖, then ⟨b_1, b_3⟩ = 0 and ⟨b_i, b_2⟩ = −s_i‖b_1‖²/2 for i = 1, 3.
3. If any (2, s_2, s_3) is a Voronoï coordinate, with s_i = ±1 for i ∈ {2, 3}, and ‖b_1‖ = ‖b_2‖ = ‖b_3‖, then ⟨b_2, b_3⟩ = 0 and ⟨b_i, b_1⟩ = −s_i‖b_1‖²/2 for i = 2, 3.

5.2 Voronoï Cells in the Case of ε-Reduced Vectors

We extend the results of the previous subsection to the case of ε-reduced vectors.
The idea is that if we make the set of Minkowski-reduced bases compact and slightly enlarge it, the possible Voronoï coordinates remain the same. Unfortunately, by doing so, some of the vectors we consider may be zero, and this creates an infinity of possible Voronoï coordinates: for example, if b_1 = 0, any pair (x_1, 0) is a Voronoï coordinate of [b_1, b_2]. To tackle this problem, we restrict ourselves to b_i's with similar lengths. More precisely, we use the so-called Topological Lemma: if we can guarantee that the possible Voronoï coordinates of the enlargement of the initial compact set of bases are bounded, then for a sufficiently small enlargement, the possible Voronoï coordinates remain the same. We first give rather simple results on ε-reduced vectors and their Gram-Schmidt orthogonalization, then we introduce the Topological Lemma (Lemma 21), from which we finally derive the relaxed versions of Lemmata 16, 17 and 18.
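The characterization used in Subsection 5.1 — the Voronoï coordinates are read off the minima of the non-zero cosets of L/2L — is easy to explore by brute force in dimension two. The sketch below is ours (the search box size is an arbitrary choice, sufficient for reduced inputs):

```python
import itertools

def voronoi_coords_2d(b1, b2, R=4):
    """Search the possible Voronoi coordinates of the lattice spanned by
    b1, b2: they are the coordinates of the minima of the three non-zero
    cosets of L/2L.  R bounds the search box."""
    def norm2(x1, x2):
        v = (x1 * b1[0] + x2 * b2[0], x1 * b1[1] + x2 * b2[1])
        return v[0] ** 2 + v[1] ** 2
    coords = set()
    for e1, e2 in [(0, 1), (1, 0), (1, 1)]:       # non-zero cosets mod 2L
        pts = [(2 * x1 + e1, 2 * x2 + e2)
               for x1, x2 in itertools.product(range(-R, R + 1), repeat=2)]
        m = min(norm2(*p) for p in pts)
        coords |= {p for p in pts if norm2(*p) == m}
    return coords

# For a reduced basis, only (1, 0), (0, 1) and (1, 1) appear, up to
# signs -- in accordance with Lemma 16.
print(sorted(voronoi_coords_2d((2, 0), (1, 2))))
```

Running it on the basis (2, 0), (1, 2) returns only coordinates with entries in {−1, 0, 1}, as Lemma 16 predicts.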

Lemma 19. There exists a constant c > 0 such that for any sufficiently small ε > 0, if [b_1, b_2, b_3] are ε-reduced, then the following inequalities hold:

|⟨b_i, b_j⟩| ≤ (1 + cε)·‖b_i‖²/2 for any i < j,
|⟨b_3, ε_1 b_1 + ε_2 b_2⟩| ≤ (1 + cε)·‖ε_1 b_1 + ε_2 b_2‖²/2 for any ε_1, ε_2 ∈ {−1, 1}.

This result implies that if [b_1, ..., b_d] are ε-reduced, the only case in which the b_i's can be linearly dependent is when some of them are zero; this case cannot be avoided, since we need to make the set of Minkowski-reduced bases compact. The following lemma generalizes Lemma 15. It shows that even with ε-reduced vectors, if the dimension is at most four, then the Gram-Schmidt orthogonalization process cannot arbitrarily decrease the lengths of the initial vectors.

Lemma 20. There exists C > 0 such that for any 1 ≤ d ≤ 4 and any sufficiently small ε > 0, if [b_1, ..., b_d] are ε-reduced vectors, then we have ‖b*_d‖ ≥ C‖b_d‖.

The Topological Lemma is the key argument when extending the results on possible Voronoï coordinates from Minkowski-reduced bases to ε-reduced vectors. When applying it, X_0 will correspond to the x_i's, K_0 to the b_i's, X to the possible Voronoï coordinates, and f to the continuous function of real variables f : ((b_i)_i, (y_i)_i) ↦ ‖y_1 b_1 + ... + y_d b_d‖.

Lemma 21 (Topological Lemma). Let n, m ≥ 1. Let X_0 and K_0 be compact subsets of R^n and R^m. Let f be a continuous function from K_0 × X_0 to R. For any a ∈ K_0 we define M_a = {x ∈ X_0 ∩ Z^n / f(a, x) = min_{x′ ∈ X_0 ∩ Z^n} f(a, x′)}. Let K ⊆ K_0 be a compact set and X = ∪_{a ∈ K} M_a ⊆ X_0 ∩ Z^n. With these notations, there exists ε > 0 such that if b ∈ K_0 satisfies dist(b, K) ≤ ε, we have M_b ⊆ X.

In order to apply the Topological Lemma, we need to map the relaxed bases into a compact set. For any ε ≥ 0 and any α ∈ [0, 1], we define:

K_2(ε, α) = {(b_1, b_2) / b_1, b_2 ε-reduced, α ≤ ‖b_1‖ ≤ ‖b_2‖ = 1},
K_3(ε, α) = {(b_1, b_2, b_3) / b_1, b_2, b_3 ε-reduced, α ≤ ‖b_1‖ ≤ ‖b_2‖ ≤ ‖b_3‖ = 1}.

Lemma 22. If ε ≥ 0 and α ∈ [0, 1], then K_2(ε, α) and K_3(ε, α) are compact sets.

The following lemma is the relaxed version of Lemma 16. It can also be viewed as a reciprocal to Lemma 19.
Lemma 23. For any α ∈ ]0, 1] and any sufficiently small ε > 0, the possible Voronoï coordinates of [b_1, b_2] ∈ K_2(ε, α) are the same as for Minkowski-reduced bases, i.e., (1, 0) and (1, 1), modulo any change of signs and permutation of coordinates.

We now relax Lemma 17 in the same manner.

Lemma 24. For any α ∈ ]0, 1] and any sufficiently small ε > 0, the possible Voronoï coordinates of [b_1, b_2, b_3] ∈ K_3(ε, α) are the same as for Minkowski-reduced bases.

The following result generalizes Lemma 18 about the possible Voronoï coordinate (1, 1, 2). As opposed to the two previous results, there is no need to use the Topological Lemma in this case, because only a finite number of (x_1, x_2, x_3)'s is considered.

Lemma 25. There exists a constant c > 0 such that for any sufficiently small ε > 0, if [b_1, b_2, b_3] are ε-reduced and ‖b_3‖ = 1, then:
1. If any (s_1, s_2, 2) is a Voronoï coordinate, with s_i = ±1 for i ∈ {1, 2}, then: ‖b_1‖ ≥ 1 − c√ε, |⟨b_1, b_2⟩| ≤ c√ε, and |⟨b_i, b_3⟩ + s_i‖b_1‖²/2| ≤ c√ε for i = 1, 2.
2. If any (s_1, 2, s_3) is a Voronoï coordinate, with s_i = ±1 for i ∈ {1, 3}, then (1 − c√ε)‖b_2‖ ≤ ‖b_1‖ ≤ ‖b_2‖. Moreover, if ‖b_1‖ ≥ 1 − √ε, then: |⟨b_1, b_3⟩| ≤ c√ε and |⟨b_i, b_2⟩ + s_i‖b_1‖²/2| ≤ c√ε for i = 1, 3.
3. If any (2, s_2, s_3) is a Voronoï coordinate, with s_i = ±1 for i ∈ {2, 3}, and if ‖b_1‖ ≥ 1 − √ε, then: |⟨b_2, b_3⟩| ≤ c√ε and |⟨b_i, b_1⟩ + s_i‖b_1‖²/2| ≤ c√ε for i = 2, 3.

5.3 The Gap Lemma

The goal of this subsection is to prove that even with relaxed bases, if one adds a lattice vector with not-too-small coordinates to a vector of the Voronoï cell, this vector becomes significantly longer. This result will be used the other way round: if the x_i's found at Step 5 of the greedy algorithm are not too small, then b_d is significantly shorter than a_d. We first need to generalize the compact sets K_2 and K_3. For any ε ≥ 0 and any α ∈ [0, 1], we define:

K′_2(ε, α) = {(b_1, b_2, u) / (b_1, b_2) ∈ K_2(ε, α), u ∈ Vor(b_1, b_2)},
K′_3(ε, α) = {(b_1, b_2, b_3, u) / (b_1, b_2, b_3) ∈ K_3(ε, α), u ∈ Vor(b_1, b_2, b_3)}.

Lemma 26. If ε > 0 and α ∈ [0, 1], then K′_2(ε, α) and K′_3(ε, α) are compact sets.

The next result is the two-dimensional version of the Gap Lemma.

Lemma 27. There exist two constants ε, C > 0 such that for any ε-reduced vectors [b_1, b_2] and any u ∈ Vor(b_1, b_2), if at least one of the following conditions holds, then:

‖u + x_1 b_1 + x_2 b_2‖² ≥ ‖u‖² + C‖b_2‖².
(1) |x_2| ≥ 2; (2) |x_1| ≥ 2 and ‖b_1‖² ≥ ‖b_2‖²/2.

We now give the three-dimensional Gap Lemma, on which the analysis of the four-dimensional greedy algorithm relies.

Lemma 28. There exist two constants ε, C > 0 such that for any ε-reduced vectors [b_1, b_2, b_3] and any u ∈ Vor(b_1, b_2, b_3), if at least one of the following conditions holds, then:

‖u + x_1 b_1 + x_2 b_2 + x_3 b_3‖² ≥ ‖u‖² + C‖b_3‖².

(1) |x_3| ≥ 3, or |x_3| = 2 and (|x_1|, |x_2|) ≠ (1, 1);
(2) ‖b_2‖ ≥ ‖b_3‖/2 and: |x_2| ≥ 3, or |x_2| = 2 with (|x_1|, |x_3|) ≠ (1, 1);
(3) ‖b_1‖ ≥ ‖b_3‖/2 and: |x_1| ≥ 3, or |x_1| = 2 with (|x_2|, |x_3|) ≠ (1, 1).

Like in the previous subsections, we now consider the case of the possible Voronoï coordinates (±1, ±1, ±2), modulo any permutation of coordinates.

Lemma 29. There exist two constants ε, C > 0 such that for any ε-reduced vectors [b_1, b_2, b_3] and any u ∈ Vor(b_1, b_2, b_3), if at least one of the following conditions holds, then:

‖u + x_1 b_1 + x_2 b_2 + x_3 b_3‖² ≥ ‖u‖² + C‖b_3‖².

1. (x_1, x_2, x_3) = (s_1, s_2, 2), with |s_i| = 1 for i ∈ {1, 2}, and at least one of the following conditions holds: ‖b_1‖ ≤ (1 − ε)‖b_3‖, or ‖b_2‖ ≤ (1 − ε)‖b_3‖, or |⟨b_1, b_2⟩| ≥ ε‖b_3‖², or |⟨b_1, b_3⟩ + s_1‖b_1‖²/2| ≥ ε‖b_3‖², or |⟨b_2, b_3⟩ + s_2‖b_1‖²/2| ≥ ε‖b_3‖².
2a. (x_1, x_2, x_3) = (s_1, 2, s_3), with |s_i| = 1 for i ∈ {1, 3}, and ‖b_1‖ ≤ (1 − ε)‖b_3‖.
2b. (x_1, x_2, x_3) = (s_1, 2, s_3), with |s_i| = 1 for i ∈ {1, 3}, ‖b_1‖ ≥ (1 − ε)‖b_3‖, and at least one of the following conditions holds: |⟨b_1, b_3⟩| ≥ ε‖b_3‖², or |⟨b_1, b_2⟩ + s_1‖b_1‖²/2| ≥ ε‖b_3‖², or |⟨b_3, b_2⟩ + s_3‖b_1‖²/2| ≥ ε‖b_3‖².
3. (x_1, x_2, x_3) = (2, s_2, s_3), with |s_i| = 1 for i ∈ {2, 3}, ‖b_1‖ ≥ (1 − ε)‖b_3‖, and at least one of the following conditions holds: |⟨b_2, b_3⟩| ≥ ε‖b_3‖², or |⟨b_2, b_1⟩ + s_2‖b_1‖²/2| ≥ ε‖b_3‖², or |⟨b_3, b_1⟩ + s_3‖b_1‖²/2| ≥ ε‖b_3‖².

Acknowledgements. We thank Ali Akhavi, Florian Heß, Igor Semaev and Jacques Stern for helpful discussions and comments.

More information

The total derivative. Chapter Lagrangian and Eulerian approaches

The total derivative. Chapter Lagrangian and Eulerian approaches Chapter 5 The total erivative 51 Lagrangian an Eulerian approaches The representation of a flui through scalar or vector fiels means that each physical quantity uner consieration is escribe as a function

More information

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION

LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION The Annals of Statistics 1997, Vol. 25, No. 6, 2313 2327 LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION By Eva Riccomagno, 1 Rainer Schwabe 2 an Henry P. Wynn 1 University of Warwick, Technische

More information

Shortest Vector Problem (1982; Lenstra, Lenstra, Lovasz)

Shortest Vector Problem (1982; Lenstra, Lenstra, Lovasz) Shortest Vector Problem (1982; Lenstra, Lenstra, Lovasz) Daniele Micciancio, University of California at San Diego, www.cs.ucsd.edu/ daniele entry editor: Sanjeev Khanna INDEX TERMS: Point lattices. Algorithmic

More information

Resistant Polynomials and Stronger Lower Bounds for Depth-Three Arithmetical Formulas

Resistant Polynomials and Stronger Lower Bounds for Depth-Three Arithmetical Formulas Resistant Polynomials an Stronger Lower Bouns for Depth-Three Arithmetical Formulas Maurice J. Jansen University at Buffalo Kenneth W.Regan University at Buffalo Abstract We erive quaratic lower bouns

More information

Survey Sampling. 1 Design-based Inference. Kosuke Imai Department of Politics, Princeton University. February 19, 2013

Survey Sampling. 1 Design-based Inference. Kosuke Imai Department of Politics, Princeton University. February 19, 2013 Survey Sampling Kosuke Imai Department of Politics, Princeton University February 19, 2013 Survey sampling is one of the most commonly use ata collection methos for social scientists. We begin by escribing

More information

THE GENUINE OMEGA-REGULAR UNITARY DUAL OF THE METAPLECTIC GROUP

THE GENUINE OMEGA-REGULAR UNITARY DUAL OF THE METAPLECTIC GROUP THE GENUINE OMEGA-REGULAR UNITARY DUAL OF THE METAPLECTIC GROUP ALESSANDRA PANTANO, ANNEGRET PAUL, AND SUSANA A. SALAMANCA-RIBA Abstract. We classify all genuine unitary representations of the metaplectic

More information

Robustness and Perturbations of Minimal Bases

Robustness and Perturbations of Minimal Bases Robustness an Perturbations of Minimal Bases Paul Van Dooren an Froilán M Dopico December 9, 2016 Abstract Polynomial minimal bases of rational vector subspaces are a classical concept that plays an important

More information

MARTIN HENK AND MAKOTO TAGAMI

MARTIN HENK AND MAKOTO TAGAMI LOWER BOUNDS ON THE COEFFICIENTS OF EHRHART POLYNOMIALS MARTIN HENK AND MAKOTO TAGAMI Abstract. We present lower bouns for the coefficients of Ehrhart polynomials of convex lattice polytopes in terms of

More information

Discrete Operators in Canonical Domains

Discrete Operators in Canonical Domains Discrete Operators in Canonical Domains VLADIMIR VASILYEV Belgoro National Research University Chair of Differential Equations Stuencheskaya 14/1, 308007 Belgoro RUSSIA vlaimir.b.vasilyev@gmail.com Abstract:

More information

Chapter 6: Energy-Momentum Tensors

Chapter 6: Energy-Momentum Tensors 49 Chapter 6: Energy-Momentum Tensors This chapter outlines the general theory of energy an momentum conservation in terms of energy-momentum tensors, then applies these ieas to the case of Bohm's moel.

More information

Introduction to the Vlasov-Poisson system

Introduction to the Vlasov-Poisson system Introuction to the Vlasov-Poisson system Simone Calogero 1 The Vlasov equation Consier a particle with mass m > 0. Let x(t) R 3 enote the position of the particle at time t R an v(t) = ẋ(t) = x(t)/t its

More information

u!i = a T u = 0. Then S satisfies

u!i = a T u = 0. Then S satisfies Deterministic Conitions for Subspace Ientifiability from Incomplete Sampling Daniel L Pimentel-Alarcón, Nigel Boston, Robert D Nowak University of Wisconsin-Maison Abstract Consier an r-imensional subspace

More information

All s Well That Ends Well: Supplementary Proofs

All s Well That Ends Well: Supplementary Proofs All s Well That Ens Well: Guarantee Resolution of Simultaneous Rigi Boy Impact 1:1 All s Well That Ens Well: Supplementary Proofs This ocument complements the paper All s Well That Ens Well: Guarantee

More information

The derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x)

The derivative of a function f(x) is another function, defined in terms of a limiting expression: f(x + δx) f(x) Y. D. Chong (2016) MH2801: Complex Methos for the Sciences 1. Derivatives The erivative of a function f(x) is another function, efine in terms of a limiting expression: f (x) f (x) lim x δx 0 f(x + δx)

More information

MA 2232 Lecture 08 - Review of Log and Exponential Functions and Exponential Growth

MA 2232 Lecture 08 - Review of Log and Exponential Functions and Exponential Growth MA 2232 Lecture 08 - Review of Log an Exponential Functions an Exponential Growth Friay, February 2, 2018. Objectives: Review log an exponential functions, their erivative an integration formulas. Exponential

More information

Implementing Gentry s Fully-Homomorphic Encryption Scheme Preliminary Report

Implementing Gentry s Fully-Homomorphic Encryption Scheme Preliminary Report Implementing Gentry s Fully-Homomorphic Encryption Scheme Preliminary Report Craig Gentry Shai Halevi August 5, 2010 Abstract We escribe a working implementation of a variant of Gentry s fully homomorphic

More information

A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks

A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks A PAC-Bayesian Approach to Spectrally-Normalize Margin Bouns for Neural Networks Behnam Neyshabur, Srinah Bhojanapalli, Davi McAllester, Nathan Srebro Toyota Technological Institute at Chicago {bneyshabur,

More information

Computing Exact Confidence Coefficients of Simultaneous Confidence Intervals for Multinomial Proportions and their Functions

Computing Exact Confidence Coefficients of Simultaneous Confidence Intervals for Multinomial Proportions and their Functions Working Paper 2013:5 Department of Statistics Computing Exact Confience Coefficients of Simultaneous Confience Intervals for Multinomial Proportions an their Functions Shaobo Jin Working Paper 2013:5

More information

Some Sieving Algorithms for Lattice Problems

Some Sieving Algorithms for Lattice Problems Foundations of Software Technology and Theoretical Computer Science (Bangalore) 2008. Editors: R. Hariharan, M. Mukund, V. Vinay; pp - Some Sieving Algorithms for Lattice Problems V. Arvind and Pushkar

More information

Time-of-Arrival Estimation in Non-Line-Of-Sight Environments

Time-of-Arrival Estimation in Non-Line-Of-Sight Environments 2 Conference on Information Sciences an Systems, The Johns Hopkins University, March 2, 2 Time-of-Arrival Estimation in Non-Line-Of-Sight Environments Sinan Gezici, Hisashi Kobayashi an H. Vincent Poor

More information

Integration Review. May 11, 2013

Integration Review. May 11, 2013 Integration Review May 11, 2013 Goals: Review the funamental theorem of calculus. Review u-substitution. Review integration by parts. Do lots of integration eamples. 1 Funamental Theorem of Calculus In

More information

Lecture 5: CVP and Babai s Algorithm

Lecture 5: CVP and Babai s Algorithm NYU, Fall 2016 Lattices Mini Course Lecture 5: CVP and Babai s Algorithm Lecturer: Noah Stephens-Davidowitz 51 The Closest Vector Problem 511 Inhomogeneous linear equations Recall that, in our first lecture,

More information

Dimension-Preserving Reductions Between Lattice Problems

Dimension-Preserving Reductions Between Lattice Problems Dimension-Preserving Reductions Between Lattice Problems Noah Stephens-Davidowitz Courant Institute of Mathematical Sciences, New York University. noahsd@cs.nyu.edu Last updated September 6, 2016. Abstract

More information

1 Shortest Vector Problem

1 Shortest Vector Problem Lattices in Cryptography University of Michigan, Fall 25 Lecture 2 SVP, Gram-Schmidt, LLL Instructor: Chris Peikert Scribe: Hank Carter Shortest Vector Problem Last time we defined the minimum distance

More information

Efficient Construction of Semilinear Representations of Languages Accepted by Unary NFA

Efficient Construction of Semilinear Representations of Languages Accepted by Unary NFA Efficient Construction of Semilinear Representations of Languages Accepte by Unary NFA Zeněk Sawa Center for Applie Cybernetics, Department of Computer Science Technical University of Ostrava 17. listopau

More information

Linear Algebra- Review And Beyond. Lecture 3

Linear Algebra- Review And Beyond. Lecture 3 Linear Algebra- Review An Beyon Lecture 3 This lecture gives a wie range of materials relate to matrix. Matrix is the core of linear algebra, an it s useful in many other fiels. 1 Matrix Matrix is the

More information

Iterated Point-Line Configurations Grow Doubly-Exponentially

Iterated Point-Line Configurations Grow Doubly-Exponentially Iterate Point-Line Configurations Grow Doubly-Exponentially Joshua Cooper an Mark Walters July 9, 008 Abstract Begin with a set of four points in the real plane in general position. A to this collection

More information

CHAPTER 1 : DIFFERENTIABLE MANIFOLDS. 1.1 The definition of a differentiable manifold

CHAPTER 1 : DIFFERENTIABLE MANIFOLDS. 1.1 The definition of a differentiable manifold CHAPTER 1 : DIFFERENTIABLE MANIFOLDS 1.1 The efinition of a ifferentiable manifol Let M be a topological space. This means that we have a family Ω of open sets efine on M. These satisfy (1), M Ω (2) the

More information

Diagonalization of Matrices Dr. E. Jacobs

Diagonalization of Matrices Dr. E. Jacobs Diagonalization of Matrices Dr. E. Jacobs One of the very interesting lessons in this course is how certain algebraic techniques can be use to solve ifferential equations. The purpose of these notes is

More information

1 Lecture 20: Implicit differentiation

1 Lecture 20: Implicit differentiation Lecture 20: Implicit ifferentiation. Outline The technique of implicit ifferentiation Tangent lines to a circle Derivatives of inverse functions by implicit ifferentiation Examples.2 Implicit ifferentiation

More information

Thermal conductivity of graded composites: Numerical simulations and an effective medium approximation

Thermal conductivity of graded composites: Numerical simulations and an effective medium approximation JOURNAL OF MATERIALS SCIENCE 34 (999)5497 5503 Thermal conuctivity of grae composites: Numerical simulations an an effective meium approximation P. M. HUI Department of Physics, The Chinese University

More information

Vectors in two dimensions

Vectors in two dimensions Vectors in two imensions Until now, we have been working in one imension only The main reason for this is to become familiar with the main physical ieas like Newton s secon law, without the aitional complication

More information

Introduction to variational calculus: Lecture notes 1

Introduction to variational calculus: Lecture notes 1 October 10, 2006 Introuction to variational calculus: Lecture notes 1 Ewin Langmann Mathematical Physics, KTH Physics, AlbaNova, SE-106 91 Stockholm, Sween Abstract I give an informal summary of variational

More information

SYNCHRONOUS SEQUENTIAL CIRCUITS

SYNCHRONOUS SEQUENTIAL CIRCUITS CHAPTER SYNCHRONOUS SEUENTIAL CIRCUITS Registers an counters, two very common synchronous sequential circuits, are introuce in this chapter. Register is a igital circuit for storing information. Contents

More information

The Principle of Least Action

The Principle of Least Action Chapter 7. The Principle of Least Action 7.1 Force Methos vs. Energy Methos We have so far stuie two istinct ways of analyzing physics problems: force methos, basically consisting of the application of

More information

Proof of SPNs as Mixture of Trees

Proof of SPNs as Mixture of Trees A Proof of SPNs as Mixture of Trees Theorem 1. If T is an inuce SPN from a complete an ecomposable SPN S, then T is a tree that is complete an ecomposable. Proof. Argue by contraiction that T is not a

More information

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control

19 Eigenvalues, Eigenvectors, Ordinary Differential Equations, and Control 19 Eigenvalues, Eigenvectors, Orinary Differential Equations, an Control This section introuces eigenvalues an eigenvectors of a matrix, an iscusses the role of the eigenvalues in etermining the behavior

More information

2Algebraic ONLINE PAGE PROOFS. foundations

2Algebraic ONLINE PAGE PROOFS. foundations Algebraic founations. Kick off with CAS. Algebraic skills.3 Pascal s triangle an binomial expansions.4 The binomial theorem.5 Sets of real numbers.6 Surs.7 Review . Kick off with CAS Playing lotto Using

More information

θ x = f ( x,t) could be written as

θ x = f ( x,t) could be written as 9. Higher orer PDEs as systems of first-orer PDEs. Hyperbolic systems. For PDEs, as for ODEs, we may reuce the orer by efining new epenent variables. For example, in the case of the wave equation, (1)

More information

Table of Common Derivatives By David Abraham

Table of Common Derivatives By David Abraham Prouct an Quotient Rules: Table of Common Derivatives By Davi Abraham [ f ( g( ] = [ f ( ] g( + f ( [ g( ] f ( = g( [ f ( ] g( g( f ( [ g( ] Trigonometric Functions: sin( = cos( cos( = sin( tan( = sec

More information

THE MONIC INTEGER TRANSFINITE DIAMETER

THE MONIC INTEGER TRANSFINITE DIAMETER MATHEMATICS OF COMPUTATION Volume 00, Number 0, Pages 000 000 S 005-578(XX)0000-0 THE MONIC INTEGER TRANSFINITE DIAMETER K. G. HARE AND C. J. SMYTH ABSTRACT. We stuy the problem of fining nonconstant monic

More information

Counting Lattice Points in Polytopes: The Ehrhart Theory

Counting Lattice Points in Polytopes: The Ehrhart Theory 3 Counting Lattice Points in Polytopes: The Ehrhart Theory Ubi materia, ibi geometria. Johannes Kepler (1571 1630) Given the profusion of examples that gave rise to the polynomial behavior of the integer-point

More information

FIRST YEAR PHD REPORT

FIRST YEAR PHD REPORT FIRST YEAR PHD REPORT VANDITA PATEL Abstract. We look at some potential links between totally real number fiels an some theta expansions (these being moular forms). The literature relate to moular forms

More information

A New Vulnerable Class of Exponents in RSA

A New Vulnerable Class of Exponents in RSA A ew Vulnerable Class of Exponents in RSA Aberrahmane itaj Laboratoire e Mathématiues icolas Oresme Campus II, Boulevar u Maréchal Juin BP 586, 4032 Caen Ceex, France. nitaj@math.unicaen.fr http://www.math.unicaen.fr/~nitaj

More information

Convergence of Random Walks

Convergence of Random Walks Chapter 16 Convergence of Ranom Walks This lecture examines the convergence of ranom walks to the Wiener process. This is very important both physically an statistically, an illustrates the utility of

More information

Pseudo-Free Families of Finite Computational Elementary Abelian p-groups

Pseudo-Free Families of Finite Computational Elementary Abelian p-groups Pseuo-Free Families of Finite Computational Elementary Abelian p-groups Mikhail Anokhin Information Security Institute, Lomonosov University, Moscow, Russia anokhin@mccme.ru Abstract We initiate the stuy

More information

Assignment 1. g i (x 1,..., x n ) dx i = 0. i=1

Assignment 1. g i (x 1,..., x n ) dx i = 0. i=1 Assignment 1 Golstein 1.4 The equations of motion for the rolling isk are special cases of general linear ifferential equations of constraint of the form g i (x 1,..., x n x i = 0. i=1 A constraint conition

More information

05 The Continuum Limit and the Wave Equation

05 The Continuum Limit and the Wave Equation Utah State University DigitalCommons@USU Founations of Wave Phenomena Physics, Department of 1-1-2004 05 The Continuum Limit an the Wave Equation Charles G. Torre Department of Physics, Utah State University,

More information

Euler equations for multiple integrals

Euler equations for multiple integrals Euler equations for multiple integrals January 22, 2013 Contents 1 Reminer of multivariable calculus 2 1.1 Vector ifferentiation......................... 2 1.2 Matrix ifferentiation........................

More information

Pure Further Mathematics 1. Revision Notes

Pure Further Mathematics 1. Revision Notes Pure Further Mathematics Revision Notes June 20 2 FP JUNE 20 SDB Further Pure Complex Numbers... 3 Definitions an arithmetical operations... 3 Complex conjugate... 3 Properties... 3 Complex number plane,

More information

A Note on Exact Solutions to Linear Differential Equations by the Matrix Exponential

A Note on Exact Solutions to Linear Differential Equations by the Matrix Exponential Avances in Applie Mathematics an Mechanics Av. Appl. Math. Mech. Vol. 1 No. 4 pp. 573-580 DOI: 10.4208/aamm.09-m0946 August 2009 A Note on Exact Solutions to Linear Differential Equations by the Matrix

More information

A Novel Decoupled Iterative Method for Deep-Submicron MOSFET RF Circuit Simulation

A Novel Decoupled Iterative Method for Deep-Submicron MOSFET RF Circuit Simulation A Novel ecouple Iterative Metho for eep-submicron MOSFET RF Circuit Simulation CHUAN-SHENG WANG an YIMING LI epartment of Mathematics, National Tsing Hua University, National Nano evice Laboratories, an

More information

Math 342 Partial Differential Equations «Viktor Grigoryan

Math 342 Partial Differential Equations «Viktor Grigoryan Math 342 Partial Differential Equations «Viktor Grigoryan 6 Wave equation: solution In this lecture we will solve the wave equation on the entire real line x R. This correspons to a string of infinite

More information