Lattice Cryptography

CSE 206A: Lattice Algorithms and Applications, Winter 2012
Instructor: Daniele Micciancio
UCSD CSE

Many problems on point lattices are computationally hard. One of the most important is the closest vector problem (CVP): given a lattice Λ ⊂ R^n, a target point t ∈ R^n, and a distance bound d, find a lattice point v ∈ Λ at distance ‖t − v‖ ≤ d from the target (provided such a lattice point exists). In the exact version of CVP, the distance bound is taken to be the distance d = μ(t, Λ) = min_{v ∈ Λ} ‖t − v‖ between the target and the lattice. In the approximate problem CVP_γ one sets d = γ·μ.

Two special versions of CVP play a prominent role in cryptography: (1) the Bounded Distance Decoding problem (BDD), where d < λ_1/2, and (2) the Absolute Distance Decoding problem (ADD), where d ≥ ρ, the covering radius. The importance of these specific parameter settings is that when d < λ_1/2, if a solution exists, then it is unique. On the other hand, when d ≥ ρ, a solution is always guaranteed to exist (for any target t), and it is generally not unique. Easier variants of BDD and ADD are obtained by introducing a slackness factor γ ≥ 1 and strengthening the constraints on d to d < λ_1/(2γ) (for BDD_γ) and d ≥ γ·ρ (for ADD_γ). Informally, BDD_γ is the problem of finding the lattice vector closest to a target when the target is very close to the lattice, while ADD_γ is the problem of finding a lattice vector not too far from the target, where "far" is measured with respect to the absolute bound ρ within which a solution is always guaranteed to exist. As usual, the approximation factor γ can be a function of the dimension. No efficient algorithm is known to solve BDD or ADD, even approximately, for any approximation factor polynomial in the dimension of the lattice, in the worst case.
However, cryptography requires problems that are hard to solve on average, so that when you choose your cryptographic key at random, you know with high confidence that the resulting cryptanalysis problem is indeed hard. So, in order to develop cryptographic applications, we need to define appropriate probability distributions (on the lattice Λ and the target t). Later we will show that for some of these distributions it is possible to prove that the resulting problems are hard on average, based on worst-case complexity assumptions. But let us begin with the cryptography first.

1. Random Lattices

The two most common distributions over n-dimensional lattices encountered in cryptography are defined as follows. Fix positive integers k ≤ n and a modulus q, where k (or n − k) serves as the main security parameter. Typically n is a small multiple of k (e.g., n = O(k) or n = O(k log k)) and q is a small prime with O(log k) bits. Notice that q is very small, not at all like the large primes (with O(k) bits) used in number-theoretic cryptography. For any matrix A ∈ Z_q^{k×n} define

    Λ_q^⊥(A) = {x ∈ Z^n : Ax = 0 (mod q)}
    Λ_q(A)   = {x ∈ Z^n : x = A^T s (mod q) for some s ∈ Z_q^k}.
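To make the two definitions concrete, here is a toy-sized sketch in plain Python (the parameters q = 17, k = 2, n = 4 are illustrative only). It samples a random A, produces a point of Λ_q(A) and a nonzero point of Λ_q^⊥(A), and checks that the two lattices pair to multiples of q, in line with the duality discussed below.

```python
import itertools
import random

random.seed(1)
q, k, n = 17, 2, 4
A = [[random.randrange(q) for _ in range(n)] for _ in range(k)]  # A in Z_q^{k x n}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# A point of Lambda_q(A): x = A^T s mod q for some s in Z_q^k.
s = [random.randrange(q) for _ in range(k)]
x = [sum(A[i][j] * s[i] for i in range(k)) % q for j in range(n)]

# A nonzero point of Lambda_q^perp(A), by brute force over a small box
# (toy sizes only; a pigeonhole argument over Z^n / Lambda_q^perp(A)
# guarantees such a point exists with coordinates in {-4, ..., 4}).
v = next(w for w in itertools.product(range(-4, 5), repeat=n)
         if any(w) and all(dot(row, w) % q == 0 for row in A))

# The two lattices pair to multiples of q: they are q-scaled duals of each other.
assert dot(x, v) % q == 0
```

The exhaustive search for v is feasible only because n is tiny; nothing in this sketch is meant to scale beyond toy dimensions.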

Intuitively, Λ_q(A) is the lattice generated by the rows of A modulo q, while Λ_q^⊥(A) is the set of solutions of the system of k linear equations modulo q defined by the rows of A. It is easy to check that Λ_q(A) and Λ_q^⊥(A) are subgroups of Z^n, and therefore they are lattices. It is also apparent from the definition that these lattices are q-ary, i.e., they are periodic modulo q: one can take the finite set Q of lattice points with coordinates in {0, ..., q − 1}, and recover the whole lattice by tiling space with the copies Q + qZ^n. The matrix A used to represent them is not a lattice basis. A lattice basis B for the corresponding lattices can be efficiently computed from A using linear algebra, but it is typically not needed: cryptographic operations are usually expressed and implemented directly in terms of A. A random lattice is obtained by picking A ∈ Z_q^{k×n} uniformly at random. The corresponding distributions are denoted Λ_q(n, k) and Λ_q^⊥(n, k).

Regarding the error vector distribution, one possible way to choose x would be to select it uniformly at random among all integer vectors of bounded norm, but for technical reasons a different distribution is often more convenient. In lattice cryptography, perturbation vectors are typically chosen according to the discrete Gaussian distribution D_α, which picks each x ∈ Z^n with probability (roughly) proportional to exp(−π‖x‖²/α²). The Gaussian distribution has the analytical advantage that the probability of a point x depends only on its norm ‖x‖, and still the coordinates of x can be chosen independently (namely, each with probability proportional to exp(−π·x_i²/α²)). It can be shown that when x is chosen according to this distribution (over Z^n), Pr{‖x‖ > α√n} is exponentially small. So, by truncating a negligibly small tail, D_α can be regarded as a probability distribution over the integer vectors of norm bounded by α√n.
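Because the coordinates are independent, D_α is easy to realize in code by coordinate-wise rejection sampling. The sketch below (helper names are my own) follows the density stated above and truncates the negligible tail.

```python
import math
import random

def sample_d_alpha(alpha):
    """Rejection-sample from the discrete Gaussian D_alpha on Z:
    Pr[x] is proportional to exp(-pi * x**2 / alpha**2)."""
    tail = int(10 * alpha) + 1  # the truncated tail carries negligible mass
    while True:
        x = random.randint(-tail, tail)
        if random.random() < math.exp(-math.pi * x * x / alpha ** 2):
            return x

def sample_error(n, alpha):
    """The coordinates are independent, so an n-dimensional sample
    is just n scalar draws."""
    return [sample_d_alpha(alpha) for _ in range(n)]
```

For example, sample_error(100, 3.0) returns an integer vector whose norm is at most α√n = 30 except with very small probability, matching the truncation argument above.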
Before providing a theoretical justification for using these distributions in cryptography, let us try to get a better understanding of the lattices themselves. We begin by observing that Λ_q^⊥(A) and Λ_q(A) are dual to each other, up to a scaling factor q.

Exercise 1. Show that Λ_q^⊥(A) = q·Λ_q(A)* and Λ_q(A) = q·Λ_q^⊥(A)*. In particular, det(Λ_q^⊥(A))·det(Λ_q(A)) = q^n. Moreover, for any A ∈ Z_q^{k×n}, we have det(Λ_q^⊥(A)) ≤ q^k and det(Λ_q(A)) ≥ q^{n−k}.

To better understand the relation between Λ_q^⊥(n, k) and Λ_q(n, n − k), it is convenient to define two auxiliary distributions. Let Λ̄_q^⊥(n, k) be the conditional distribution of a lattice chosen according to Λ_q^⊥(n, k), given that the lattice has determinant exactly q^k. Similarly, let Λ̄_q(n, n − k) be the conditional distribution of a lattice chosen according to Λ_q(n, n − k), given that the determinant of the lattice is q^k. In both cases, when q is a prime, the condition is equivalent to requiring that the rows of A are linearly independent modulo q. How much do these conditional distributions differ from the original ones? Not much.

Exercise 2. Show that if A ∈ Z_q^{k×n} is chosen uniformly at random, then Pr{det(Λ_q^⊥(A)) = q^k} = Pr{det(Λ_q(A)) = q^{n−k}} ≥ 1 − 1/q^{n−k}. Moreover, the conditional distributions Λ̄_q^⊥(n, k) = Λ̄_q(n, n − k) are identical.

So, for typical settings of the parameters (e.g., n ≥ 2k), lattices chosen according to Λ_q^⊥(n, k) or Λ_q(n, n − k) have determinant q^k except with negligible probability ε ≤ q^{−k}, and the distributions Λ_q^⊥(n, k) and Λ_q(n, n − k) are almost identical, because they are both statistically close to Λ̄_q^⊥(n, k) = Λ̄_q(n, n − k).
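In tiny dimension the determinant bounds of Exercises 1 and 2 can be verified by brute force: det(Λ_q^⊥(A)) = |Z^n / Λ_q^⊥(A)| equals the number of distinct syndromes Ax mod q, a power of q that is at most q^k, with equality exactly when the rows of A are independent modulo q. A toy check (the parameters q = 5, k = 2, n = 4 are illustrative only):

```python
import itertools
import random

random.seed(5)
q, k, n = 5, 2, 4
A = [[random.randrange(q) for _ in range(n)] for _ in range(k)]

# det(Lambda_q^perp(A)) equals the number of distinct syndromes A x mod q,
# since Z^n / Lambda_q^perp(A) is isomorphic to the image of the map x -> A x mod q.
syndromes = {tuple(sum(a * b for a, b in zip(row, x)) % q for row in A)
             for x in itertools.product(range(q), repeat=n)}
det_perp = len(syndromes)

assert det_perp <= q ** k        # Exercise 1: det(Lambda_q^perp(A)) <= q^k
assert (q ** n) % det_perp == 0  # the image is a subgroup, so its size is a power of q
```

Exercise 2 says the first assertion is an equality except with probability about 1/q^{n−k}; the enumeration of q^n inputs is, of course, only feasible at toy sizes.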

We now move on to estimating the parameters of lattices chosen according to these distributions. Clearly, we always have λ_1 ≤ ... ≤ λ_n ≤ q and ρ ≤ √n·q, because qI gives a set of n linearly independent lattice vectors of length q. In fact, from Minkowski's theorem and Exercise 1, we know that λ_1(Λ) ≤ √n·q^{k/n} for any lattice Λ = Λ_q^⊥(A), since det(Λ) ≤ q^k. This upper bound is essentially tight.

Exercise 3. Show that there is a constant δ > 0 such that if Λ is chosen according to Λ̄_q^⊥(n, k), then Pr{λ_1(Λ) < δ·√n·q^{k/n}} ≤ 1/2^n. [Hint: consider all integer vectors of norm at most δ·√n·q^{k/n} and use a union bound.]

What about the other parameters λ_n and ρ? These parameters are also very close to Minkowski's upper bound with high probability.

Exercise 4. Show that if Λ is chosen according to Λ̄_q^⊥(n, k), then Pr{ρ(Λ) > (1/δ)·√n·q^{k/n}} ≤ 1/2^n. [Hint: prove the bound for Λ̄_q(n, n − k) = Λ̄_q^⊥(n, k) instead, and use duality and the transference theorems.]

In summary, when a lattice is chosen according to Λ_q^⊥(n, k) ≈ Λ̄_q^⊥(n, k) = Λ̄_q(n, n − k) ≈ Λ_q(n, n − k), all the parameters λ_1, ..., λ_n, ρ are within a constant factor of Minkowski's bound √n·q^{k/n} with overwhelming probability. This provides very useful information about what it means to solve ADD or BDD on these random lattices. Let Λ be a lattice chosen according to Λ_q^⊥(n, k), and let t = v + x be a lattice point v ∈ Λ perturbed by a noise vector x chosen according to the distribution D_{c·q^{k/n}} over B(c·√n·q^{k/n}). Finding a lattice point within distance c·√n·q^{k/n} of t is an ADD instance when c > 1/δ, and a BDD instance when c < δ/2.

2. One-way functions

We now show that solving ADD and BDD on random lattices can be formulated as the problem of inverting the function

(2.1)    f_A(x) = Ax (mod q)

when the matrix A ∈ Z_q^{k×n} is chosen uniformly at random and the input is chosen according to the distribution D_{c·q^{k/n}}, for some c > 0.
Of course, given a matrix A and a value b = f_A(x), recovering some preimage of b under f_A is just a matter of performing some linear algebra, and it can be efficiently accomplished in a variety of ways. In order to get a hard-to-invert function from (2.1), one needs to regard D_{c·q^{k/n}} as a probability distribution over B(c·√n·q^{k/n}), and consider f_A as a function with this domain.

The relation between inverting f_A(x) = b (i.e., finding small solutions to the inhomogeneous system Ax = b (mod q)) and lattice problems is easily explained. Using linear algebra, one can efficiently find an arbitrary solution t ∈ Z_q^n to the system, but this solution will generally have large entries and not belong to the domain of f_A. Linear algebra gives us a little more than an arbitrary solution: it tells us that any solution of the inhomogeneous system Ax = b can be expressed as the sum of any fixed solution t of At = b (mod q) and a solution z of the homogeneous system Az = 0 (mod q). But the set of solutions of the homogeneous system is precisely the lattice Λ_q^⊥(A). So, finding a small solution x = t + z is equivalent to finding a lattice point −z ∈ Λ_q^⊥(A) within distance ‖t − (−z)‖ = ‖x‖ of the target t. In summary, inverting f_A is equivalent to the problem of finding lattice vectors in Λ_q^⊥(A) within distance c·√n·q^{k/n}

from the target. If A is chosen uniformly at random, this corresponds to selecting the lattice according to the distribution Λ_q^⊥(n, k) and the error vector according to D_α with α = c·q^{k/n}. Depending on the value of c, this is an ADD problem (for c > 1/δ) or a BDD problem (for c < δ/2).

These two ranges for c also correspond to very different statistical properties of the function f_A. Namely, when c < δ/2, the function f_A is injective with high probability (which corresponds to BDD having at most one solution). Similarly, when c > 1/δ, the function f_A is surjective with high probability. In fact, one can say even more: when c > 1/δ, all the output values in Z_q^k have almost the same probability under f_A(D_{c·q^{k/n}}). However, while the statistical properties of f_A are quite different for small and large c, it turns out that if f_A is a one-way function, then it will look pretty much like a bijection to any computationally bounded adversary, regardless of the value of c:

(1) No polynomial-time adversary can distinguish the output distribution f_A(D_{c·q^{k/n}}) from the uniform distribution over Z_q^k.
(2) No polynomial-time adversary can find two distinct input values x ≠ y such that f_A(x) = f_A(y).

The above properties can be formally proved based on the assumption that f_A is hard to invert: any efficient algorithm that finds collisions f_A(x) = f_A(y), or that tells the two distributions f_A(D_{c·q^{k/n}}) and the uniform distribution over Z_q^k apart (with non-negligible advantage over the choice of A), can be turned into an efficient algorithm to invert f_A (again, for randomly chosen A). We will formally prove some of these properties later on. For now, we observe that these properties immediately give simple cryptographic applications, like collision-resistant hash functions and pseudorandom generators, and move on to describe more complex applications, like public-key encryption. Lattice duality can be used to give a different (but essentially equivalent) construction of one-way functions.
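Before moving to the dual formulation, the coset structure behind the inversion argument is easy to see in code: every solution of At = b (mod q) differs from the short preimage x by an element of Λ_q^⊥(A), so inverting f_A amounts to finding the lattice point near t. A toy sketch (parameters chosen only for illustration):

```python
import random

random.seed(2)
q, k, n = 17, 2, 6
A = [[random.randrange(q) for _ in range(n)] for _ in range(k)]

def f(x):
    """f_A(x) = A x (mod q); one-way only when restricted to short inputs."""
    return tuple(sum(a * b for a, b in zip(row, x)) % q for row in A)

x = [random.choice([-1, 0, 1]) for _ in range(n)]  # a short preimage
b = f(x)

# Shifting x by any homogeneous solution gives another, larger preimage of b;
# q*e_1 is the simplest nonzero element of Lambda_q^perp(A).
t = [x[0] + q] + x[1:]
assert f(t) == b

# The difference of any two preimages lies in Lambda_q^perp(A).
z = [ti - xi for ti, xi in zip(t, x)]
assert f(z) == (0,) * k
```

Recovering the short x from an arbitrary solution t is exactly the problem of finding the lattice point t − x of Λ_q^⊥(A) within distance ‖x‖ of t.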
Assume the lattice is chosen according to the (almost identical) distribution Λ_q(n, n − k), i.e., Λ = Λ_q(A′) for a random A′ ∈ Z_q^{(n−k)×n}. Then the ADD/BDD instance corresponding to the lattice Λ and error vector x ← D_α can also be formulated by picking a random lattice point v ∈ Λ and perturbing it by x to obtain the target t = v + x. Since the lattice is periodic modulo q, all vectors can be reduced modulo q, and the lattice point can be chosen uniformly at random from the finite set Λ mod q. The random lattice point can be chosen as v = A′^T s mod q for a uniformly random s ∈ Z_q^{n−k}, and the resulting one-way function is

    g_{A′}(s, x) = A′^T s + x (mod q).

Notice that the input to this function consists of two parts: a uniformly random s ∈ Z_q^{n−k}, and a short (typically Gaussian) error vector x, which corresponds to the input of the original function f_A. We remark that the two matrices A and A′ associated to the two functions have different dimensions. In particular, they are not the same matrix; rather, they correspond to the primal and dual definitions of the same q-ary lattice Λ_q(A′) = Λ_q^⊥(A). But, syntactic and representational issues aside, f_A and g_{A′} are essentially the same one-way function. Both formulations are convenient in different contexts, and we will use the two formulations interchangeably. A more important issue is the length of the error vector x. This vector is drawn from a (truncated) distribution D_α that produces vectors of length at most α√n. When α is sufficiently small, both functions are one-to-one (and produce pseudorandom output), while when α is sufficiently large, the output is statistically close to uniform and the functions are collision resistant.
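The dual formulation can be sketched directly: pick a random lattice point A′^T s mod q of Λ_q(A′) and perturb it by a short error. For readability the error below has ±1 coordinates, a stand-in for the Gaussian D_α rather than the distribution the notes prescribe; all parameters are illustrative.

```python
import random

random.seed(3)
q, n, k = 97, 8, 3
m = n - k                                          # A' is m x n with m = n - k
Ap = [[random.randrange(q) for _ in range(n)] for _ in range(m)]

def g(s, x):
    """g_{A'}(s, x) = A'^T s + x (mod q): a lattice point plus short noise."""
    return [(sum(Ap[i][j] * s[i] for i in range(m)) + x[j]) % q for j in range(n)]

s = [random.randrange(q) for _ in range(m)]        # uniform part of the input
x = [random.choice([-1, 0, 1]) for _ in range(n)]  # short error (stand-in for D_alpha)
t = g(s, x)                                        # a BDD target hiding both s and x

# t differs from the lattice point v = A'^T s mod q only by the short error x.
v = [sum(Ap[i][j] * s[i] for i in range(m)) % q for j in range(n)]
assert all((tj - vj) % q in (0, 1, q - 1) for tj, vj in zip(t, v))
```

Inverting g on this instance means recovering (s, x) from t, i.e., decoding the perturbed lattice point; this is the formulation used for key generation below.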

3. Public Key Encryption

A public key encryption scheme consists of three algorithms:
(1) a key generation algorithm KeyGen that, on input a security parameter, outputs a pair of secret and public keys (sk, pk);
(2) an encryption algorithm Enc(pk, m; r) that, on input the public key pk, a message m, and some randomness r, outputs a ciphertext c;
(3) a decryption algorithm Dec(sk, c) that, on input the secret key sk and the ciphertext c, recovers the original message m.

To simplify the presentation and the security definitions, we will consider encryption schemes where the message m ∈ {0, 1} consists of a single bit. Longer messages can be encrypted one bit at a time. In practice, it is easy to extend the schemes described here to more efficient systems that encrypt longer messages in one shot. A bit encryption scheme is secure if no polynomial-time adversary A can distinguish an encryption of 0 from an encryption of 1. More precisely, consider the following experiment:
(1) (sk, pk) ← KeyGen(k)
(2) b ← {0, 1}
(3) b′ ← A(pk, Enc(pk, b))
The adversary is successful in the attack if b′ = b. Clearly, an adversary can always succeed with probability 1/2 by guessing at random. The encryption scheme is secure if the advantage |Pr{b′ = b} − 1/2| ≤ ε for an appropriately small ε > 0.¹

How can we encrypt with random lattices? Let A ∈ Z_q^{k×n} be a randomly chosen matrix. We can think of A as a global parameter shared by all users, or let the key generation algorithm choose it at random and include it as part of the public key. As already discussed, A defines two lattices Λ_q(A) and Λ_q^⊥(A). Public key generation and encryption work as follows.

Key Generation: The secret key sk is chosen simply as a random short vector x ← D_α (over Z^n). This error vector and the lattice Λ_q(A) define a hard CVP instance (Λ_q(A), t), where t = g_A(s, x) = A^T s + x (mod q) for a uniformly random s ∈ Z_q^k. The vector t is used as the public key pk.
The public key generation can be interpreted as the process of taking a random lattice Λ_q(A) and augmenting it by planting in it a very short vector x (the secret key), to obtain Λ′ = Λ_q(x^T, A) = Λ_q(t^T, A) ⊇ Λ_q(A).² Since the output of g_A is pseudorandom, the public key vector t is computationally indistinguishable from a uniformly random vector in Z_q^n, and the lattice Λ′ = Λ_q(t^T, A) follows a distribution that is computationally indistinguishable from Λ_q(n, k + 1).

Encryption: The randomness used for encryption is also a short vector r ← D_β (over Z^n), which defines a CVP instance in the (scaled) dual lattice q·(Λ′)* = q·Λ_q(t^T, A)* = Λ_q^⊥(t^T, A). This time it is convenient to formulate the CVP instance as the problem of inverting the function f_{A′}(r) = A′r (mod q), where A′ = (t^T, A). The message is encrypted by first encoding it

¹ The security of cryptographic functions is usually measured in bits: k bits of security means that any adversary running in time at most 2^k has advantage at most ε ≤ 2^{−k}.
² We use the notation (x^T, A) for the matrix with x^T as its first row and the remaining rows given by A.

into a vector encode(m) ∈ Z_q^{k+1} and then using f_{A′}(r) as a one-time pad to compute the ciphertext c = f_{A′}(r) + encode(m). The encoding function should be chosen in such a way as to allow decryption; but before we get to that, let us examine the security of the scheme. We will show that the scheme is secure regardless of the encoding function.

Assume there is an adversary A that breaks the security of the scheme. Security is defined by the following experiment:
(1) A matrix A and vectors s, x are chosen with the appropriate probability distributions. Let A′ = (g_A(s, x)^T, A).
(2) Pick a random r and compute y = f_{A′}(r).
(3) Choose a message bit m ∈ {0, 1} at random and compute the ciphertext c_m = y + encode(m).
(4) Run the adversary A on input (A′, c_m). The adversary wins if it outputs m.

Let δ_1 be the probability that A correctly guesses m in the experiment above. We want to prove that the advantage |δ_1 − 1/2| is small. Consider a modified experiment where the first step is replaced by choosing A′ ∈ Z_q^{(k+1)×n} uniformly at random, and let δ_2 be the success probability of A in this modified experiment. Notice that this is the same as choosing A as before and replacing g_A(s, x)^T with a uniformly random vector. Since g_A(s, x) is computationally indistinguishable from a uniformly random vector, δ_2 ≈ δ_1, up to a negligible difference. Now consider a further modification of the experiment where, besides replacing step (1) with a randomly chosen A′, we also replace step (2) by choosing y ∈ Z_q^{k+1} uniformly at random. Let δ_3 be the success probability of the adversary in this third experiment. Again, since f_{A′}(r) is pseudorandom, δ_3 ≈ δ_2. Finally, we observe that δ_3 = 1/2, because the input to the adversary is now a uniformly random pair (A′, c), statistically independent of the message m. So δ_1 ≈ δ_2 ≈ δ_3 = 1/2, proving the security of the scheme.

At this point we are only left with the problem of finding an appropriate message encoding that allows decryption (with high probability).

Exercise 5.
Show that the encoding function encode(m) = (m·⌊q/2⌋, 0, ..., 0) allows decrypting a ciphertext using the secret key, provided α and β are small enough.
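Putting the pieces together, here is a toy end-to-end sketch of the scheme, including the encoding of Exercise 5. It is illustrative only: ±1 vectors stand in for the Gaussians D_α and D_β, the parameters are far too small to be secure, and the sketch stores s in the secret key so that decryption can subtract s^T(Ar) from the first ciphertext entry and recover x^T r + m·⌊q/2⌋, which is one natural way to realize the decryption the exercise asks for.

```python
import random

random.seed(4)
q, k, n = 257, 4, 16  # toy parameters: large enough for correctness, far too small for security

def short_vec():
    """Stand-in for the Gaussian distributions D_alpha, D_beta: +-1 coordinates."""
    return [random.choice([-1, 0, 1]) for _ in range(n)]

def keygen():
    A = [[random.randrange(q) for _ in range(n)] for _ in range(k)]
    s = [random.randrange(q) for _ in range(k)]  # uniform s
    x = short_vec()                              # the planted short vector
    t = [(sum(A[i][j] * s[i] for i in range(k)) + x[j]) % q for j in range(n)]
    return (A, t), (A, s)                        # this sketch keeps s as the decryption key

def encrypt(pk, m):
    A, t = pk
    r = short_vec()                              # encryption randomness
    Ap = [t] + A                                 # A' = (t^T, A), a (k+1) x n matrix
    c = [sum(row[j] * r[j] for j in range(n)) % q for row in Ap]  # f_{A'}(r)
    c[0] = (c[0] + m * (q // 2)) % q             # add encode(m) = (m*floor(q/2), 0, ..., 0)
    return c

def decrypt(sk, c):
    A, s = sk
    # c[0] - s^T (A r) = x^T r + m*floor(q/2) (mod q), and |x^T r| <= n << q/4.
    b = (c[0] - sum(s[i] * c[i + 1] for i in range(k))) % q
    return 1 if q // 4 < b < 3 * q // 4 else 0

pk, sk = keygen()
assert all(decrypt(sk, encrypt(pk, m)) == m for m in (0, 1))
```

With these parameters correctness is unconditional, since |x^T r| ≤ n = 16 < q/4 = 64; the security argument, of course, requires the genuine error distributions and much larger parameters.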