Hardness of Non-Interactive Differential Privacy from One-Way Functions


Hardness of Non-Interactive Differential Privacy from One-Way Functions

Lucas Kowalczyk    Tal Malkin    Jonathan Ullman    Daniel Wichs

May 30, 2018

Abstract

A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset D ∈ X^n consisting of some small number of elements n from some large data universe X, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family Q. Ignoring computational constraints, this problem can be solved even when X and Q are exponentially large and n is just a small polynomial; however, all algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation (iO), no efficient differentially private algorithm exists when X and Q can be exponentially large. However, there are no strong separations between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption.

In this work we show that, if one-way functions exist, there is no general-purpose differentially private algorithm that works when X and Q are exponentially large and n is an arbitrary polynomial. In fact, we show that this result holds even if X is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey [Vad16].

Lucas Kowalczyk: Columbia University Department of Computer Science. luke@cs.columbia.edu. Tal Malkin: Columbia University Department of Computer Science. tal@cs.columbia.edu. Jonathan Ullman: Northeastern University College of Computer and Information Science. jullman@ccs.neu.edu. Daniel Wichs: Northeastern University College of Computer and Information Science. wichs@ccs.neu.edu.

1 Introduction

A central challenge in privacy research is to generate rich private summaries of a sensitive dataset. Doing so creates a tension between two competing goals. On one hand, we would like to ensure differential privacy [DMNS06], a strong notion of individual privacy that guarantees no individual's data has a significant influence on the summary. On the other hand, the summary should enable a user to obtain approximate answers to some large set Q of queries. Since the summary must be generated without knowing which queries the user will need to answer, we would like Q to be very large. This problem is sometimes called non-interactive query release, in contrast with interactive query release, where the user is required to specify the (much smaller) set of queries that he needs to answer in advance, and the private answers may be tailored to just these queries.

More specifically, there is a sensitive dataset D = (D_1,...,D_n) ∈ X^n, where each element of D is the data of some individual and comes from some data universe X. We are interested in generating a summary that allows the user to answer statistical queries on D, which are queries of the form "What fraction of the individuals in the dataset satisfy some property q?" [Kea93]. Given a set of statistical queries Q and a data universe X, we would like to design a differentially private algorithm M that takes a dataset D ∈ X^n and outputs a summary that can be used to obtain an approximate answer to every query in Q. Since differential privacy requires hiding the information of single individuals, for a fixed (X, Q) generating a private summary becomes easier as n becomes larger. The overarching goal is to find algorithms that are both private and accurate for X and Q as large as possible and n as small as possible. Since differential privacy is a strong guarantee, a priori we might expect differentially private algorithms to be very limited.
However, a seminal result of Blum, Ligett, and Roth [BLR13] showed how to generate a differentially private summary encoding answers to exponentially many queries. After a series of improvements and extensions [DNR+09, DRV10, RR10, HR14, GRU12, HLM12, NTZ16, Ull15], we know that any set of queries Q over any universe X can be answered given a dataset of size n ≳ √(log|X|) · log|Q| [HR14]. Thus, it is information-theoretically possible to answer huge sets of queries using a small dataset. Unfortunately, all of these algorithms have running time poly(n, |X|, |Q|), which can be exponential in the dimension of the dataset and in the description of a query. For example, if X = {0,1}^d, so that each individual's data consists of d binary attributes, then the dataset has size nd but the running time will be at least 2^d. Thus, these algorithms are only efficient when both X and Q have polynomial size.

There are computationally efficient algorithms when one of Q and X is very large, provided that the other is extremely small, at most n^{2-Ω(1)}. Specifically, (1) the classical technique of perturbing the answer to each query with independent noise requires a dataset of size n ≳ √|Q| [DN03, DN04, BDMN05, DMNS06], and (2) the folklore noisy histogram algorithm (see e.g. [Vad16]) requires a dataset of size n ≳ √(|X| · log|Q|). Thus there are huge gaps between the power of information-theoretic and computationally efficient differentially private algorithms.

Beginning with the work of Dwork et al. [DNR+09], there has been a series of results giving evidence that this gap is inherent, using a connection to traitor-tracing schemes [CFN94]. The first such result, by Dwork et al. [DNR+09], established the first separation between efficient and inefficient differentially private algorithms, proving a polynomial-factor separation in sample complexity between the two cases assuming bilinear cryptography.
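To make the two efficient baselines concrete, here is a minimal sketch in Python (our own illustration, not code from any of the cited works): independent per-query Laplace noise with scale calibrated by basic composition, and a noisy histogram over a small universe. The function names and parameter choices are ours.

```python
import math
import random

def laplace(scale, rng):
    # Sample Laplace(0, scale) by inverse-CDF.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_each_query(D, queries, eps, rng):
    # Baseline (1): add independent noise to every query answer.
    # Basic composition: each of the |Q| queries gets budget eps/|Q|,
    # and a statistical query has sensitivity 1/n, so scale = |Q|/(eps*n).
    n = len(D)
    scale = len(queries) / (eps * n)
    return [sum(q(x) for x in D) / n + laplace(scale, rng) for q in queries]

def noisy_histogram(D, universe, queries, eps, rng):
    # Baseline (2): privately estimate the empirical distribution once,
    # then answer every query from the noisy histogram.
    n = len(D)
    hist = {u: sum(1 for x in D if x == u) / n + laplace(2.0 / (eps * n), rng)
            for u in universe}
    return [sum(hist[u] * q(u) for u in universe) for q in queries]
```

With n = 1000 and a handful of queries, both sketches answer to within a few hundredths; the point of the bounds quoted above is how the required n scales with |Q| and |X| respectively.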
Subsequently, Boneh and Zhandry [BZ14] proved that, under the much stronger assumption that indistinguishability obfuscation (iO) exists, for a worst-case family of statistical queries there is no computationally efficient algorithm with poly(log|Q| + log|X|) sample complexity. More recently, Kowalczyk et al. [KMUZ16] strengthened these results to show that the two efficient algorithms mentioned above, independent perturbation and the noisy histogram, are optimal up to polynomial factors, also assuming iO.

Reference        | Data Universe |X_κ| | # of Queries |Q_κ| | Dataset Size n(κ) | Assumption
[DNR+09, BSW06]  | exp(κ)             | exp(κ)             | κ^{2-Ω(1)}        | Bilinear Maps
[DNR+09, BZ14]   | exp(κ)             | exp(κ)             | poly(κ)           | iO + OWF
[KMUZ16]         | exp(κ)             | Õ(n^7)             | poly(κ)           | iO + OWF
[KMUZ16]         | Õ(n^7)             | exp(κ)             | poly(κ)           | iO + OWF
This work        | exp(κ^{o(1)})      | exp(κ)             | poly(κ)           | OWF

Table 1: Comparison of hardness results for offline differentially private query release. Each row corresponds to an informal statement of the form "If the assumption holds, then there is no general purpose differentially private algorithm that works when the data universe has size at least |X_κ|, the number of queries is at least |Q_κ|, and the size of the dataset is at most n(κ)." All assumptions are polynomial-time hardness.

These results give a relatively clean picture of the complexity of non-interactive differential privacy, but only if we assume the existence of iO. Recently, in his survey on the foundations of differential privacy [Vad16], Vadhan posed it as an open question to prove hardness of non-interactive differential privacy using standard cryptographic assumptions. In this work, we resolve this open question by proving a strong hardness result for non-interactive differential privacy making only the standard assumption that one-way functions (OWF) exist.

Theorem 1.1. There is a sequence of pairs {(X_κ, Q_κ)}_{κ∈N} with |X_κ| = 2^{2^{poly(log log κ)}} = 2^{κ^{o(1)}} and |Q_κ| = 2^κ such that, assuming the existence of one-way functions, for every polynomial n = n(κ), there is no polynomial-time differentially private algorithm that takes a dataset D ∈ X_κ^n and outputs an accurate answer to every query in Q_κ up to an additive error of ±1/3.

We remark that, in addition to removing the assumption of iO, Theorem 1.1 is actually stronger than that of Boneh and Zhandry [BZ14], since the data universe size can be subexponential in κ, even if we only make standard polynomial-time hardness assumptions.
We leave it as an interesting open question to obtain quantitatively optimal hardness results matching (or even improving) those of [KMUZ16] using standard assumptions. Table 1 summarizes existing hardness results as compared to our work.

Like all of the aforementioned hardness results, the queries constructed in Theorem 1.1 are somewhat complex, and involve computing some cryptographic functionality. A major research direction in differential privacy has been to construct efficient non-interactive algorithms for specific large families of simple queries, or to prove that this problem is hard. The main technique for constructing such algorithms has been to leverage efficient PAC learning algorithms. Specifically, a series of works [BLR13, GHRU13, GRU12, HRS12] have shown that an efficient PAC learning algorithm for a class of concepts related to Q can be used to obtain efficient differentially private algorithms for answering the queries Q. Thus, hardness results for differential privacy imply hardness results for PAC learning. However, it is relatively easy to show the hardness of PAC learning using just OWFs [PV88], and one can even show the hardness of learning simple concept classes (e.g. DNF formulae [DLS14, DSS16]) by using more structured complexity assumptions. One roadblock to proving hardness results for privately answering simple families of queries is that, prior to our work, even proving hardness results for worst-case families of queries required using extremely powerful cryptographic primitives like iO, leaving little room to utilize more structured complexity assumptions to obtain hardness for simple queries. By proving hardness results for differential privacy using only the assumption of one-way functions, we believe our results are an important step towards proving hardness results for simpler families of queries.

Relationship to [GKW17]. A concurrent and independent work by Goyal, Koppula, and Waters [GKW17] also shows how to prove hardness results for non-interactive differential privacy from weaker assumptions than iO. Specifically, they propose a new primitive called risky traitor tracing that has weaker security than standard traitor tracing but is still strong enough to rule out the existence of computationally efficient differentially private algorithms, and they construct such schemes under certain assumptions on composite-order bilinear maps. Unlike our work, their new primitive has applications outside of differential privacy. However, within the context of differential privacy, Theorem 1.1 is stronger than what they prove in two respects: (1) their bilinear-map assumptions are significantly stronger than our assumption of one-way functions, and (2) their hardness result requires a data universe of size |X_κ| = exp(κ), rather than our result, which allows |X_κ| = exp(κ^{o(1)}).

1.1 Techniques

Differential Privacy and Traitor-Tracing Schemes. Our results build on the connection between differentially private algorithms for answering statistical queries and traitor-tracing schemes, which was discovered by Dwork et al. [DNR+09]. Traitor-tracing schemes were introduced by Chor, Fiat, and Naor [CFN94] for the purpose of identifying pirates who violate copyright restrictions.
Roughly speaking, a (fully collusion-resilient) traitor-tracing scheme allows a sender to generate keys for n users so that (1) the sender can broadcast encrypted messages that can be decrypted by any user, and (2) any efficient pirate decoder capable of decrypting messages can be traced to at least one of the users who contributed a key to it, even if an arbitrary coalition of the users combined their keys in an arbitrary efficient manner to construct the decoder.

Dwork et al. show that the existence of traitor-tracing schemes implies hardness results for differential privacy. Very informally, they argue as follows. Suppose a coalition of users takes their keys and builds a dataset D ∈ X^n where each element of the dataset contains one of their user keys. The family Q will contain a query q_c for each possible ciphertext c. The query q_c asks "What fraction of the elements (user keys) in D would decrypt the ciphertext c to the message 1?" Every user can decrypt, so if the sender encrypts a message b ∈ {0,1} as a ciphertext c, then every user will decrypt c to b. Thus, the answer to the statistical query q_c will be b.

Now, suppose there were an efficient algorithm that outputs an accurate answer to each query q_c in Q. Then the coalition could use it to efficiently produce a summary of the dataset D that enables one to efficiently compute an approximate answer to every query q_c, which would also allow one to efficiently decrypt ciphertexts. Such a summary can be viewed as an efficient pirate decoder, and thus the tracing algorithm can use the summary to trace one of the users in the coalition. However, if there is a way to identify one of the users in the dataset from the summary, then the summary is not private.

Hardness of Privacy from OWF. In order to instantiate this outline, we need a sufficiently good traitor-tracing scheme.
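The decryption step of this outline can be sketched in code. The toy "scheme" below is ours and has no tracing or security whatsoever (all users share one PRF key, instantiated with HMAC); it only illustrates how answers to the queries q_c that are accurate to within ±1/3 already suffice to decrypt a ciphertext.

```python
import hmac
import hashlib

MSK = b"toy-master-key"  # hypothetical; a real scheme has distinct user keys

def prf_bit(key, r):
    # One pseudorandom bit from an HMAC-SHA256 PRF.
    return hmac.new(key, r, hashlib.sha256).digest()[0] & 1

def enc(key, b, r):
    return (r, b ^ prf_bit(key, r))

def dec(key, c):
    r, ct = c
    return ct ^ prf_bit(key, r)

n = 100
dataset = [MSK] * n  # each "row" is a user key (here all identical)

def q_c(c, D):
    # The statistical query attached to ciphertext c:
    # what fraction of user keys in D decrypt c to 1?
    return sum(dec(key, c) for key in D) / len(D)

c = enc(MSK, 1, b"nonce-0")
true_answer = q_c(c, dataset)       # equals the plaintext bit, here 1.0
approx_answer = true_answer - 0.3   # any answer accurate to within 1/3
recovered = 1 if approx_answer > 0.5 else 0
```

Since the true answer is the plaintext bit, rounding any answer with error below 1/2 recovers the message, which is exactly why an accurate summary acts as a pirate decoder.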
Traitor-tracing schemes can be constructed from any functional encryption scheme for comparison functions [BSW06]. (These were called private linear broadcast encryption schemes by [BSW06], but we use the more modern terminology of functional encryption.) This is a cryptographic scheme in which secret keys are associated with functions f and ciphertexts are associated with a message x, and decrypting the ciphertext with a secret key corresponding to f reveals f(x) and nothing else. In our application, the functions are of the form f_z where f_z(x) = 1 if and only if x ≥ z (as integers). Using techniques from [KMUZ16] (also closely related to arguments in [BZ16]), we show that, in order to prove hardness results for differentially private algorithms, it suffices to have a functional encryption scheme for comparison functions that is non-adaptively secure for just two ciphertexts and n secret keys. That is, if an adversary chooses to receive keys for n functions f_1,...,f_n and ciphertexts for two messages x_1, x_2, then he learns nothing more than {f_i(x_1), f_i(x_2)}_{i∈[n]}. Moreover, the comparison functions only need to support inputs in {0,1,...,n} (i.e. log n bits). Lastly, it suffices for us to have a symmetric-key functional encryption scheme where both the encryption and key generation can require a private master secret key.

We then construct this type of functional encryption (FE) using the techniques of Gorbunov, Vaikuntanathan, and Wee [GVW12], who constructed bounded-collusion FE from any public-key encryption. There are two important differences between the type of FE that we need and the bounded-collusion FE of [GVW12]: (1) we want a symmetric-key FE based on one-way functions (OWFs), whereas they constructed public-key FE using public-key encryption; (2) we want security for only 2 ciphertexts but many secret keys, whereas they achieved security for many ciphertexts but only a small number of secret keys. It turns out that their construction can be rather easily scaled down from the public-key to the symmetric-key setting by replacing public-key encryption with symmetric-key encryption (as previously observed by, e.g., [BS18]). Going from many ciphertexts and few secret keys to many secret keys and few ciphertexts essentially boils down to exchanging the roles of secret keys and ciphertexts in their scheme, but this requires care.
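The role of the comparison family is easy to check directly: for messages x and x - 1, every comparison key except the one for f_x decrypts them identically, so an adversary missing that single key sees no functional difference between the two messages. A quick sanity check (our own code, with small hypothetical parameters):

```python
def f(z, x):
    # Comparison function: f_z(x) = 1 iff x >= z (as integers).
    return 1 if x >= z else 0

n = 8
for x in range(1, n + 1):
    # Keys for every function except f_x give the same output on x and x-1...
    for z in range(1, n + 1):
        if z != x:
            assert f(z, x) == f(z, x - 1)
    # ...while the missing key f_x is exactly the one that separates them.
    assert f(x, x) == 1 and f(x, x - 1) == 0
```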
We give the full description and analysis of this construction. Lastly, we rely on one additional property: for the simple functions we consider, with logarithmic input length, we can get a scheme where the ciphertext size is extremely small, namely κ^{o(1)} where κ is the security parameter, while relying only on standard polynomial hardness of OWFs. To do so, we replace the garbled circuits used in the construction of [GVW12] with information-theoretic randomized encodings for simple functions, and we leverage the fact that we are in the more restricted non-adaptive secret-key setting. The resulting small ciphertext size allows us to get DP lower bounds even when the data universe is of size |X| = exp(κ^{o(1)}).

We remark that Tang and Zhang [TZ17] proved that any black-box construction of a traitor-tracing scheme from a random oracle must have either keys or ciphertexts of length n^{Ω(1)}, provided that the scheme does not make calls to the random oracle when generating the user keys. Our construction uses one-way functions during key generation, and thus circumvents this barrier.

Why Two-Ciphertext Security? In the hardness reduction sketched above, the adversary for the functional encryption scheme will use the efficient differentially private algorithm to output some stateless program (the summary) that correctly decrypts ciphertexts for the functional encryption scheme (by approximately answering statistical queries). The crux of the proof is to use differential privacy to argue that the scheme must violate security of the functional encryption scheme by distinguishing encryptions of the messages x and x - 1 even though it does not possess a secret key for the function f_x, which is the only function in the family of comparison functions that gives different outputs on these two messages, and therefore an adversary without this key should not be able to distinguish between them.
Thus, in order to obtain a hardness result for differential privacy we need a functional encryption scheme with the following non-standard security definition: for every polynomial-time adversary that obtains a set of secret keys corresponding to functions other than f_x and outputs some stateless program, with high probability that program has small advantage in distinguishing encryptions of x from x - 1. Implicit in the work of Kowalczyk et al. [KMUZ16] is a lemma that says that this property is satisfied by any functional encryption scheme that satisfies the standard notion of security for two messages. At a high level, security for one message allows for the possibility that the adversary sometimes outputs a program with large positive advantage and sometimes outputs a program with large negative advantage, whereas two-message security bounds the average squared advantage, meaning that the advantage must be small with high probability. This argument is similar to one used by Dodis and Yu [DY13] in a completely different setting.

1.2 Additional Related Work

(Hardness of) Interactive Differential Privacy. Another area of focus is interactive differential privacy, where the mechanism gets the dataset D and a (relatively small) set of queries Q chosen by the analyst and must output answers to each query in Q. Most differentially private algorithms for answering a large number of arbitrary queries actually work in this setting [DNR+09, DRV10, HLM12], or even in a more challenging setting where the queries in Q arrive online and may be adaptively chosen [RR10, GRU12, Ull15]. Ullman [Ull16] showed that, assuming one-way functions exist, there is no polynomial-time differentially private algorithm that takes a dataset D ∈ X^n and a set of Õ(n²) arbitrary statistical queries and outputs an accurate answer to each of these queries. The hardness of interactive differential privacy has also been extended to a seemingly easier model of interactive data analysis [HU14, SU15], which is closely related to differential privacy [DFH+15, BNS+16], even though privacy is not an explicit requirement in that model. These results, however, do not give any specific set of queries Q that can be privately summarized information-theoretically but not by a computationally efficient algorithm, and thus do not solve the problem addressed in this work.

The Complexity of Simple Statistical Queries.
As mentioned above, a major open research direction is to design non-interactive differentially private algorithms for simple families of statistical queries. For example, there are polynomial-time differentially private algorithms with polynomial sample complexity for summarizing point queries and threshold queries [BNS13, BNSV15], using an information-theoretically optimal number of samples. Another class of focus has been marginal queries [GHRU13, HRS12, TUV12, CTUW14, DNT14]. A marginal query is defined on the data universe {0,1}^κ. It is specified by a set of positions S ⊆ {1,...,κ} and a pattern t ∈ {0,1}^{|S|}, and asks "What fraction of elements of the dataset have each coordinate j ∈ S set to t_j?" Specifically, Thaler et al. [TUV12], building on the work of Hardt et al. [HRS12], gave an efficient differentially private algorithm for answering all marginal queries up to an additive error of ±.01 when the dataset is of size n ≥ 2^{Õ(√κ)}. If we assume sufficiently hard one-way functions exist, then Theorem 1.1 would show that these parameters are not achievable for an arbitrary set of queries. It remains a central open problem in differential privacy to either design an optimal computationally efficient algorithm for marginal queries or to give evidence that this problem is hard.

Hardness of Synthetic Data. There have been several other attempts to explain the accuracy vs. computation tradeoff in differential privacy by considering restricted classes of algorithms. For example, Ullman and Vadhan [UV11] (building on Dwork et al. [DNR+09]) show that, assuming one-way functions, no differentially private and computationally efficient algorithm that outputs a synthetic dataset can accurately answer even the very simple family of 2-way marginals. A synthetic dataset is a specific type of summary that is interchangeable with the real dataset: it is a set D̂ = (D̂_1,...,D̂_n) ∈ X^n such that the answer to each query on D̂ is approximately the same as the answer to the same query on D.
2-way marginals are just the subset of marginal queries above where we only allow |S| ≤ 2, and these queries capture the means and covariances of the attributes. This result is incomparable to ours, since it applies to a very small and simple family of statistical queries, but only applies to algorithms that output synthetic data.

Information-Theoretic Lower Bounds. A related line of work [BUV14, DTTZ14, BST14, BU17, SU17] uses ideas from fingerprinting codes [BS98] to prove information-theoretic lower bounds on the number of queries that can be answered by differentially private algorithms, and also devises realistic attacks against the privacy of algorithms that attempt to answer too many queries [DSS+15, DSSU17]. Most relevant to this work is the result of [BUV14], which says that if the size of the data universe is 2^{n²}, then there is a fixed set of n² queries that no differentially private algorithm, even a computationally unbounded one, can answer accurately. Although these results are orthogonal to ours, the techniques are quite related, as fingerprinting codes are essentially the information-theoretic analogue of traitor-tracing schemes.

2 Differential Privacy Preliminaries

2.1 Differentially Private Algorithms

A dataset D ∈ X^n is an ordered set of n rows, where each row corresponds to an individual, and each row is an element of some data universe X. We write D = (D_1,...,D_n), where D_i is the i-th row of D. We will refer to n as the size of the dataset. We say that two datasets D, D' ∈ X^* are adjacent if D' can be obtained from D by the addition, removal, or substitution of a single row, and we denote this relation by D ~ D'. In particular, if we remove the i-th row of D then we obtain a new dataset D_{-i} ~ D. Informally, an algorithm A is differentially private if it is randomized and for any two adjacent datasets D ~ D', the distributions of A(D) and A(D') are similar.

Definition 2.1 (Differential Privacy [DMNS06]). Let A : X^n → S be a randomized algorithm. We say that A is (ε,δ)-differentially private if for every two adjacent datasets D ~ D' and every E ⊆ S,

    P[A(D) ∈ E] ≤ e^ε · P[A(D') ∈ E] + δ.

In this definition, ε and δ may be functions of n.
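As a minimal concrete instance of Definition 2.1 (our illustration, not a construction from the paper), randomized response on a single bit satisfies (ε, 0)-differential privacy, and the bound can be verified directly on the output distributions:

```python
import math

def randomized_response_dist(bit, eps):
    # Output distribution of (eps, 0)-DP randomized response on one bit:
    # report the true bit with probability e^eps / (1 + e^eps).
    p = math.exp(eps) / (1.0 + math.exp(eps))
    return {bit: p, 1 - bit: 1.0 - p}

eps = 1.0
d0 = randomized_response_dist(0, eps)  # adjacent one-row datasets: row 0 vs row 1
d1 = randomized_response_dist(1, eps)
for out in (0, 1):
    # Definition 2.1 with delta = 0, checked on every singleton event E.
    assert d0[out] <= math.exp(eps) * d1[out] + 1e-12
    assert d1[out] <= math.exp(eps) * d0[out] + 1e-12
```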
2.2 Algorithms for Answering Statistical Queries

In this work we study algorithms that answer statistical queries (which are also sometimes called counting queries, predicate queries, or linear queries in the literature). For a data universe X, a statistical query on X is defined by a predicate q : X → {0,1}. Abusing notation, we define the evaluation of a query q on a dataset D = (D_1,...,D_n) ∈ X^n to be

    q(D) = (1/n) Σ_{i=1}^{n} q(D_i).

A single statistical query does not provide much useful information about the dataset. However, a sufficiently large and rich set of statistical queries is sufficient to implement many natural machine learning and data mining algorithms [Kea93], thus we are interested in differentially private algorithms to answer such sets. To this end, let Q = {q : X → {0,1}} be a set of statistical queries on a data universe X. Informally, we say that a mechanism is accurate for a set Q of statistical queries if it answers every query in the family to within error ±α for some suitable choice of α > 0. Note that 0 ≤ q(D) ≤ 1, so this definition of accuracy is meaningful when α < 1/2.
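For concreteness, evaluating a statistical query is just averaging a 0/1 predicate over the rows. The snippet below (our own toy data and predicates) evaluates a single-attribute predicate and a marginal-style conjunction:

```python
def evaluate(q, D):
    # q(D) = (1/n) * sum_i q(D_i), always in [0, 1].
    return sum(q(x) for x in D) / len(D)

# Toy dataset of 4 rows with 3 binary attributes each.
D = [(0, 1, 1), (1, 1, 0), (0, 1, 0), (1, 0, 1)]

q_first = lambda row: row[0]        # "is attribute 0 set?"
q_and = lambda row: row[0] & row[1] # conjunction of attributes 0 and 1

assert evaluate(q_first, D) == 0.5
assert evaluate(q_and, D) == 0.25
```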

Before we define accuracy, we note that the mechanism may represent its answer in any form. That is, the mechanism may output a summary S ∈ S that somehow represents the answers to every query in Q. We then require that there is an evaluator Eval : S × Q → [0,1] that takes the summary and a query and outputs an approximate answer to that query. That is, we think of Eval(S, q) as the mechanism's answer to the query q. We will abuse notation and simply write q(S) to mean Eval(S, q).²

Definition 2.2 (Accuracy). For a family Q of statistical queries on X, a dataset D ∈ X^n, and a summary S ∈ S, we say that S is α-accurate for Q on D if

    ∀q ∈ Q,  |q(D) − q(S)| ≤ α.

For a family of statistical queries Q on X, we say that an algorithm A : X^n → S is (α,β)-accurate for Q given a dataset of size n if for every D ∈ X^n,

    P[A(D) is α-accurate for Q on D] ≥ 1 − β.

In this work we are typically interested in mechanisms that satisfy the very weak notion of (1/3, o(1/n))-accuracy, where the constant 1/3 could be replaced with any constant < 1/2. Most differentially private mechanisms satisfy quantitatively much stronger accuracy guarantees. Since we are proving hardness results, this choice of parameters makes our results stronger.

2.3 Computational Efficiency

Since we are interested in asymptotic efficiency, we introduce a computation parameter κ ∈ N. We then consider a sequence of pairs {(X_κ, Q_κ)}_{κ∈N} where Q_κ is a set of statistical queries on X_κ. We consider databases of size n where n = n(κ) is a polynomial. We then consider algorithms A that take as input a dataset D ∈ X_κ^n and output a summary in S_κ, where {S_κ}_{κ∈N} is a sequence of output ranges. There is an associated evaluator Eval that takes a query q ∈ Q_κ and a summary s ∈ S_κ and outputs a real-valued answer. The definitions of differential privacy and accuracy extend straightforwardly to such sequences.
We say that such an algorithm is computationally efficient if the algorithm and the associated evaluator run in time polynomial in the computation parameter κ. In principle, it could require as many as |X| bits even to specify a statistical query, in which case we cannot hope to answer the query efficiently, even ignoring privacy constraints. Thus, we restrict attention to statistical queries that are specified by a circuit of size polylog|X|, and thus can be evaluated in time polylog|X|, and so are not the bottleneck in computation. To remind the reader of this fact, we will often say that Q is a family of efficiently computable statistical queries.

2.4 Notational Conventions

Given a boolean predicate P, we will write I{P} to denote the value 1 if P is true and 0 if P is false. We also say that a function ε = ε(n) is negligible if ε(n) = O(1/n^c) for every constant c > 0, and denote this by ε(n) = negl(n).

² If we do not restrict the running time of the algorithm, then it is without loss of generality for the algorithm to simply output a list of real-valued answers to each query by computing Eval(S, q) for every q ∈ Q. However, this transformation makes the running time of the algorithm at least |Q|. The additional generality of this framework allows the algorithm to run in time sublinear in |Q|. This generality is crucial for our results, which apply to settings where the family of queries is superpolynomially large in the size of the dataset.
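The summary-plus-evaluator abstraction of Section 2.2 can be mimicked with a deliberately non-private toy summary, a subsample of the dataset, just to show the Eval interface and the α-accuracy check of Definition 2.2 (all names and choices here are ours):

```python
def make_summary(D, k):
    # Toy summary: keep the first k rows. (Not private! Only illustrates syntax.)
    return tuple(D[:k])

def Eval(S, q):
    # The evaluator answers a query from the summary alone.
    return sum(q(x) for x in S) / len(S)

def is_alpha_accurate(S, queries, D, alpha):
    # Definition 2.2: every query answered to within +/- alpha.
    n = len(D)
    return all(abs(Eval(S, q) - sum(q(x) for x in D) / n) <= alpha
               for q in queries)

D = [0, 1] * 50                      # perfectly balanced bits
queries = [lambda x: x, lambda x: 1 - x]
S = make_summary(D, 2)               # summary is just (0, 1)
assert is_alpha_accurate(S, queries, D, 0.1)
```

Note that Eval answers queries without revisiting D, which is what lets a mechanism run in time sublinear in |Q|: the work of answering is deferred to the evaluator, one query at a time.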

3 Weakly Secure Traitor-Tracing Schemes

In this section we describe a very relaxed notion of traitor-tracing schemes whose existence will imply the hardness of differentially private data release.

3.1 Syntax and Correctness

For a function n : N → N and a sequence {K_κ, C_κ}_{κ∈N}, an (n, {K_κ, C_κ})-traitor-tracing scheme is a tuple of efficient algorithms Π = (Setup, Enc, Dec) with the following syntax.

Setup takes as input a security parameter κ, runs in time poly(κ), and outputs n = n(κ) secret user keys sk_1,...,sk_n ∈ K_κ and a secret master key msk. We will write sk⃗ = (sk_1,...,sk_n, msk) to denote the set of keys.

Enc takes as input a master key msk and an index i ∈ {0,1,...,n}, and outputs a ciphertext c ∈ C_κ. If c ←_R Enc(msk, j) then we say that c is encrypted to index j.

Dec takes as input a ciphertext c and a user key sk_i and outputs a single bit b ∈ {0,1}. We assume for simplicity that Dec is deterministic.

Correctness of the scheme asserts that if sk⃗ is generated by Setup, then for any pair i, j, Dec(sk_i, Enc(msk, j)) = I{i ≤ j}. For simplicity, we require that this property holds with probability 1 over the coins of Setup and Enc, although it would not affect our results substantively if we required only correctness with high probability.

Definition 3.1 (Perfect Correctness). An (n, {K_κ, C_κ})-traitor-tracing scheme is perfectly correct if for every κ ∈ N, every i ∈ [n], and every j ∈ {0,1,...,n},

    P_{sk⃗=Setup(κ), c=Enc(msk,j)}[Dec(sk_i, c) = I{i ≤ j}] = 1.

3.2 Index-Hiding Security

Intuitively, the security property we want is that any computationally efficient adversary who is missing one of the user keys sk_i cannot distinguish ciphertexts encrypted to index i from ciphertexts encrypted to index i − 1, even if that adversary holds all n − 1 other keys sk_{-i}. In other words, an efficient adversary cannot infer anything about the encrypted index beyond what is implied by the correctness of decryption and the set of keys he holds. More precisely, consider the following two-phase experiment.
First the adversary is given every key except for sk_i, and outputs a decryption program S. Then, a challenge ciphertext is encrypted either to index i or to index i − 1. We say that the traitor-tracing scheme is secure if for every polynomial-time adversary, with high probability over the setup and the decryption program chosen by the adversary, the decryption program has small advantage in distinguishing the two possible indices.

Definition 3.2 (Index Hiding). A traitor-tracing scheme Π satisfies (weak) index-hiding security if for every sufficiently large κ ∈ N, every i ∈ [n(κ)], and every poly(κ)-time adversary A,

    P_{sk⃗=Setup(κ), S=A(sk⃗_{-i})}[ P[S(Enc(msk, i)) = 1] − P[S(Enc(msk, i − 1)) = 1] > 1/(4en) ] ≤ 1/(4en)    (1)

In the above, the inner probabilities are taken over the coins of Enc and S.

Note that in the above definition we have fixed the success probability of the adversary for simplicity. Moreover, we have fixed these probabilities to relatively large ones. Requiring only a polynomially small advantage is crucial to achieving the key and ciphertext lengths we need to obtain our results, while still being sufficient to establish the hardness of differential privacy.

3.3 Index-Hiding Security Implies Hardness for Differential Privacy

It was shown by Kowalczyk et al. [KMUZ16] (refining similar results from [DNR+09, Ull16]) that a traitor-tracing scheme satisfying index-hiding security implies a hardness result for non-interactive differential privacy.

Theorem 3.3. Suppose there is an (n, {K_κ, C_κ})-traitor-tracing scheme that satisfies perfect correctness (Definition 3.1) and index-hiding security (Definition 3.2). Then there is a sequence of pairs {(X_κ, Q_κ)}_{κ∈N}, where Q_κ is a set of statistical queries on X_κ, |Q_κ| = |C_κ|, and X_κ = K_κ, such that there is no algorithm A that is simultaneously

1. computationally efficient,
2. (1, 1/(4n))-differentially private, and
3. (1/3, 1/(2n))-accurate for Q_κ on datasets D ∈ X_κ^{n(κ)}.

3.4 Two-Index-Hiding Security

While Definition 3.2 is the most natural for proving hardness of privacy, it is not consistent with the usual security definition for functional encryption because of the nested probabilities. In order to apply more standard notions of functional encryption, we show that index-hiding security follows from a more natural form of security for two ciphertexts. First, consider the following IndexHiding game.

  The challenger generates keys sk⃗ = (sk_1,...,sk_n, msk) ←_R Setup(κ).
  The adversary A is given keys sk⃗_{-i} and outputs a decryption program S.
  The challenger chooses a bit b ←_R {0,1}.
  The challenger generates an encryption to index i − b: c ←_R Enc(msk, i − b).
  The adversary makes a guess b' = S(c).

Figure 1: IndexHiding_i

Let IndexHiding_{i,sk⃗,S} be the game IndexHiding_i where we fix the choices of sk⃗ and S.
Also, define

    Adv_{i,sk,S} = Pr_{IndexHiding_{i,sk,S}}[ b' = b ] − 1/2,

so that

    Pr_{IndexHiding_i}[ b' = b ] − 1/2 = E_{sk←Setup(κ), S←A(sk_{−i})}[ Adv_{i,sk,S} ].

Then the following statement implies (1) in Definition 3.2:

    Pr_{sk←Setup(κ), S←A(sk_{−i})} [ Adv_{i,sk,S} > 1/(4en) ] ≤ 1/(2en)    (2)
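The link between Adv_{i,sk,S} and the distinguishing gap appearing in Definition 3.2 is a short calculation. The following sketch verifies it exactly for one hypothetical fixed (sk, S); the probabilities p1, p0 below are made-up numbers for illustration only.

```python
from fractions import Fraction as Fr

# Hypothetical acceptance probabilities of a fixed decryption program S:
# p1 = Pr[S(Enc(msk, i)) = 1], p0 = Pr[S(Enc(msk, i - 1)) = 1].
p1 = Fr(7, 10)
p0 = Fr(2, 5)

# In IndexHiding_i: b is uniform, c encrypts i - b, and the guess is b' = S(c).
# b = 0: the guess is correct iff S outputs 0 on an encryption of i.
# b = 1: the guess is correct iff S outputs 1 on an encryption of i - 1.
pr_correct = Fr(1, 2) * (1 - p1) + Fr(1, 2) * p0
adv = pr_correct - Fr(1, 2)

# The advantage in the game is (up to sign) half the distinguishing gap
# between encryptions of i and of i - 1.
assert adv == (p0 - p1) / 2
```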

    The challenger generates keys sk = (sk_1, ..., sk_n, msk) ←R Setup.
    The adversary A is given keys sk_{−i} and outputs a decryption program S.
    Choose b_0 ←R {0,1} and b_1 ←R {0,1} independently.
    Let c_0 ←R Enc(msk, i − b_0) and c_1 ←R Enc(msk, i − b_1).
    Let b' = S(c_0, c_1).

Figure 2: TwoIndexHiding_i

We can define a related two-index-hiding game, shown in Figure 2. Analogous to what we did with IndexHiding, we define TwoIndexHiding_{i,sk,S} to be the game TwoIndexHiding_i where we fix the choices of sk and S, and define

    TwoAdv_i = Pr_{TwoIndexHiding_i}[ b' = b_0 ⊕ b_1 ] − 1/2.

Kowalczyk et al. [KMUZ16] proved the following lemma, which will be useful to connect our new construction to the type of security definition that implies hardness of differential privacy.

Lemma 3.4. Let Π be a traitor-tracing scheme such that for every efficient adversary A, every κ ∈ N, and every index i ∈ [n(κ)],

    TwoAdv_i ≤ 1/(300n^3).

Then Π satisfies weak index-hiding security.

In the rest of the paper, we will construct a scheme satisfying the assumption of the above lemma with suitable key and ciphertext lengths, which we can immediately plug into Theorem 3.3 to obtain Theorem 1.1 in the introduction.

4 Cryptographic Tools

4.1 Decomposable Randomized Encodings

Let F = {f : {0,1}^l → {0,1}^k} be a family of Boolean functions. An (information-theoretic) decomposable randomized encoding for F is a pair of efficient algorithms (DRE.Encode, DRE.Decode) such that the following hold:

DRE.Encode takes as input a function f ∈ F and randomness R and outputs a randomized encoding consisting of a set of l pairs of labels

    F̃(f, R) = { F̃_1(f, 0, R), ..., F̃_l(f, 0, R) ; F̃_1(f, 1, R), ..., F̃_l(f, 1, R) }

where the i-th pair of labels corresponds to the i-th bit of the input x.

(Correctness) DRE.Decode takes as input a set of l labels corresponding to some function f and input x and outputs f(x). Specifically, for every f ∈ F and x ∈ {0,1}^l,

    DRE.Decode( F̃_1(f, x_1, R), ..., F̃_l(f, x_l, R) ) = f(x)

with probability 1 over the randomness R.

(Information-Theoretic Security) For every function f and input x, the set of labels corresponding to f and x reveals nothing other than f(x). Specifically, there exists a randomized simulator DRE.Sim that depends only on the output f(x) such that for every f ∈ F and x ∈ {0,1}^l,

    ( F̃_1(f, x_1, R), ..., F̃_l(f, x_l, R) ) ≡ DRE.Sim(f(x))

where ≡ denotes that the two random variables are identically distributed.

The length of the randomized encoding is the maximum length of F̃(f, R) over all choices of f ∈ F and the randomness R.

We will use the fact that functions computable in low depth have small decomposable randomized encodings.

Theorem 4.1 ([Bar86, Kil88]). If F is a family of functions such that a universal function for F, U(f, x) = f(x), can be computed by Boolean formulae of depth d (with fan-in 2, over the basis {∧, ∨, ¬}), then F has an information-theoretic decomposable randomized encoding of length O(4^d).

4.2 Private-Key Functional Encryption

Let F = {f : {0,1}^l → {0,1}^k} be a family of functions. A private-key functional encryption scheme for F is a tuple of polynomial-time algorithms Π_FE = (FE.Setup, FE.KeyGen, FE.Enc, FE.Dec) with the following syntax and properties:

FE.Setup takes a security parameter κ and outputs a master secret key FE.msk.

FE.KeyGen takes a master secret key FE.msk and a function f ∈ F and outputs a secret key FE.sk_f corresponding to the function f.

FE.Enc takes the master secret key FE.msk and an input x ∈ {0,1}^l and outputs a ciphertext c corresponding to the input x.

(Correctness) FE.Dec takes a secret key FE.sk_f corresponding to a function f and a ciphertext c corresponding to an input x and outputs f(x). Specifically, for every FE.msk in the support of FE.Setup,

    FE.Dec( FE.KeyGen(FE.msk, f), FE.Enc(FE.msk, x) ) = f(x).

The key length is the maximum length of FE.sk_f over all choices of f ∈ F and the randomness of FE.Setup, FE.KeyGen. The ciphertext length is the maximum length of c over all choices of x ∈ {0,1}^l and the randomness of FE.Setup, FE.Enc.
(Security) We will use a non-adaptive simulation-based definition of security. In particular, we are interested in security for a large number of keys n and a small number of ciphertexts m. We define security through the pair of games in Figure 3. We say that Π_FE is (n, m, ε)-secure if there exists a polynomial-time simulator FE.Sim such that for every polynomial-time adversary A and every κ,

    | Pr[ E^real_{κ,n,m}(Π_FE, A) = 1 ] − Pr[ E^ideal_{κ,n,m}(Π_FE, A, FE.Sim) = 1 ] | ≤ ε(κ).

Our goal is to construct a functional encryption scheme that is (n, 2, 1/(300n^3))-secure and has short ciphertexts and keys, where n = n(κ) is a polynomial in the security parameter. Although it is not difficult to verify, in Section 7 we prove that the definition of security above implies the definition of two-index-hiding security that we use in Lemma 3.4.

E^real_{κ,n,m}(Π_FE, A):
    A outputs at most n functions f_1, ..., f_n and m inputs x_1, ..., x_m.
    Let FE.msk ←R FE.Setup(1^κ).
    For each i ∈ [n]: FE.sk_{f_i} ←R FE.KeyGen(FE.msk, f_i).
    For each j ∈ [m]: c_j ←R FE.Enc(FE.msk, x_j).
    A receives {FE.sk_{f_i}}_{i=1}^n and {c_j}_{j=1}^m and outputs b.

E^ideal_{κ,n,m}(Π_FE, A, FE.Sim):
    A outputs at most n functions f_1, ..., f_n and m inputs x_1, ..., x_m.
    ( {FE.sk_{f_i}}_{i=1}^n, {c_j}_{j=1}^m ) ←R FE.Sim( { f_i, {f_i(x_j)}_{j=1}^m }_{i=1}^n ).
    A receives {FE.sk_{f_i}}_{i=1}^n and {c_j}_{j=1}^m and outputs b.

Figure 3: Security of Functional Encryption

4.2.1 Function-Hiding Functional Encryption

As an ingredient in our construction we also need a notion of function-hiding security for a (one-message) functional encryption scheme. Since we will only need this definition for a single message, we specialize to that case in order to simplify notation. We say that Π_FE is function-hiding (n, 1, ε)-secure if there exists a polynomial-time simulator FE.Sim such that for every polynomial-time adversary A and every κ,

    | Pr[ Ē^real_{κ,n,1}(Π_FE, A) = 1 ] − Pr[ Ē^ideal_{κ,n,1}(Π_FE, A, FE.Sim) = 1 ] | ≤ ε(κ)

where Ē^real_{κ,n,1}, Ē^ideal_{κ,n,1} are the same experiments as E^real_{κ,n,1}, E^ideal_{κ,n,1} except that the simulator in Ē^ideal_{κ,n,1} is not given the functions f_i as input. Namely, in Ē^ideal_{κ,n,1}:

    ( {FE.sk_{f_i}}_{i=1}^n, c ) ←R FE.Sim( {f_i(x)}_{i=1}^n )

A main ingredient in the construction will be a function-hiding functional encryption scheme that is (n, 1, negl(κ))-secure. The construction is a small variant of the constructions of Sahai and Seyalioglu [SS10] and Gorbunov, Vaikuntanathan, and Wee [GVW12].

Theorem 4.2 (Variant of [SS10, GVW12]). Let F be a family of functions such that a universal function for F has a decomposable randomized encoding of length L. That is, the function U(f, x) = f(x) has a DRE of length L. If one-way functions exist, then for any polynomial n = n(κ) there is an (n, 1, negl(κ))-function-hiding-secure functional encryption scheme Π with key length L and ciphertext length O(κL).
Although this theorem follows in a relatively straightforward way from the techniques of [SS10, GVW12], we give a proof in Section 5. The main novelty in the theorem is to verify that, in settings where we have a very short DRE (shorter than the security parameter κ), we can make the secret keys have length proportional to the length of the DRE rather than proportional to the security parameter.
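As a concrete toy instance of the definitions in Section 4.1 (not a construction used in this paper), the parity function f(x) = x_1 ⊕ ... ⊕ x_L has a perfect decomposable randomized encoding: mask each input bit with a random bit, where the masks XOR to 0. The sketch below checks decomposability, correctness, and the existence of a simulator that sees only f(x).

```python
import secrets
from functools import reduce

L = 4  # toy input length

def sample_R():
    # L random mask bits constrained to XOR to 0
    r = [secrets.randbits(1) for _ in range(L)]
    r[-1] ^= reduce(lambda a, b: a ^ b, r)
    return r

def encode(x_bits, R):
    # decomposable: the i-th label depends only on x_i (and the shared randomness R)
    return [x ^ r for x, r in zip(x_bits, R)]

def decode(labels):
    # XOR of labels = XOR of x-bits ⊕ XOR of masks = f(x) ⊕ 0
    return reduce(lambda a, b: a ^ b, labels)

def sim(v):
    # given only f(x) = v: labels are uniform subject to XOR = v,
    # which is exactly the distribution of real labels
    a = [secrets.randbits(1) for _ in range(L - 1)]
    return a + [v ^ reduce(lambda s, b: s ^ b, a, 0)]

x = [1, 0, 1, 1]
assert decode(encode(x, sample_R())) == 1  # parity of x
assert decode(sim(1)) == 1                 # simulated labels decode consistently
```

This matches the template of Theorem 4.1 in spirit only; the encodings used later (for comparison functions) are more involved.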

5 One-Message Functional Encryption

We now construct Π_OFE = (OFE.Setup, OFE.KeyGen, OFE.Enc, OFE.Dec): a function-hiding (n, 1, negl(κ))-secure functional encryption scheme for functions with an (information-theoretic) decomposable randomized encoding DRE. The construction is essentially the same as the (public-key) variants given by [SS10, GVW12], except that we consider information-theoretic randomized encodings instead of computationally secure ones, and instead of encrypting the labels under a public-key encryption scheme, we take advantage of the private-key setting to use an encryption method that produces ciphertexts whose size equals that of the message when the message is shorter than the security parameter. By encrypting the labels of a short randomized encoding, this allows us to argue that the keys of our scheme are small. To perform this encryption, we use a PRF evaluated on known indices to mask each short label of DRE.

Let n = poly(κ) denote the number of users for the scheme. We assume for simplicity that lg n is an integer. Our construction will rely on the following primitives:

A pseudorandom function family {PRF_sk}_{sk ∈ {0,1}^κ} with PRF_sk : {0,1}^{lg n} → {0,1}^*, whose output length matches the label length of the randomized encoding below.

A decomposable randomized encoding of f_y(x) = I{x ≥ y}, where x, y ∈ {0,1}^{lg n}.

Setup(1^κ):
    Choose seeds sk_{k,b} ←R {0,1}^κ for k ∈ [lg n], b ∈ {0,1}.
    Choose randomness R_j for the randomized encoding for each j ∈ [n].
    Choose x ←R {0,1}^{lg n}.
    Define
        K^{(j)}_{k,b} := PRF_{sk_{k,b}}(j) ⊕ F̃_k(f_j, x_k ⊕ b, R_j)
    for k ∈ {1, ..., lg n} and b ∈ {0,1}.
    Let each user's secret key be sk_j = (j, {K^{(j)}_{k,b}}).
    Let the master key be msk = {sk_{k,b}}.

Enc(i, msk = {sk_{k,b}}):
    Output c_i = (y := x ⊕ i, sk_{1,y_1}, ..., sk_{lg n, y_{lg n}})

Dec(c_i, sk_j):
    Output DRE.Decode( PRF_{sk_{1,y_1}}(j) ⊕ K^{(j)}_{1,y_1}, ..., PRF_{sk_{lg n, y_{lg n}}}(j) ⊕ K^{(j)}_{lg n, y_{lg n}} )

Figure 4: Our scheme Π_OFE.
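The mask-cancellation step in Figure 4 can be checked with a toy sketch. The stand-ins below are illustrative only, not the paper's instantiation: SHA-256 plays the role of the PRF, and the DRE is replaced by degenerate 1-bit labels F̃_k(f_j, b, R_j) = b ⊕ R_j[k], so we can verify that a user j who decrypts c_i recovers exactly the labels for the bits of i.

```python
import hashlib
import secrets

LGN = 3        # toy lg n
N = 2 ** LGN   # number of users

def prf(seed: bytes, j: int) -> int:
    # 1-bit "PRF" output (SHA-256 stand-in), matching the 1-bit toy labels
    return hashlib.sha256(seed + j.to_bytes(4, "big")).digest()[0] & 1

def bits(v: int):
    # big-endian bit list of v, length LGN
    return [(v >> (LGN - 1 - k)) & 1 for k in range(LGN)]

def setup():
    sk = {(k, b): secrets.token_bytes(16) for k in range(LGN) for b in (0, 1)}
    R = [[secrets.randbits(1) for _ in range(LGN)] for _ in range(N)]
    x = secrets.randbits(LGN)
    keys = {}
    for j in range(N):
        K = {}
        for k in range(LGN):
            for b in (0, 1):
                label = bits(x)[k] ^ b ^ R[j][k]       # toy F̃_k(f_j, x_k ⊕ b, R_j)
                K[(k, b)] = prf(sk[(k, b)], j) ^ label  # PRF-masked label
        keys[j] = K
    return sk, keys, x, R

def enc(sk, x, i):
    y = x ^ i
    return y, [sk[(k, bits(y)[k])] for k in range(LGN)]

def dec_labels(c, j, K):
    # unmasking: PRF_{sk_{k,y_k}}(j) ⊕ K^{(j)}_{k,y_k} = F̃_k(f_j, y_k ⊕ x_k, R_j)
    y, seeds = c
    return [prf(seeds[k], j) ^ K[(k, bits(y)[k])] for k in range(LGN)]

sk, keys, x, R = setup()
i, j = 5, 2
c = enc(sk, x, i)
# since y = x ⊕ i, we have y_k ⊕ x_k = i_k: user j recovers the labels of i's bits
assert dec_labels(c, j, keys[j]) == [ik ^ rk for ik, rk in zip(bits(i), R[j])]
```

Note that only the seeds for the "on" bits of y ship in the ciphertext, which is why the off-bit labels stay hidden; the real scheme's security proof (below) formalizes this via a hybrid over those seeds.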
5.1 Proof of Correctness

Dec(c_i, sk_j) = DRE.Decode( PRF_{sk_{1,y_1}}(j) ⊕ K^{(j)}_{1,y_1}, ..., PRF_{sk_{lg n,y_{lg n}}}(j) ⊕ K^{(j)}_{lg n,y_{lg n}} )
             = DRE.Decode( F̃_1(f_j, y_1 ⊕ x_1, R_j), ..., F̃_{lg n}(f_j, y_{lg n} ⊕ x_{lg n}, R_j) )
             = DRE.Decode( F̃_1(f_j, i_1, R_j), ..., F̃_{lg n}(f_j, i_{lg n}, R_j) )
             = f_j(i)

where the last step uses the (perfect) correctness of the randomized encoding scheme. So:

    Pr_{sk←Setup(κ), c_i←Enc(msk,i)}[ Dec(c_i, sk_j) = I{i ≥ j} ] = Pr_R[ DRE.Decode( F̃_1(f_j, i_1, R_j), ..., F̃_{lg n}(f_j, i_{lg n}, R_j) ) = f_j(i) ] = 1.

5.2 Proof of Security

Lemma 5.1.

    | Pr[ E^real_{κ,n,m}(Π_OFE, A) = 1 ] − Pr[ E^ideal_{κ,n,m}(Π_OFE, A, FE.Sim) = 1 ] | ≤ ε(κ)

Proof. Consider the hybrid scheme Π'_OFE defined in Figure 5, which uses a truly random string instead of the output of a PRF for the encrypted labels corresponding to the off-bits of y = x ⊕ i. Note that this scheme is only useful in the non-adaptive security game, where i is known at the time of Setup (since it is needed to compute y). We can easily show that the scheme is indistinguishable from the original scheme in the non-adaptive security game.

Setup(1^κ):
    Choose seeds sk_k ←R {0,1}^κ for k ∈ [lg n].
    Choose randomness R_j for the randomized encoding for each j ∈ [n].
    Choose x ←R {0,1}^{lg n}. Let y = x ⊕ i.
    Define
        K^{(j)}_{k,y_k} := PRF_{sk_k}(j) ⊕ F̃_k(f_j, i_k, R_j)
    and choose K^{(j)}_{k,1−y_k} uniformly at random, for k ∈ {1, ..., lg n}.
    Let each user's secret key be sk_j = (j, {K^{(j)}_{k,b}}).
    Let the master key be msk = {sk_k}.

Enc(i, msk = {sk_k}):
    Output c_i = (y = x ⊕ i, sk_1, ..., sk_{lg n})

Dec(c_i, sk_j):
    Output DRE.Decode( PRF_{sk_1}(j) ⊕ K^{(j)}_{1,y_1}, ..., PRF_{sk_{lg n}}(j) ⊕ K^{(j)}_{lg n,y_{lg n}} )

Figure 5: Hybrid scheme Π'_OFE.

Lemma 5.2.

    | Pr[ E^real_{κ,n,m}(Π_OFE, A) = 1 ] − Pr[ E^real_{κ,n,m}(Π'_OFE, A) = 1 ] | ≤ lg n · PRF.Adv(κ) = ε(κ)

Proof. Follows easily by the security of the PRF, applied lg n times in a hybrid argument, once for each seed corresponding to an off-bit of y.

In Figure 6 we define a simulator for the ideal setting that is indistinguishable from our hybrid scheme Π'_OFE. The simulator uses the simulator for the decomposable randomized encoding scheme to generate the labels to be encrypted, using only knowledge of the functions' output values on the input.

OFE.Sim:
    Input: 1^κ, and the evaluations of the n (unknown) functions on the unknown input: {y_j}_{j=1}^n.
    Let DRE.Sim be the simulator for the information-theoretic randomized encoding DRE.
    // Generate the ciphertext
    Choose y ←R {0,1}^{lg n}.
    For k ∈ [lg n]: choose sk_k ←R {0,1}^κ.
    Let c = (y, sk_1, ..., sk_{lg n}).
    // Generate keys for j ∈ [n] using DRE.Sim
    For j ∈ [n]:
        Generate ( F̃^{(j)}_1, ..., F̃^{(j)}_{lg n} ) ←R DRE.Sim(y_j).
        Let K^{(j)}_{k,y_k} = PRF_{sk_k}(j) ⊕ F̃^{(j)}_k for k ∈ [lg n].
        Choose K^{(j)}_{k,1−y_k} uniformly at random for k ∈ [lg n].
        Let OFE.sk_j = { K^{(j)}_{k,b} }_{k ∈ [lg n], b ∈ {0,1}}.
    // Output the n simulated secret keys and the simulated ciphertext
    Output: {OFE.sk_j}_{j=1}^n, c.

Figure 6: The simulator OFE.Sim for Π'_OFE.

Lemma 5.3.

    | Pr[ E^real_{κ,n,m}(Π'_OFE, A) = 1 ] − Pr[ E^ideal_{κ,n,m}(Π_OFE, A, FE.Sim) = 1 ] | = 0

Proof. Follows easily by the information-theoretic security of the randomized encoding.

Adding the statements of Lemma 5.2 and Lemma 5.3 gives the original statement of security. So Π_OFE is a function-hiding (n, 1, negl(κ))-secure private-key functional encryption scheme, with security based on the hardness of a PRF (which can be instantiated using any one-way function) and the existence of an information-theoretic randomized encoding of length L for the family of comparison functions {f_j : {0,1}^{lg n} → {0,1}}. Furthermore, note that the length of each ciphertext is lg n · κ + lg n = O(κL), and the length of each key is L, satisfying the conditions of Theorem 4.2.

6 A Two-Message Functional Encryption Scheme for Comparison

We now use the one-message functional encryption scheme Π_OFE described in Section 5 to construct a functional encryption scheme Π_FE that is (n, 2, 1/(300n^3))-secure for the family of comparison functions. For any y ∈ {0,1}^l, let f_y(x) = I{x ≥ y}, where the comparison operation treats x, y as numbers written in binary. We define the family of functions

    F_comp := { f_y : {0,1}^l → {0,1} | y ∈ {0,1}^l }

In our application we need x, y ∈ {0, 1, ..., n}, so we will set l = ⌈log₂(n + 1)⌉ = O(log n). One important property of our construction is that the user key length will be fairly small as a function of l, so that when l = O(log n), the overall length of user keys will be n^{o(1)} (in fact, nearly polylogarithmic in n).

6.1 Construction

Our construction will be for a generic family of functions F, and we will only specialize the construction to F_comp when setting the parameters and bounding the length of the scheme. Before giving the formal construction, let's gather some notation and ingredients. Note that we will introduce some additional parameters that are necessary to specify the scheme, but we will leave many of these parameters to be determined later.

Let n be a parameter bounding the number of user keys in the scheme, and let F be a finite field whose size we will determine later. Let F = {f : {0,1}^l → {0,1}} be a family of functions. For each function f ∈ F we define an associated polynomial f̃ : F^{l+1} → F as follows:

1. Let f̂ : F^l → F be a polynomial computing f.
2. Define f̃ : F^{l+1} → F to be f̃(x_1, ..., x_l, z) = f̂(x_1, ..., x_l) + z.

Let D and S be such that for every f ∈ F, the associated polynomial f̃ has degree at most D and can be computed by an arithmetic circuit of size at most S. These degree and size parameters will depend on F. Let P_{D,S,F} be the set of all polynomials p over F of degree at most D and size at most S.

Let Π_OFE = (OFE.Setup, OFE.KeyGen, OFE.Enc, OFE.Dec) be an (n, 1, negl(κ))-function-hiding-secure functional encryption scheme (i.e., secure for n keys and one message) for the family of polynomials P_{D,S,F}.

We're now ready to describe the construction of the two-message functional encryption scheme Π_FE. The scheme is specified in Figure 7.

Correctness of Π_FE. Before going on to prove security, we will verify that encryption and decryption are correct for our scheme.
Fix any f_i ∈ F and let f̃_i be the associated polynomial, and fix any input x ∈ F^l. Let r_i : F → F be the degree-RD polynomial chosen by FE.KeyGen on input f_i, and let f̃_{i,r_i}(·, t) be the function used to generate the key OFE.sk_{i,t}. Let q : F → F^l be the degree-R polynomial map chosen by FE.Enc on input x. Observe that, by correctness of Π_OFE, when we run FE.Dec we will have

    p(t) = OFE.Dec(OFE.sk_{i,t}, c_t) = f̃_{i,r_i}(q(t), t) = f̃_i(q(t)) + r_i(t).

Now, consider the polynomial f̃_{i,q,r_i} : F → F defined by f̃_{i,q,r_i}(t) = f̃_i(q(t)) + r_i(t). Since f̃_i has degree at most D, q has degree at most R, and r_i has degree at most RD, the degree of f̃_{i,q,r_i} is at most RD. Since U = RD + 1, the polynomial p agrees with f̃_{i,q,r_i} at RD + 1 distinct points, and thus p ≡ f̃_{i,q,r_i}. In particular, p(0) = f̃_{i,q,r_i}(0). Since we chose r_i and q such that r_i(0) = 0 and q(0) = x, we have

    p(0) = f̃_i(q(0)) + r_i(0) = f̃_i(x).

This completes the proof of correctness.
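The interpolation argument above can be checked numerically. The sketch below uses small illustrative parameters (a field of size 97, a made-up degree-2 polynomial f̃, and R = 3) rather than anything prescribed by the scheme: it evaluates p(t) = f̃(q(t)) + r(t) at U = RD + 1 nonzero points and recovers f̃(x) by Lagrange interpolation at 0.

```python
import random

P = 97  # toy prime field F_P

def rand_poly(deg, const):
    # random univariate polynomial over F_P with constant term `const`
    return [const] + [random.randrange(P) for _ in range(deg)]

def ev(coeffs, t):
    return sum(c * pow(t, k, P) for k, c in enumerate(coeffs)) % P

def lagrange_at_zero(pts):
    # interpolate p(0) from U points (t, p(t)), using Fermat inverses mod P
    total = 0
    for ti, vi in pts:
        num, den = 1, 1
        for tj, _ in pts:
            if tj != ti:
                num = num * (-tj) % P
                den = den * (ti - tj) % P
        total = (total + vi * num * pow(den, P - 2, P)) % P
    return total

# illustrative f̃(x1, x2) = x1*x2 + 3*x1 + 5, so D = 2
f = lambda x: (x[0] * x[1] + 3 * x[0] + 5) % P
D, R = 2, 3
RD, U = D * R, D * R + 1

x = [10, 20]                          # the encrypted input
r = rand_poly(RD, 0)                  # blinding polynomial with r(0) = 0
q = [rand_poly(R, xi) for xi in x]    # coordinate-wise map with q(0) = x

ts = random.sample(range(1, P), U)    # U = RD + 1 distinct nonzero points
pts = [(t, (f([ev(qc, t) for qc in q]) + ev(r, t)) % P) for t in ts]
assert lagrange_at_zero(pts) == f(x)  # p(0) = f̃(x), as argued above
```

Any U of the T evaluation points would do, which is exactly why FE.Enc can get away with publishing ciphertext components for only a random size-U subset of [T].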

Global Parameters: a family of functions F = {f : {0,1}^l → {0,1}}, a finite field F, a bound D on the degree of the polynomials f̃ computing functions in F, a parameter R ∈ N, and derived parameters U = RD + 1 and T = U^2.

FE.Setup(1^κ):
    Generate T independent master keys OFE.msk_t ←R OFE.Setup(1^κ).
    Output FE.msk = {OFE.msk_t}_{t=1}^T.

FE.KeyGen(FE.msk, f_i):    // To aid in our proofs later, we index invocations of FE.KeyGen with i
    Let f̃_i be a multivariate polynomial computing f_i.
    Choose a random polynomial r_i : F → F of degree RD such that r_i(0) = 0.
    For every t ∈ [T]:
        Define the function f̃_{i,r_i}(x, t) = f̃_i(x) + r_i(t).
        Let OFE.sk_{i,t} ←R OFE.KeyGen(OFE.msk_t, f̃_{i,r_i}(·, t)).
    Output FE.sk_i = {OFE.sk_{i,t}}_{t=1}^T.

FE.Enc(FE.msk, x):
    Choose a random polynomial map q : F → F^l of degree R such that q(0) = x.
    Choose a random set U ⊆ [T] of size U.
    For every t ∈ U: let c_t ←R OFE.Enc(OFE.msk_t, q(t)).
    Output c = {c_t}_{t∈U}.

FE.Dec(FE.sk_i, c):
    Let FE.sk_i = {OFE.sk_{i,t}}_{t=1}^T and c = {c_t}_{t∈U}.
    For every t ∈ U, let p(t) = OFE.Dec(OFE.sk_{i,t}, c_t).
    Extend the polynomial p to all of F by interpolation.
    Output p(0).

Figure 7: A Functional Encryption Scheme for 2 Messages

6.2 Security for Two Messages

Theorem 6.1. For every polynomial n = n(κ), Π_FE is (n, 2, δ)-secure for δ = T · negl(κ) + 2^{−Ω(R)}.

First we describe at a high level how to simulate. To do so, it will be useful to first introduce some terminology. Recall that FE.Setup instantiates T independent copies of the one-message scheme Π_OFE. We refer to each instantiation as a component. Thus, when we talk about generating a secret key for a function f_i, we will talk about generating each of the T components of that key, and similarly when we talk about generating a ciphertext for an input x_b, we will talk about generating each of the U components of that ciphertext. Thus the simulator has to generate a total of nT components of keys and 2U components of ciphertexts.
The simulator will consider several types of components:

Components t ∈ U_1 ∩ U_2, where U_1, U_2 are the random sets of components chosen by the encryption algorithm for the two inputs, respectively. The adversary obtains two ciphertexts for these components, so we cannot use the simulator for the one-message scheme. Thus, for these components we simply choose uniformly random values for all the keys and ciphertexts and use the real one-message scheme.


More information

Indistinguishability Obfuscation for All Circuits from Secret-Key Functional Encryption

Indistinguishability Obfuscation for All Circuits from Secret-Key Functional Encryption Indistinguishability Obfuscation for All Circuits from Secret-Key Functional Encryption Fuyuki Kitagawa 1 Ryo Nishimaki 2 Keisuke Tanaka 1 1 Tokyo Institute of Technology, Japan {kitagaw1,keisuke}@is.titech.ac.jp

More information

Stronger Security for Reusable Garbled Circuits, General Definitions and Attacks

Stronger Security for Reusable Garbled Circuits, General Definitions and Attacks Stronger Security for Reusable Garbled Circuits, General Definitions and Attacks Shweta Agrawal Abstract We construct a functional encryption scheme for circuits which simultaneously achieves and improves

More information

Multi-Input Functional Encryption for Unbounded Arity Functions

Multi-Input Functional Encryption for Unbounded Arity Functions Multi-Input Functional Encryption for Unbounded Arity Functions Saikrishna Badrinarayanan, Divya Gupta, Abhishek Jain, and Amit Sahai Abstract. The notion of multi-input functional encryption (MI-FE) was

More information

From FE Combiners to Secure MPC and Back

From FE Combiners to Secure MPC and Back From FE Combiners to Secure MPC and Back Prabhanjan Ananth Saikrishna Badrinarayanan Aayush Jain Nathan Manohar Amit Sahai Abstract Functional encryption (FE) has incredible applications towards computing

More information

Modern symmetric-key Encryption

Modern symmetric-key Encryption Modern symmetric-key Encryption Citation I would like to thank Claude Crepeau for allowing me to use his slide from his crypto course to mount my course. Some of these slides are taken directly from his

More information

Lecture 2: Program Obfuscation - II April 1, 2009

Lecture 2: Program Obfuscation - II April 1, 2009 Advanced Topics in Cryptography Lecture 2: Program Obfuscation - II April 1, 2009 Lecturer: S. Goldwasser, M. Naor Scribe by: R. Marianer, R. Rothblum Updated: May 3, 2009 1 Introduction Barak et-al[1]

More information

Linear Multi-Prover Interactive Proofs

Linear Multi-Prover Interactive Proofs Quasi-Optimal SNARGs via Linear Multi-Prover Interactive Proofs Dan Boneh, Yuval Ishai, Amit Sahai, and David J. Wu Interactive Arguments for NP L C = x C x, w = 1 for some w P(x, w) V(x) accept / reject

More information

Lecture 4 Chiu Yuen Koo Nikolai Yakovenko. 1 Summary. 2 Hybrid Encryption. CMSC 858K Advanced Topics in Cryptography February 5, 2004

Lecture 4 Chiu Yuen Koo Nikolai Yakovenko. 1 Summary. 2 Hybrid Encryption. CMSC 858K Advanced Topics in Cryptography February 5, 2004 CMSC 858K Advanced Topics in Cryptography February 5, 2004 Lecturer: Jonathan Katz Lecture 4 Scribe(s): Chiu Yuen Koo Nikolai Yakovenko Jeffrey Blank 1 Summary The focus of this lecture is efficient public-key

More information

SYMMETRIC ENCRYPTION. Mihir Bellare UCSD 1

SYMMETRIC ENCRYPTION. Mihir Bellare UCSD 1 SYMMETRIC ENCRYPTION Mihir Bellare UCSD 1 Syntax A symmetric encryption scheme SE = (K, E, D) consists of three algorithms: K and E may be randomized, but D must be deterministic. Mihir Bellare UCSD 2

More information

Identity-based encryption

Identity-based encryption Identity-based encryption Michel Abdalla ENS & CNRS MPRI - Course 2-12-1 Michel Abdalla (ENS & CNRS) Identity-based encryption 1 / 43 Identity-based encryption (IBE) Goal: Allow senders to encrypt messages

More information

Probabilistically Checkable Arguments

Probabilistically Checkable Arguments Probabilistically Checkable Arguments Yael Tauman Kalai Microsoft Research yael@microsoft.com Ran Raz Weizmann Institute of Science ran.raz@weizmann.ac.il Abstract We give a general reduction that converts

More information

FUNCTIONAL SIGNATURES AND PSEUDORANDOM FUNCTIONS. Elette Boyle Shafi Goldwasser Ioana Ivan

FUNCTIONAL SIGNATURES AND PSEUDORANDOM FUNCTIONS. Elette Boyle Shafi Goldwasser Ioana Ivan FUNCTIONAL SIGNATURES AND PSEUDORANDOM FUNCTIONS Elette Boyle Shafi Goldwasser Ioana Ivan Traditional Paradigm: All or Nothing Encryption [DH76] Given SK, can decrypt. Otherwise, can t distinguish encryptions

More information

Hierarchical identity-based encryption

Hierarchical identity-based encryption Hierarchical identity-based encryption Michel Abdalla ENS & CNS September 26, 2011 MPI - Course 2-12-1 Lecture 3 - Part 1 Michel Abdalla (ENS & CNS) Hierarchical identity-based encryption September 26,

More information

6.892 Computing on Encrypted Data September 16, Lecture 2

6.892 Computing on Encrypted Data September 16, Lecture 2 6.89 Computing on Encrypted Data September 16, 013 Lecture Lecturer: Vinod Vaikuntanathan Scribe: Britt Cyr In this lecture, we will define the learning with errors (LWE) problem, show an euivalence between

More information

Parallel Coin-Tossing and Constant-Round Secure Two-Party Computation

Parallel Coin-Tossing and Constant-Round Secure Two-Party Computation Parallel Coin-Tossing and Constant-Round Secure Two-Party Computation Yehuda Lindell Dept. of Computer Science and Applied Math. The Weizmann Institute of Science Rehovot 76100, Israel. lindell@wisdom.weizmann.ac.il

More information

Fully Homomorphic Encryption

Fully Homomorphic Encryption Fully Homomorphic Encryption Boaz Barak February 9, 2011 Achieving fully homomorphic encryption, under any kind of reasonable computational assumptions (and under any reasonable definition of reasonable..),

More information

Lesson 8 : Key-Policy Attribute-Based Encryption and Public Key Encryption with Keyword Search

Lesson 8 : Key-Policy Attribute-Based Encryption and Public Key Encryption with Keyword Search Lesson 8 : Key-Policy Attribute-Based Encryption and Public Key Encryption with Keyword Search November 3, 2014 teacher : Benoît Libert scribe : Florent Bréhard Key-Policy Attribute-Based Encryption (KP-ABE)

More information

Question 2.1. Show that. is non-negligible. 2. Since. is non-negligible so is μ n +

Question 2.1. Show that. is non-negligible. 2. Since. is non-negligible so is μ n + Homework #2 Question 2.1 Show that 1 p n + μ n is non-negligible 1. μ n + 1 p n > 1 p n 2. Since 1 p n is non-negligible so is μ n + 1 p n Question 2.1 Show that 1 p n - μ n is non-negligible 1. μ n O(

More information

From Minicrypt to Obfustopia via Private-Key Functional Encryption

From Minicrypt to Obfustopia via Private-Key Functional Encryption From Minicrypt to Obfustopia via Private-Key Functional Encryption Ilan Komargodski Weizmann Institute of Science Joint work with Gil Segev (Hebrew University) Functional Encryption [Sahai-Waters 05] Enc

More information

Lattice Cryptography

Lattice Cryptography CSE 06A: Lattice Algorithms and Applications Winter 01 Instructor: Daniele Micciancio Lattice Cryptography UCSD CSE Many problems on point lattices are computationally hard. One of the most important hard

More information

COS598D Lecture 3 Pseudorandom generators from one-way functions

COS598D Lecture 3 Pseudorandom generators from one-way functions COS598D Lecture 3 Pseudorandom generators from one-way functions Scribe: Moritz Hardt, Srdjan Krstic February 22, 2008 In this lecture we prove the existence of pseudorandom-generators assuming that oneway

More information

Approximate and Probabilistic Differential Privacy Definitions

Approximate and Probabilistic Differential Privacy Definitions pproximate and Probabilistic Differential Privacy Definitions Sebastian Meiser University College London, United Kingdom, e-mail: s.meiser@ucl.ac.uk July 20, 208 bstract This technical report discusses

More information

Lattice Cryptography

Lattice Cryptography CSE 206A: Lattice Algorithms and Applications Winter 2016 Lattice Cryptography Instructor: Daniele Micciancio UCSD CSE Lattice cryptography studies the construction of cryptographic functions whose security

More information

Lecture 11: Hash Functions, Merkle-Damgaard, Random Oracle

Lecture 11: Hash Functions, Merkle-Damgaard, Random Oracle CS 7880 Graduate Cryptography October 20, 2015 Lecture 11: Hash Functions, Merkle-Damgaard, Random Oracle Lecturer: Daniel Wichs Scribe: Tanay Mehta 1 Topics Covered Review Collision-Resistant Hash Functions

More information

Semantic Security and Indistinguishability in the Quantum World

Semantic Security and Indistinguishability in the Quantum World Semantic Security and Indistinguishability in the Quantum World Tommaso Gagliardoni 1, Andreas Hülsing 2, Christian Schaffner 3 1 IBM Research, Swiss; TU Darmstadt, Germany 2 TU Eindhoven, The Netherlands

More information

Constraining Pseudorandom Functions Privately

Constraining Pseudorandom Functions Privately Constraining Pseudorandom Functions Privately Dan Boneh, Kevin Lewi, and David J. Wu Stanford University {dabo,klewi,dwu4}@cs.stanford.edu Abstract In a constrained pseudorandom function (PRF), the master

More information

On the Achievability of Simulation-Based Security for Functional Encryption

On the Achievability of Simulation-Based Security for Functional Encryption On the Achievability of Simulation-Based Security for Functional Encryption Angelo De Caro 1, Vincenzo Iovino 2, Abhishek Jain 3, Adam O Neill 4, Omer Paneth 5, and Giuseppe Persiano 6 1 IBM Research Zurich,

More information

G Advanced Cryptography April 10th, Lecture 11

G Advanced Cryptography April 10th, Lecture 11 G.30-001 Advanced Cryptography April 10th, 007 Lecturer: Victor Shoup Lecture 11 Scribe: Kristiyan Haralambiev We continue the discussion of public key encryption. Last time, we studied Hash Proof Systems

More information

Point Obfuscation and 3-round Zero-Knowledge

Point Obfuscation and 3-round Zero-Knowledge Point Obfuscation and 3-round Zero-Knowledge Nir Bitansky and Omer Paneth Tel Aviv University and Boston University September 21, 2011 Abstract We construct 3-round proofs and arguments with negligible

More information

Lecture 1: Perfect Secrecy and Statistical Authentication. 2 Introduction - Historical vs Modern Cryptography

Lecture 1: Perfect Secrecy and Statistical Authentication. 2 Introduction - Historical vs Modern Cryptography CS 7880 Graduate Cryptography September 10, 2015 Lecture 1: Perfect Secrecy and Statistical Authentication Lecturer: Daniel Wichs Scribe: Matthew Dippel 1 Topic Covered Definition of perfect secrecy One-time

More information

Inaccessible Entropy and its Applications. 1 Review: Psedorandom Generators from One-Way Functions

Inaccessible Entropy and its Applications. 1 Review: Psedorandom Generators from One-Way Functions Columbia University - Crypto Reading Group Apr 27, 2011 Inaccessible Entropy and its Applications Igor Carboni Oliveira We summarize the constructions of PRGs from OWFs discussed so far and introduce the

More information

Fully Homomorphic Encryption from LWE

Fully Homomorphic Encryption from LWE Fully Homomorphic Encryption from LWE Based on joint works with: Zvika Brakerski (Stanford) Vinod Vaikuntanathan (University of Toronto) Craig Gentry (IBM) Post-Quantum Webinar, November 2011 Outsourcing

More information

Lecture 22. We first consider some constructions of standard commitment schemes. 2.1 Constructions Based on One-Way (Trapdoor) Permutations

Lecture 22. We first consider some constructions of standard commitment schemes. 2.1 Constructions Based on One-Way (Trapdoor) Permutations CMSC 858K Advanced Topics in Cryptography April 20, 2004 Lecturer: Jonathan Katz Lecture 22 Scribe(s): agaraj Anthapadmanabhan, Ji Sun Shin 1 Introduction to These otes In the previous lectures, we saw

More information

CSA E0 235: Cryptography March 16, (Extra) Lecture 3

CSA E0 235: Cryptography March 16, (Extra) Lecture 3 CSA E0 235: Cryptography March 16, 2015 Instructor: Arpita Patra (Extra) Lecture 3 Submitted by: Ajith S 1 Chosen Plaintext Attack A chosen-plaintext attack (CPA) is an attack model for cryptanalysis which

More information

Lecture 18 - Secret Sharing, Visual Cryptography, Distributed Signatures

Lecture 18 - Secret Sharing, Visual Cryptography, Distributed Signatures Lecture 18 - Secret Sharing, Visual Cryptography, Distributed Signatures Boaz Barak November 27, 2007 Quick review of homework 7 Existence of a CPA-secure public key encryption scheme such that oracle

More information

Lecture 10 - MAC s continued, hash & MAC

Lecture 10 - MAC s continued, hash & MAC Lecture 10 - MAC s continued, hash & MAC Boaz Barak March 3, 2010 Reading: Boneh-Shoup chapters 7,8 The field GF(2 n ). A field F is a set with a multiplication ( ) and addition operations that satisfy

More information

5 Pseudorandom Generators

5 Pseudorandom Generators 5 Pseudorandom Generators We have already seen that randomness is essential for cryptographic security. Following Kerckhoff s principle, we assume that an adversary knows everything about our cryptographic

More information

Extending Oblivious Transfers Efficiently

Extending Oblivious Transfers Efficiently Extending Oblivious Transfers Efficiently Yuval Ishai Technion Joe Kilian Kobbi Nissim Erez Petrank NEC Microsoft Technion Motivation x y f(x,y) How (in)efficient is generic secure computation? myth don

More information

Searchable encryption & Anonymous encryption

Searchable encryption & Anonymous encryption Searchable encryption & Anonymous encryption Michel Abdalla ENS & CNS February 17, 2014 MPI - Course 2-12-1 Michel Abdalla (ENS & CNS) Searchable encryption & Anonymous encryption February 17, 2014 1 /

More information

Adaptively Secure Puncturable Pseudorandom Functions in the Standard Model

Adaptively Secure Puncturable Pseudorandom Functions in the Standard Model Adaptively Secure Puncturable Pseudorandom Functions in the Standard Model Susan Hohenberger 1, Venkata Koppula 2, and Brent Waters 2 1 Johns Hopkins University, Baltimore, USA susan@cs.jhu.edu 2 University

More information

On Perfect and Adaptive Security in Exposure-Resilient Cryptography. Yevgeniy Dodis, New York University Amit Sahai, Princeton Adam Smith, MIT

On Perfect and Adaptive Security in Exposure-Resilient Cryptography. Yevgeniy Dodis, New York University Amit Sahai, Princeton Adam Smith, MIT On Perfect and Adaptive Security in Exposure-Resilient Cryptography Yevgeniy Dodis, New York University Amit Sahai, Princeton Adam Smith, MIT 1 Problem: Partial Key Exposure Alice needs to store a cryptographic

More information

Notes for Lecture 17

Notes for Lecture 17 U.C. Berkeley CS276: Cryptography Handout N17 Luca Trevisan March 17, 2009 Notes for Lecture 17 Scribed by Matt Finifter, posted April 8, 2009 Summary Today we begin to talk about public-key cryptography,

More information

Efficient MPC Oblivious Transfer and Oblivious Linear Evaluation aka How to Multiply

Efficient MPC Oblivious Transfer and Oblivious Linear Evaluation aka How to Multiply CIS 2018 Efficient MPC Oblivious Transfer and Oblivious Linear Evaluation aka How to Multiply Claudio Orlandi, Aarhus University Circuit Evaluation 3) Multiplication? How to compute [z]=[xy]? Alice, Bob

More information

2 Message authentication codes (MACs)

2 Message authentication codes (MACs) CS276: Cryptography October 1, 2015 Message Authentication Codes and CCA2 Instructor: Alessandro Chiesa Scribe: David Field 1 Previous lecture Last time we: Constructed a CPA-secure encryption scheme from

More information

On the Randomness Requirements for Privacy

On the Randomness Requirements for Privacy On the Randomness Requirements for Privacy by Carl Bosley A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy Department of Computer Science New York

More information

Cryptographic Multilinear Maps. Craig Gentry and Shai Halevi

Cryptographic Multilinear Maps. Craig Gentry and Shai Halevi Cryptographic Multilinear Maps Craig Gentry and Shai Halevi China Summer School on Lattices and Cryptography, June 2014 Multilinear Maps (MMAPs) A Technical Tool A primitive for building applications,

More information

Barriers in Cryptography with Weak, Correlated and Leaky Sources

Barriers in Cryptography with Weak, Correlated and Leaky Sources Barriers in Cryptography with Weak, Correlated and Leaky Sources Daniel Wichs December 14, 2012 Abstract There has been much recent progress in constructing cryptosystems that maintain their security without

More information

Adaptively Simulation-Secure Attribute-Hiding Predicate Encryption

Adaptively Simulation-Secure Attribute-Hiding Predicate Encryption Adaptively Simulation-Secure Attribute-Hiding Predicate Encryption by Pratish Datta 1 joint work with Tatsuaki Okamoto 1 and Katsuyuki Takashima 2 1 NTT Secure Platform Laboratories 3-9-11 Midori-cho,

More information

Lecture 13: Private Key Encryption

Lecture 13: Private Key Encryption COM S 687 Introduction to Cryptography October 05, 2006 Instructor: Rafael Pass Lecture 13: Private Key Encryption Scribe: Ashwin Machanavajjhala Till this point in the course we have learnt how to define

More information

Lecture 4: Computationally secure cryptography

Lecture 4: Computationally secure cryptography CS 7880 Graduate Cryptography September 18, 017 Lecture 4: Computationally secure cryptography Lecturer: Daniel Wichs Scribe: Lucianna Kiffer 1 Topic Covered ε-security Computationally secure cryptography

More information

6.892 Computing on Encrypted Data October 28, Lecture 7

6.892 Computing on Encrypted Data October 28, Lecture 7 6.892 Computing on Encrypted Data October 28, 2013 Lecture 7 Lecturer: Vinod Vaikuntanathan Scribe: Prashant Vasudevan 1 Garbled Circuits Picking up from the previous lecture, we start by defining a garbling

More information

Faster Private Release of Marginals on Small Databases

Faster Private Release of Marginals on Small Databases Faster Private Release of Marginals on Small Databases Karthekeyan Chandrasekaran School of Engineering and Applied Sciences Harvard University karthe@seasharvardedu Justin Thaler School of Engineering

More information

COS433/Math 473: Cryptography. Mark Zhandry Princeton University Spring 2017

COS433/Math 473: Cryptography. Mark Zhandry Princeton University Spring 2017 COS433/Math 473: Cryptography Mark Zhandry Princeton University Spring 2017 Previously on COS 433 Takeaway: Crypto is Hard Designing crypto is hard, even experts get it wrong Just because I don t know

More information

6.080 / Great Ideas in Theoretical Computer Science Spring 2008

6.080 / Great Ideas in Theoretical Computer Science Spring 2008 MIT OpenCourseWare http://ocw.mit.edu 6.080 / 6.089 Great Ideas in Theoretical Computer Science Spring 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

More information