All-But-Many Lossy Trapdoor Functions


Dennis Hofheinz
December 30, 2011

Abstract

We put forward a generalization of lossy trapdoor functions (LTFs). Namely, all-but-many lossy trapdoor functions (ABM-LTFs) are LTFs that are parametrized with tags. Each tag can either be injective or lossy, which leads to an invertible or a lossy function. The interesting property of ABM-LTFs is that it is possible to generate an arbitrary number of lossy tags by means of a special trapdoor, while it is not feasible to produce lossy tags without this trapdoor. Our definition and construction can be seen as generalizations of all-but-one LTFs (due to Peikert and Waters) and all-but-N LTFs (due to Hemenway et al.). However, to achieve ABM-LTFs (and thus a number of lossy tags which is not bounded by any polynomial), we have to employ some new tricks. Concretely, we give two constructions that use disguised variants of the Waters, resp. Boneh-Boyen signature schemes to make the generation of lossy tags hard without the trapdoor. In a nutshell, lossy tags simply correspond to valid signatures. At the same time, tags are disguised (i.e., suitably blinded) to keep lossy tags indistinguishable from injective tags.

ABM-LTFs are useful in settings in which there are a polynomial number of adversarial challenges (e.g., challenge ciphertexts). Specifically, building on work by Hemenway et al., we show that ABM-LTFs can be used to achieve selective opening security against chosen-ciphertext attacks. One of our ABM-LTF constructions thus yields the first SO-CCA secure encryption scheme with compact ciphertexts (O(1) group elements) whose efficiency does not depend on the number of challenges. Our second ABM-LTF construction yields an IND-CCA (and in fact SO-CCA) secure encryption scheme whose security reduction is independent of the number of challenges and decryption queries.

Keywords: lossy trapdoor functions, public-key encryption, selective opening attacks.
Karlsruhe Institute of Technology, Karlsruhe, Germany, Dennis.Hofheinz@kit.edu

1 Introduction

Lossy trapdoor functions. Lossy trapdoor functions (LTFs) have been formalized by Peikert and Waters [37], in particular as a means to construct chosen-ciphertext (CCA) secure public-key encryption (PKE) schemes from lattice assumptions. In a nutshell, LTFs are functions that may be operated with an injective key (in which case a trapdoor allows one to efficiently invert the function), or with a lossy key (in which case the function is highly non-injective, i.e., loses information). The key point is that injective and lossy keys are computationally indistinguishable. Hence, in a security proof (say, for a PKE scheme), injective keys can be replaced with lossy keys without an adversary noticing. But once all keys are lossy, a ciphertext no longer contains any (significant) information about the encrypted message. There exist quite efficient constructions of LTFs based on a variety of assumptions (e.g., [37, 8, 12, 23]). Besides, LTFs have found various applications in public-key encryption [25, 8, 6, 5, 26, 21] and beyond [18, 37, 34] (where [18] implicitly uses LTFs to build commitment schemes).

LTFs with tags and all-but-one LTFs. In the context of CCA-secure PKE schemes, it is useful to have LTFs which are parametrized with a tag.^1 In all-but-one LTFs (ABO-LTFs), all tags are injective (i.e., lead to an injective function), except for one single lossy tag. During a proof of CCA security, this lossy tag will correspond to the (single) challenge ciphertext handed to the adversary. All decryption queries an adversary may make then correspond to injective tags, and so can be handled successfully. ABO-LTFs have been defined, constructed, and used as described by Peikert and Waters [37]. Note that ABO-LTFs are not immediately useful in settings in which there is more than one challenge ciphertext. One such setting is the selective opening (SO) security of PKE schemes ([6], see also [13, 20]).
Here, an adversary A is presented with a vector of ciphertexts (which correspond to eavesdropped ciphertexts), and gets to choose a subset of these ciphertexts. This subset is then opened for A; intuitively, this corresponds to a number of corruptions performed by A. A's goal then is to find out any nontrivial information about the unopened ciphertexts. It is currently not known how to reduce this multi-challenge setting to a single-challenge setting (such as IND-CCA security). In particular, ABO-LTFs are not immediately useful to achieve SO-CCA security. Namely, if we follow the described route to achieve security, we would have to replace all challenge ciphertexts (and only those) with lossy ones. However, an ABO-LTF has only one lossy tag, while there are many challenge ciphertexts.

All-but-N LTFs and their limitations. A natural solution has been given by Hemenway et al. [26], who define and construct all-but-N LTFs (ABN-LTFs). ABN-LTFs have exactly N lossy tags; all other tags are injective. This can be used to equip exactly the challenge ciphertexts with the lossy tags; all other ciphertexts then correspond to injective tags, and can thus be decrypted. Observe that ABN-LTFs encode the set of lossy tags in their key. (That is, a computationally unbounded adversary could always brute-force search which tags lead to a lossy function.) For instance, the construction of [26] embeds a polynomial in the key (hidden in the exponent of group elements) such that lossy tags are precisely the zeros of that polynomial. Hence, ABN-LTFs have a severe drawback: namely, the space complexity of the keys is at least linear in N. In particular, this affects the SO secure PKE schemes derived in [26]: there is no single scheme that would work in arbitrary protocols (i.e., for arbitrary N). Besides, their schemes quickly become inefficient as N gets larger, since each encryption requires evaluating a polynomial of degree N in the exponent.
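To see concretely why ABN-LTF keys scale with N, here is a minimal Python sketch of the polynomial-in-the-exponent idea (a toy group with hypothetical parameters, not the actual construction of [26]): the N lossy tags are the roots of a degree-N polynomial whose coefficients are published in the exponent, so the key consists of N + 1 group elements and each evaluation costs N + 1 exponentiations.

```python
# Toy sketch of the ABN-LTF key idea: lossy tags are the roots of a degree-N
# polynomial P(T) = prod_i (T - t_i) mod q, published "in the exponent" as
# g^{c_0}, ..., g^{c_N}.  Key size and evaluation cost are linear in N.
p, q, g = 23, 11, 2   # toy group: g = 2 generates an order-11 subgroup of Z_23^*

def mul_linear(coeffs, t):
    """Multiply a polynomial (coefficients mod q, lowest degree first) by (T - t)."""
    new = [0] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        new[i + 1] = (new[i + 1] + c) % q      # the  T * c_i T^i  part
        new[i] = (new[i] - t * c) % q          # the -t * c_i T^i  part
    return new

lossy_tags = [3, 5, 7]                         # N = 3 lossy tags baked into the key
coeffs = [1]
for t in lossy_tags:
    coeffs = mul_linear(coeffs, t)
key = [pow(g, c, p) for c in coeffs]           # N + 1 group elements

def evaluate(key, tag):
    """Compute g^{P(tag)}: the identity element exactly for the N lossy tags."""
    acc = 1
    for i, g_ci in enumerate(key):
        acc = acc * pow(g_ci, pow(tag, i, q), p) % p
    return acc

assert all(evaluate(key, t) == 1 for t in lossy_tags)      # lossy tags: g^0 = 1
assert all(evaluate(key, t) != 1 for t in [1, 2, 4, 9])    # other tags: nonzero P(t)
```

In the real scheme the roots stay hidden in the exponent and lossiness comes from the LTF structure rather than from hitting the identity; the point here is only the O(N) key size and the degree-N evaluation in the exponent mentioned above.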
Our contribution: LTFs with many lossy tags. In this work, we define and construct all-but-many LTFs (ABM-LTFs). An ABM-LTF has superpolynomially many lossy tags, which however require a special trapdoor to be found. This is the most crucial difference to ABN-LTFs: with ABN-LTFs, the set of lossy tags is specified initially, at construction time. Our ABM-LTFs have a trapdoor that allows sampling on the fly from a superpolynomially large pool of lossy tags.

^1 What we call "tag" is usually called "branch". We use "tag" in view of our later construction, in which tags have a specific structure, and cannot be viewed as branches of a (binary) tree.

(Of course, without that trapdoor, and even given arbitrarily many lossy tags, another lossy tag is still hard to find.) This in particular allows for ABM-LTF instantiations with compact keys and images, whose size is independent of the number of lossy tags. Our constructions can be viewed as disguised variants of the Waters, resp. Boneh-Boyen (BB) signature schemes [43, 9]. Specifically, lossy tags correspond to valid signatures. However, to make lossy and injective tags appear indistinguishable, we have to blind signatures by encrypting them, or by multiplying them with a random subgroup element. We give more details on our constructions below.

A DCR-based construction. Our first construction operates in Z^*_{N^{s+1}}. (Larger s yield lossier functions. For our applications, s = 2 will be sufficient.) A tag consists of two Paillier/Damgård-Jurik encryptions E(x) ∈ Z^*_{N^{s+1}}. At the core of our construction is a variant of Waters signatures over Z^*_{N^{s+1}} whose security can be reduced to the problem of computing E(ab) from E(a) and E(b), i.e., of multiplying Paillier/DJ-encrypted messages. This multiplication problem may be interesting in its own right: if it is easy, then Paillier/DJ is fully homomorphic; if it is infeasible, then we can use it as a poor man's CDH assumption in the plaintext domain of Paillier/DJ. We stress that our construction does not yield a signature scheme; verification of Waters signatures requires a pairing operation, to which we have no equivalent in Z^*_{N^{s+1}}. However, we will be able to construct a matrix M ∈ (Z^*_{N^{s+1}})^{3×3} out of a tag, such that the decrypted matrix M~ = D(M) ∈ Z^{3×3}_{N^s} has low rank iff the signature embedded in the tag is valid. Essentially, this observation uses products of plaintexts occurring in the determinant det(M~) to implicitly implement a pairing over Z^*_{N^{s+1}} and verify the signature.
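The homomorphic gap behind this multiplication problem can be seen in the Paillier case (the s = 1 instance of the encryption just mentioned): adding encrypted plaintexts, and multiplying an encrypted plaintext by a known scalar, are both easy, while producing E(ab) from E(a) and E(b) alone is exactly what the assumption rules out. A minimal sketch with toy (insecure, hypothetical) primes:

```python
# Minimal Paillier (the s = 1 case of Damgard-Jurik), toy primes only.
import math
import random

P, Q = 1009, 1013
N = P * Q
N2 = N * N
phi = (P - 1) * (Q - 1)

def E(x):
    """Encrypt x in Z_N as r^N * (1+N)^x mod N^2 for random r coprime to N."""
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return pow(r, N, N2) * pow(1 + N, x, N2) % N2

def D(c):
    """Decrypt: c^phi kills the r^N part, leaving (1+N)^{x*phi} = 1 + x*phi*N mod N^2."""
    u = pow(c, phi, N2)
    return (u - 1) // N * pow(phi, -1, N) % N

a, b = 123, 456
A, B = E(a), E(b)
assert D(A) == a and D(B) == b
assert D(A * B % N2) == (a + b) % N       # ciphertext product: plaintext sum, free
assert D(pow(A, b, N2)) == a * b % N      # KNOWN scalar b: plaintext product, free
# The hard problem: from A and B alone (b unknown), output some encryption of a*b.
```

If that last step were feasible, Paillier would be fully homomorphic; the construction in this paper instead uses its presumed hardness as a CDH-like assumption.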
Similar techniques to encode arithmetic formulas in the determinant of a matrix have been used, e.g., by [28, 2] in the context of secure computation. Our function evaluation is now a suitable multiplication of the encrypted matrix M with a plaintext vector X ∈ Z^3_{N^s}, similar to the one from Peikert and Waters [37]. Concretely, on input X, our function outputs an encryption of the ordinary matrix-vector product M~ · X. If M~ is nonsingular, then we can invert this function using the decryption key. If M~ has low rank, however, the function becomes lossy. This construction has compact tags and function images; both consist only of a (small) constant number of group elements, and only the public key has O(k) group elements, where k is the security parameter. Thus, our construction does not scale in the number N of lossy tags.

A pairing-based construction. Our second construction uses a product group G_1 = ⟨g_1⟩ × ⟨h_1⟩ that allows for a pairing. We will implement BB signatures in ⟨h_1⟩, while we blind with elements from ⟨g_1⟩. Consequently, our security proof requires both the Strong Diffie-Hellman assumption (SDH, [9]) in ⟨h_1⟩ and a subgroup indistinguishability assumption. Tags are essentially matrices (W_{i,j})_{i,j} with W_{i,j} ∈ G_1 = ⟨g_1⟩ × ⟨h_1⟩. Upon evaluation, this matrix is first suitably paired entry-wise to obtain a matrix (M_{i,j})_{i,j} over G_T = ⟨g_T⟩ × ⟨h_T⟩, the pairing's target group. This operation will ensure that (a) M_{i,j} (for i ≠ j) always lies in ⟨g_T⟩, and (b) M_{i,i} lies in ⟨g_T⟩ iff the h_1-factor of W_{i,i} constitutes a valid BB signature for the whole tag. With these ideas in mind, we revisit the original matrix-based LTF construction from [37] to obtain a function with trapdoors. Unfortunately, using the matrix-based construction from [37] results in rather large tags (of size O(n^2) group elements for a function with domain {0,1}^n).
On the bright side, a number of random self-reducibility properties allow for a security proof whose reduction quality does not degrade with the number N of lossy tags (i.e., challenge ciphertexts) around. Specifically, neither construction nor reduction scale in N.

Applications. Given the work of [26], a straightforward application of our results is the construction of an SO-CCA secure PKE scheme. (However, a slight tweak is required compared to the construction from [26]; see Section 6.3 for details.) Unlike the PKE schemes from [26], both of our ABM-LTFs give an SO-CCA construction that is independent of N, the number of challenge ciphertexts. Moreover, unlike the SO-CCA secure PKE scheme from [21], our DCR-based SO-CCA scheme has compact ciphertexts of O(1) group elements. Finally, unlike both [26] and [21], our

pairing-based scheme has a reduction that does not depend on N and the number of decryption queries (see Section 5.3 for details). As a side effect, our pairing-based scheme can be interpreted as a new kind of CCA-secure PKE scheme with a security proof that is tight in the number of challenges and decryption queries. This solves an open problem of Bellare et al. [4], although the scheme should be seen as a (relatively inefficient) proof of concept rather than a practical system. Also, to be fair, we should mention that the SDH assumption we use in our pairing-based ABM-LTF already has a flavor of accommodating multiple challenges: an SDH instance contains polynomially many group elements.

Open problems. An interesting open problem is to find different, and in particular efficient and tightly secure, ABM-LTFs under reasonable assumptions. This would imply efficient and tightly (SO-)CCA-secure encryption schemes. (With our constructions, one basically has to choose between efficiency and a tight reduction.) Also, our pairing-based PKE scheme achieves only indistinguishability-based, but not (in any obvious way) simulation-based SO security [6]. (To achieve simulation-based SO security, a simulator must essentially be able to efficiently explain lossy ciphertexts as encryptions of any given message; see [6, 21].) However, as we demonstrate in the case of our DCR-based scheme, in some cases ABM-LTFs can be equipped with an additional explainability property that leads to simulation-based SO security (see Section 7 for details). It would be interesting to find other applications of ABM-LTFs. One reviewer suggested that ABM-LTFs can be used instead of ABO-LTFs in the commitment scheme from Nishimaki et al. [34], with the goal of attaining reusable commitments.

Organization. After fixing some notation in Section 2, we proceed to our definition of ABM-LTFs in Section 3. We define and analyze our DCR-based ABM-LTF in Section 4. We present our pairing-based ABM-LTF in Section 5.
We then show how ABM-LTFs imply CCA-secure (indistinguishability-based) selective-opening security in Section 6. Finally, we explain how to achieve simulation-based selective-opening security in Section 7.

2 Preliminaries

Notation. For n ∈ N, let [n] := {1, ..., n}. Throughout the paper, k ∈ N denotes the security parameter. For a finite set S, we denote by s ← S the process of sampling s uniformly from S. For a probabilistic algorithm A, we denote by y ← A(x; R) the process of running A on input x and with randomness R, and assigning y the result. We let R_A denote the randomness space of A; we require R_A to be of the form R_A = {0,1}^r. We write y ← A(x) for y ← A(x; R) with uniformly chosen R ∈ R_A, and we write y_1, ..., y_m ← A(x) for y_1 ← A(x), ..., y_m ← A(x) with fresh randomness in each execution. If A's running time is polynomial in k, then A is called probabilistic polynomial-time (PPT). The statistical distance of two random variables X and Y over some countable domain S is defined as SD(X; Y) := (1/2) Σ_{s∈S} |Pr[X = s] − Pr[Y = s]|.

Chameleon hashing. A chameleon hash function (CHF, see [29]) is collision-resistant when only the public key of the function is known. However, this collision-resistance can be broken (in a very strong sense) with a suitable trapdoor. We will assume an input domain of {0,1}^*. We do not lose (much) on generality here, since one can always first apply a collision-resistant hash function on the input to get a fixed-size input.

Definition 2.1 (Chameleon hash function). A chameleon hash function CH consists of the following PPT algorithms:

Key generation. CH.Gen(1^k) outputs a key pk_CH along with a trapdoor td_CH.

Evaluation. CH.Eval(pk_CH, X; R_CH) maps an input X ∈ {0,1}^* to an image Y. By R_CH, we denote the randomness used in the process. We require that if R_CH is uniformly distributed, then so is Y (over its respective domain).

Equivocation.
CH.Equiv(td_CH, X, R_CH, X') outputs randomness R'_CH with

CH.Eval(pk_CH, X; R_CH) = CH.Eval(pk_CH, X'; R'_CH)    (1)

for the corresponding key pk_CH. We require that for any X, X', if R_CH is uniformly distributed, then so is R'_CH.

We require that CH is collision-resistant in the sense that given pk_CH, it is infeasible to find X, R_CH, X', R'_CH with X ≠ X' that meet (1). Formally, for every PPT B,

Adv^{cr}_{CH,B}(k) := Pr[X ≠ X' and (1) holds : (X, R_CH, X', R'_CH) ← B(1^k, pk_CH)]

is negligible, where (pk_CH, td_CH) ← CH.Gen(1^k).

Lossy trapdoor functions. Lossy trapdoor functions (see [37]) are a variant of trapdoor one-way functions. They may be operated in an injective mode (which allows one to invert the function) and a lossy mode in which the function is non-injective. For simplicity, we restrict to an input domain {0,1}^n for polynomially bounded n = n(k) > 0.

Definition 2.2 (Lossy trapdoor function). A lossy trapdoor function (LTF) LTF with domain Dom consists of the following algorithms:

Key generation. LTF.IGen(1^k) yields an evaluation key ek and an inversion key ik.

Evaluation. LTF.Eval(ek, X) (with X ∈ Dom) yields an image Y. Write Y = f_{ek}(X).

Inversion. LTF.Invert(ik, Y) outputs a preimage X. Write X = f^{-1}_{ik}(Y).

Lossy key generation. LTF.LGen(1^k) outputs an evaluation key ek'.

We require the following:

Correctness. For all (ek, ik) ← LTF.IGen(1^k) and X ∈ Dom, it is f^{-1}_{ik}(f_{ek}(X)) = X.

Indistinguishability. The first output of LTF.IGen(1^k) is indistinguishable from the output of LTF.LGen(1^k), i.e.,

Adv^{ind}_{LTF,A}(k) := Pr[A(1^k, ek) = 1] − Pr[A(1^k, ek') = 1]

is negligible for all PPT A, for (ek, ik) ← LTF.IGen(1^k) and ek' ← LTF.LGen(1^k).

Lossiness. We say that LTF is l-lossy if for all possible ek' ← LTF.LGen(1^k), the image set f_{ek'}(Dom) is of size at most |Dom|/2^l.

3 Definition of ABM-LTFs

We are now ready to define ABM-LTFs. As already discussed in Section 1, ABM-LTFs generalize ABO-LTFs and ABN-LTFs in the sense that there is a superpolynomially large pool of lossy tags from which we can sample.
We require that even given oracle access to such a sampler of lossy tags, it is not feasible to produce a (fresh) non-injective tag. Furthermore, it should be hard to distinguish lossy from injective tags.

Definition 3.1 (ABM-LTF). An all-but-many lossy trapdoor function (ABM-LTF) ABM with domain Dom consists of the following PPT algorithms:

Key generation. ABM.Gen(1^k) yields an evaluation key ek, an inversion key ik, and a tag key tk. The evaluation key ek defines a set T = T_p × {0,1}^* that contains the disjoint sets of lossy tags T_loss ⊆ T and injective tags T_inj ⊆ T. Tags are of the form t = (t_p, t_a), where t_p ∈ T_p is the core part of the tag, and t_a ∈ {0,1}^* is the auxiliary part of the tag.

Evaluation. ABM.Eval(ek, t, X) (for t ∈ T, X ∈ Dom) produces Y =: f_{ek,t}(X).

Inversion. ABM.Invert(ik, t, Y) (with t ∈ T_inj) outputs a preimage X =: f^{-1}_{ik,t}(Y).

Lossy tag generation. ABM.LTag(tk, t_a) takes as input an auxiliary part t_a ∈ {0,1}^* and outputs a core tag t_p such that t = (t_p, t_a) is lossy.

We require the following:

Correctness. For all possible (ek, ik, tk) ← ABM.Gen(1^k), t ∈ T_inj, and X ∈ Dom, it is always f^{-1}_{ik,t}(f_{ek,t}(X)) = X.

Lossiness. We say that ABM is l-lossy if for all possible (ek, ik, tk) ← ABM.Gen(1^k), and all lossy tags t ∈ T_loss, the image set f_{ek,t}(Dom) is of size at most |Dom|/2^l.

Indistinguishability. Even multiple lossy tags are indistinguishable from random tags:

Adv^{ind}_{ABM,A}(k) := Pr[A^{ABM.LTag(tk,·)}(1^k, ek) = 1] − Pr[A^{O_{T_p}(·)}(1^k, ek) = 1]

is negligible for all PPT A, where (ek, ik, tk) ← ABM.Gen(1^k), and O_{T_p}(·) ignores its input and returns a uniform and independent core tag t_p ← T_p.

Evasiveness. Non-injective tags are hard to find, even given multiple lossy tags:

Adv^{eva}_{ABM,A}(k) := Pr[A^{ABM.LTag(tk,·)}(1^k, ek) ∈ T \ T_inj]

is negligible, with (ek, ik, tk) ← ABM.Gen(1^k), for any PPT algorithm A that never outputs tags obtained through oracle queries (i.e., A never outputs tags t = (t_p, t_a), where t_p has been obtained by an oracle query t_a).

On our tagging mechanism. Our tagging mechanism is different from the mechanism from ABO-, resp. ABN-LTFs. In particular, our tag selection involves an auxiliary and a core tag part; lossy tags can be produced for arbitrary auxiliary tags. (Conceptually, this resembles the two-stage tag selection process from Abe et al. [1] in the context of hybrid encryption.) On the other hand, ABO- and ABN-LTFs simply have fully arbitrary (user-selected) bitstrings as tags. The reason for our more complicated tagging mechanism is that during a security proof, tags are usually context-dependent and not simply random. For instance, a common trick in the public-key encryption context is the following: upon encryption, choose a one-time signature keypair (v, s), set the tag to the verification key v, and then finally sign the whole ciphertext using the signing key s. This trick has been used numerous times (e.g., [19, 14, 37, 38]) and ensures that a tag cannot be re-used by an adversary in a decryption query. (To re-use that tag, an adversary would essentially have to forge a signature under v.) However, in our constructions, in particular lossy tags cannot be freely chosen. (This is different from ABO- and ABN-LTFs and stems from the fact that there are superpolynomially many lossy tags.)
But as outlined, during a security proof, we would like to embed auxiliary information in a tag, while being able to force the tag to be lossy. We thus divide the tag into an auxiliary part (which can be used to embed, e.g., a verification key for a one-time signature), and a core part (which will be used to enforce lossiness).

4 A DCR-based ABM-LTF

We now construct an ABM-LTF ABMD in rings Z_{N^{s+1}} for composite N. Domain and codomain of our function will be Z^3_{N^s} and (Z^*_{N^{s+1}})^3, respectively. One should have in mind a value of s ≥ 2 here, since we will prove that ABMD is ((s−1) log_2(N))-lossy.

4.1 Setting and assumptions

In the following, let N = PQ for primes P and Q, and fix a positive integer s. Write φ(N) := (P−1)(Q−1). We will silently assume that P and Q are chosen from a distribution that depends on the security parameter. Unless indicated otherwise, all computations will take place in Z_{N^{s+1}}, i.e., modulo N^{s+1}. It will be useful to establish the notation h := 1 + N ∈ Z^*_{N^{s+1}}. We also define algorithms E and D by

E(x) = r^{N^s} · h^x  for x ∈ Z_{N^s} and a uniformly and independently chosen r ∈ Z^*_N,

D(c) = ((c^{φ(N)})^{1/φ(N) mod N^s} − 1)/N ∈ Z_{N^s}  for c ∈ Z^*_{N^{s+1}}.

That is, E and D are Paillier/Damgård-Jurik encryption and decryption operations as in [35, 16], so that D(r^{N^s} h^x) = x and D(E(x)) = x. Moreover, D can be efficiently computed using the factorization of N. We will also apply D to vectors or matrices over Z_{N^{s+1}}, by which we mean component-wise application. We make the following assumptions:

Assumption 4.1. The s-decisional Composite Residuosity (short: s-DCR) assumption holds iff

Adv^{s-dcr}_D(k) := Pr[D(1^k, N, r^{N^s}) = 1] − Pr[D(1^k, N, r^{N^s} h) = 1]

is negligible for all PPT D, where r ∈ Z^*_{N^{s+1}} is chosen uniformly.

Assumption 4.1 is rather common and equivalent to the semantic security of the Paillier [35] and Damgård-Jurik (DJ) [16] encryption schemes. In fact, it turns out that all s-DCR assumptions are (tightly) equivalent to 1-DCR [16]. Nonetheless, we make s explicit here to allow for a simpler exposition. Also note that Assumption 4.1 supports a form of random self-reducibility: namely, given one challenge element c ∈ Z^*_{N^{s+1}}, it is possible to generate many fresh challenges c_i with the same decryption D(c_i) = D(c) by re-randomizing the r^{N^s} part.

Assumption 4.2. The No-Multiplication (short: No-Mult) assumption holds iff

Adv^{mult}_A(k) := Pr[A(1^k, N, c_1, c_2) = c ∈ Z^*_{N^2} for D(c) = D(c_1) · D(c_2) mod N]

is negligible for all PPT A, where c_1, c_2 ∈ Z^*_{N^2} are chosen uniformly.

The No-Mult assumption stipulates that it is infeasible to multiply Paillier-encrypted messages. If No-Mult (along with s-DCR and a somewhat annoying technical assumption explained below) holds, then our upcoming construction will be secure. But if the No-Mult problem is easy, then Paillier encryption is fully homomorphic.^2

The following technical lemma will be useful later on, because it shows how to lift Z_{N^2}-encryptions to Z_{N^{s+1}}-encryptions.

Lemma 4.3 (Lifting, implicit in [16]). Let s ≥ 1, and let τ: Z^*_{N^2} → Z^*_{N^{s+1}} be the canonical embedding with τ(c mod N^2) = c mod N^{s+1} for c ∈ Z^*_{N^2} interpreted as an integer from {0, ..., N^2 − 1}. Then, for any c ∈ Z^*_{N^2}, and X := D(τ(c)) ∈ Z_{N^s} and x := D(c) ∈ Z_N, we have X = x mod N.

Proof. Consider the canonical homomorphism π: Z^*_{N^{s+1}} → Z^*_{N^2}. Write Z^*_{N^{s+1}} = ⟨g_s⟩ × ⟨h_s⟩ for some g_s ∈ Z^*_{N^{s+1}} of order φ(N), and h_s := 1 + N mod N^{s+1}. We have π(g_s) = g_1 and π(h_s^x) = h_1^{x mod N}.
Since π ∘ π̂ = id_{Z^*_{N^2}} for the canonical lifting π̂, this gives π̂(g_1^u h_1^x) = g_s^u h_s^{x+x'N} for suitable u, x'. □

Unfortunately, we need another assumption to exclude certain corner cases:

Assumption 4.4. We require that the following function is negligible for all PPT A:

Adv^{noninv}_A(k) := Pr[A(1^k, N) = c ∈ Z^*_{N^2} such that 1 < gcd(D(c), N) < N].

Intuitively, Assumption 4.4 stipulates that it is infeasible to generate Paillier encryptions of "funny" messages. Note that actually knowing any such message allows one to factor N.

4.2 Our construction

Overall idea. The first idea in our construction will be to use the No-Mult assumption as a poor man's CDH assumption in order to implement Waters signatures [43] over Z_{N^{s+1}}. Recall that the verification of Waters signatures requires a pairing operation, which corresponds to the multiplication of two Paillier/DJ-encrypted messages in our setting. We do not have such a multiplication operation available; however, for our purposes, signatures will never actually have to be verified, so this will not pose a problem. We note that the original Waters signatures from [43] are re-randomizable and thus not strongly unforgeable. To achieve the evasiveness property from

^2 Of course, there is a third, less enjoyable possibility. It is always conceivable that an algorithm breaks No-Mult with low but non-negligible probability. Such an algorithm may not be useful for constructive purposes. Besides, if either s-DCR or the annoying technical assumption does not hold, then our construction may not be secure.

Definition 3.1, we will thus combine Waters signatures with a chameleon hash function, much like Boneh et al. [11 did to make Waters signatures strongly unforgeable. Secondly, we will construct 3 3-matrices M = (M i,j ) i,j over Z N s+1, in which we carefully embed our variant of Waters signatures. Valid signatures will correspond to singular plaintext matrices M := (D(M i,j )) i,j ; invalid signatures correspond to full-rank matrices M. We will define our ABM-LTF f as a suitable matrix-vector multiplication of M with an input vector X Z 3 N s. For a suitable choice of s, the resulting f will be lossy if det( M) = 0. Key generation. ABM.Gen(1 k ) first chooses N = P Q, and a key pk CH along with trapdoor td CH for a chameleon hash function CH. Finally, ABM.Gen chooses a, b Z N s, as well as k + 1 values h i Z N s for 0 i k, and sets A E(a) B E(b) H i E(h i ) (for 0 i k) ek = (N, A, B, (H i ) k i=0, pk CH ) ik = (ek, P, Q) tk = (ek, a, b, (h i ) k i=0, td CH ). Tags. Recall that a tag t = (t p, t a ) consists of a core part t p and an auxiliary part t a {0, 1}. Core parts are of the form t p = (R, Z, R CH ) with R, Z Z N and randomness R s+1 CH for CH. (Thus, random core parts are simply uniform values R, Z Z N and uniform CH-randomness.) With s+1 t, we associate the chameleon hash value T := CH.Eval(pk CH, (R, Z, t a )), and a group hash value H := H 0 i T H i, where i T means that the i-th bit of T is 1. Let h := D(H) = h 0 + i T h i. Also, we associate with t the matrices Z A R z a r M = B h 1 Z 3 3 M N = b 1 0 s+1 Z 3 3 N, (2) s H 1 h h 0 1 where M = D(M) is the component-wise decryption of M, and r = D(R) and z = D(Z). It will be useful to note that det( M) = z (ab + rh). We will call t lossy if det( M) = 0, i.e., if z = ab + rh; we say that t is injective if M is invertible. Lossy tag generation. 
ABM.LTag(tk, t_a), given tk = ((N, A, B, (H_i)_i), a, b, (h_i)^k_{i=0}, td_CH) and an auxiliary tag part t_a ∈ {0,1}*, picks an image T of CH that can later be explained (using td_CH) as the image of an arbitrary preimage (R, Z, t_a). Let h := h_0 + Σ_{i∈T} h_i and R ← E(r) for uniform r ∈ Z_{N^s}, and set Z ← E(z) for z = ab + rh. Finally, let R_CH be CH-randomness for which T = CH.Eval(pk_CH, (R, Z, t_a); R_CH). Obviously, this yields uniformly distributed lossy tag parts (R, Z, R_CH).

Evaluation. ABM.Eval(ek, t, X), for ek = (N, A, B, (H_i)_i, pk_CH), t = ((R, Z, R_CH), t_a), and a preimage X = (X_i)³_{i=1} ∈ Z³_{N^s}, first computes the matrix M = (M_{i,j})_{i,j} as in (2). Then, ABM.Eval computes and outputs

Y := M ⊙ X := ( ∏³_{j=1} M_{i,j}^{X_j} )³_{i=1}.

Note that the decryption D(Y) is simply the ordinary matrix-vector product M̃ · X = D(M) · X.

Inversion and correctness. ABM.Invert(ik, t, Y), given an inversion key ik, a tag t, and an image Y = (Y_i)³_{i=1}, determines X = (X_i)³_{i=1} as follows. First, ABM.Invert computes the matrices M and M̃ = D(M) as in (2), using P, Q. For correctness, we can assume that the tag t is injective, so M̃ is invertible; let M̃^{−1} be its inverse. Since D(Y) = M̃ · X, ABM.Invert can retrieve X as M̃^{−1} · D(Y) = M̃^{−1} · M̃ · X.

4.3 Security analysis

Theorem 4.5 (Security of ABMD). Assume that Assumption 4.1, Assumption 4.2, and Assumption 4.4 hold, that CH is a chameleon hash function, and that s ≥ 2. Then the algorithms described in Section 4.2 form an ABM-LTF ABMD as per Definition 3.1.
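To make the matrix algebra above concrete, here is a toy, plaintext-level Python sketch with tiny hand-picked primes and s = 1, so all ciphertexts live in Z_{N^2}. (Recall that the actual construction requires s ≥ 2; all parameter values below are ours and far too small to be secure.) It checks det(M̃) = z − (ab + rh) on a lossy tag, and runs the evaluate-then-invert round trip on an injective tag.

```python
import random
from math import gcd

# Toy Paillier over Z_{N^2} (s = 1; the mechanics of E, D and the matrix
# algebra are the same mod N^{s+1} for larger s).
P, Q = 11, 13
N, N2 = P * Q, (P * Q) ** 2
LAM = 60                         # lcm(P-1, Q-1)
rng = random.Random(0)

def E(m):
    r = rng.randrange(1, N)
    while gcd(r, N) != 1:
        r = rng.randrange(1, N)
    return pow(1 + N, m, N2) * pow(r, N, N2) % N2

def D(c):                        # for g = 1+N: c^LAM = 1 + D(c)*LAM*N mod N^2
    return (pow(c, LAM, N2) - 1) // N * pow(LAM, -1, N) % N

def det3(A):                     # determinant of a 3x3 matrix mod N
    return (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
          - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
          + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0])) % N

a, b, r, h = 7, 5, 3, 10
z_lossy = (a*b + r*h) % N        # det(M~) = z - (ab + rh) = 0  -> lossy
z_inj   = (z_lossy + 1) % N      # det(M~) = 1, invertible     -> injective

def make_M(z):                   # encrypted matrix M as in (2)
    return [[E(z), E(a), E(r)],
            [E(b), E(1), E(0)],
            [E(h), E(0), E(1)]]

def evaluate(M, X):              # Y_i = prod_j M_{i,j}^{X_j} mod N^2
    return [__import__("functools").reduce(
        lambda y, cx: y * pow(cx[0], cx[1], N2) % N2, zip(row, X), 1)
        for row in M]

assert det3([[D(c) for c in row] for row in make_M(z_lossy)]) == 0

X = [2, 9, 4]
M = make_M(z_inj)
Mt = [[D(c) for c in row] for row in M]
DY = [D(y) for y in evaluate(M, X)]          # D(Y) = M~ * X mod N
d_inv = pow(det3(Mt), -1, N)
# 3x3 inverse via the transposed cyclic cofactor formula
cof = [[(Mt[(i+1)%3][(j+1)%3]*Mt[(i+2)%3][(j+2)%3]
       - Mt[(i+1)%3][(j+2)%3]*Mt[(i+2)%3][(j+1)%3]) % N
        for j in range(3)] for i in range(3)]
X_rec = [sum(cof[j][i] * d_inv * DY[j] for j in range(3)) % N for i in range(3)]
assert X_rec == X                # ABM.Invert recovers X on an injective tag
```

As in the scheme, the homomorphic exponentiation inside `evaluate` means the decryption of Y is exactly the plaintext matrix-vector product, so inversion is plain linear algebra modulo N once the factorization (and hence D) is available.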

We have yet to prove lossiness, indistinguishability, and evasiveness.

Lossiness. Our proof of lossiness loosely follows Peikert and Waters [37]:

Lemma 4.6 (Lossiness of ABMD). ABMD is ((s−1) log₂(N))-lossy.

Proof. Assume an evaluation key ek = (N, A, B, (H_i)_i, pk_CH), and a lossy tag t, so that the matrix M̃ from (2) is of rank 2. Hence, any fixed decrypted image D(f_{ek,t}(X)) = D(M ⊙ X) = M̃ · X leaves at least one inner product ⟨C, X⟩ ∈ Z_{N^s} (for a C ∈ Z³_{N^s} that only depends on M̃) completely undetermined. The additional information contained in the encryption randomness of an image Y = f_{ek,t}(X) fixes the components of X, and thus ⟨C, X⟩, only modulo ϕ(N) < N. Thus, for any given image Y, there are at least N^s/ϕ(N) > N^{s−1} possible values for ⟨C, X⟩, and thus as many possible preimages. The claim follows.

Indistinguishability. Observe that lossy tags can be produced without knowledge of the factorization of N. Hence, even while producing lossy tags, we can use the indistinguishability of Paillier/DJ encryptions E(x). This allows us to substitute the encryptions R = E(r), Z = E(z) in lossy tags by independently uniform encryptions. This step also makes the CH-randomness independently uniform, and we end up with random tags. We omit the straightforward formal proof and state:

Lemma 4.7 (Indistinguishability of ABMD). Given the assumptions from Theorem 4.5, ABMD is indistinguishable. Concretely, for any PPT adversary A, there exists an s-DCR distinguisher D of roughly the same complexity as A, such that

Adv^ind_{ABMD,A}(k) = Adv^s-dcr_D(k).   (3)

The tightness of the reduction in (3) stems from the random self-reducibility of s-DCR.

Evasiveness. It remains to prove evasiveness.

Lemma 4.8 (Evasiveness of ABMD). Given the assumptions from Theorem 4.5, ABMD is evasive.
Concretely, for any PPT adversary A that makes at most Q = Q(k) oracle queries, there exist adversaries B, D, E, and F of roughly the same complexity as A, with

Adv^eva_{ABMD,A}(k) ≤ O(kQ(k)) · ( Adv^mult_F(k) + Adv^noninv_E(k) + Adv^s-dcr_D(k) + Adv^cr_{CH,B}(k) ).

At its core, the proof of Lemma 4.8 adapts the security proof of Waters signatures to Z_{N^{s+1}}. That is, we will create a setup in which we can prepare Q(k) lossy tags (which correspond to valid signatures), and the tag the adversary finally outputs will be interpreted as a forged signature. Crucial to this argument will be a suitable setup of the group hash function (H_i)^k_{i=0}. Depending on the (group) hash value, we will either be able to create a lossy tag with that hash, or use any lossy tag with that hash to solve an underlying No-Mult challenge. With a suitable setup, we can hope that with probability O(1/(kQ(k))), Q(k) lossy tags can be created, and the adversary's output can be used to solve a No-Mult challenge.

The proof of Lemma 4.8 is somewhat complicated by the fact that in order to use the collision-resistance of the employed CHF, we have to first work our way towards a setting in which the CHF trapdoor is not used. This leads to a somewhat tedious deferred analysis (see [24]) and the s-DCR term in the lemma.

Proof. We turn to the full proof of Lemma 4.8. Fix an adversary A. We proceed in games. In Game 1, A(ek) interacts with an ABM.LTag(tk, ·) oracle that produces core tag parts for lossy tags t = ((R, Z, R_CH), t_a) that satisfy z = ab + rh for r = D(R), z = D(Z), and h = D(H) with H = H_0 · ∏_{i∈T} H_i and T = CH.Eval(pk_CH, (R, Z, t_a); R_CH). Without loss of generality, we assume that

A makes exactly Q oracle queries, where Q = Q(k) is a suitable polynomial. Let bad_i denote the event that the output of A in Game i is a lossy tag, i.e., lies in T_loss. By definition,

Pr[bad_1] = Pr[ A^{ABM.LTag(tk,·)}(ek) ∈ T_loss ],   (4)

where the keys ek and tk are generated via (ek, ik, tk) ← ABM.Gen(1^k).

Getting rid of (chameleon) hash collisions. To describe Game 2, let bad_hash be the event that A finally outputs a tag t* = ((R*, Z*, R*_CH), t*_a) with a CHF hash T* = CH.Eval((R*, Z*, t*_a); R*_CH) that has already appeared as the CHF hash of an ABM.LTag output (with the corresponding auxiliary tag part input). Now Game 2 is the same as Game 1, except that we abort (and do not raise event bad_2) if bad_hash occurs. Obviously,

|Pr[bad_1] − Pr[bad_2]| ≤ Pr[bad_hash].   (5)

It would seem intuitive to try to use CH's collision resistance to bound Pr[bad_hash]. Unfortunately, we cannot rely on CH's collision resistance in Game 2 yet, since we use CH's trapdoor in the process of generating lossy tags. So instead, we use a technique called deferred analysis [24] to bound Pr[bad_hash]. The idea is to forget about the storyline of our evasiveness proof for the moment and develop Game 2 further, up to a point at which we can use CH's collision resistance to bound Pr[bad_hash]. This part of the proof largely follows the argument from Lemma 4.7. Concretely, we can substitute the lossy core tag parts output by ABM.LTag by uniformly random core tag parts. At this point, CH's trapdoor is no longer required to implement the oracle A interacts with. Hence we can apply CH's collision resistance to bound Pr[bad_hash] in this modified game. This also implies a bound on Pr[bad_hash] in Game 2: since the occurrence of bad_hash is obvious from the interaction between A and the experiment, Pr[bad_hash] must be preserved across these transformations. We omit the details, and state the result of this deferred analysis:

Pr[bad_hash] ≤ Adv^s-dcr_D(k) + Adv^cr_{CH,B}(k)   (6)

for suitable adversaries D and B.
This ends the deferred analysis step, and we are back on track in our evasiveness proof.

Preparing the setup for our reduction. In Game 3, we set up the group hash function given by (H_i)^k_{i=0} differently. Namely, for 0 ≤ i ≤ k, we choose independent γ_i ← Z_{N^s}, and set

H_i := A^{α_i} · E(γ_i),  so that  h_i := D(H_i) = α_i·a + γ_i mod N^s   (7)

for independent α_i ∈ Z yet to be determined. Note that this yields an identical distribution of the H_i no matter how concretely we choose the α_i. For convenience, we write α = α_0 + Σ_{i∈T} α_i and γ = γ_0 + Σ_{i∈T} γ_i for a given tag t with associated CH-image T. This in particular implies h := D(H) = αa + γ for the corresponding group hash H = H_0 · ∏_{i∈T} H_i. Our changes in Game 3 are purely conceptual, and so

Pr[bad_3] = Pr[bad_2].   (8)

To describe Game 4, let t^{(i)} denote the i-th lossy core tag part output by ABM.LTag (including the corresponding auxiliary part t_a^{(i)}), and let t* be the tag finally output by A. Similarly, we denote with T^{(i)}, α^{(i)}, etc., resp. T*, α*, etc., the intermediate values for the tags output by ABM.LTag and by A. Now let good_setup be the event that gcd(α^{(i)}, N) = 1 for all i, and that α* = 0. In Game 4, we abort (and do not raise event bad_4) if ¬good_setup occurs. (In other words, we only continue if each H^{(i)} has an invertible A-component, and if H* has no A-component.) Waters [43] implicitly shows that for a suitable distribution of the α_i, the probability Pr[good_setup] can be kept reasonably high:

Lemma 4.9 (Waters [43], Claim 2, adapted to our setting). In the situation of Game 4, there exist efficiently computable distributions of the α_i, such that for every possible view view that A could experience in Game 4, we have

Pr[good_setup | view] ≥ O(1/(kQ(k))).   (9)

This directly implies

Pr[bad_4] ≥ Pr[good_setup] · Pr[bad_3].   (10)

An annoying corner case. Let bad_tag be the event that A outputs a tag t* for which det(M̃*) = z* − (ab + r*h*) is neither invertible nor 0 modulo N. This in particular means that t* is neither injective nor lossy. A straightforward reduction to Assumption 4.4 shows that

Pr[bad_tag] ≤ Adv^noninv_E(k)   (11)

for an adversary E that simulates Game 4 and outputs Z*/(g^{ab}·(R*)^{h*}) mod N^2.

The final reduction. We now claim that

Pr[bad_4] ≤ Adv^mult_F(k) + Pr[bad_tag]   (12)

for the following adversary F on the No-Mult assumption. Our argument follows in the footsteps of the security proof of Waters' signature scheme [43]. Our No-Mult adversary F obtains as input c_1, c_2 ∈ Z_{N^2}, and is supposed to output c ∈ Z_{N^2} with D(c_1)·D(c_2) = D(c) ∈ Z_N. In order to do so, F simulates Game 4. F incorporates its own challenge as A := τ(c_1)·E(a'N) and B := τ(c_2)·E(b'N) for the embedding τ from Lemma 4.3 and uniform a', b' ∈ Z_{N^{s−1}}. This gives uniformly distributed A, B ∈ Z_{N^{s+1}}. Furthermore, by Lemma 4.3, we have a = D(c_1) mod N and b = D(c_2) mod N for a := D(A) and b := D(B). Note that F can still compute all H_i and thus ek efficiently using (7).

We now describe how F constructs lossy tags, as required to implement the ABM.LTag oracle of Game 4. Since the CH trapdoor td_CH is under F's control, we can assume a given CH-value T to which we can later map our tag ((R, Z), t_a). By our changes from Game 4, we can also assume that the corresponding α = α_0 + Σ_{i∈T} α_i is invertible modulo N^s and known. We pick δ ← Z_{N^s}, and set

R := B^{−1/α mod N^s} · E(δ)    Z := A^{αδ} · B^{−γ/α} · E(γδ).
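The choice of R and Z is engineered so that the resulting tag is lossy. A quick plaintext-level sanity check (toy modulus and exponent values are ours) verifies that with h = αa + γ and r = −b/α + δ, the plaintext z = αδa − (γ/α)b + γδ of Z indeed satisfies the lossiness condition z = ab + rh:

```python
from math import gcd

# Plaintext-level check of F's lossy-tag construction; all values ours.
Ns = 143 ** 3                        # toy stand-in for N^s
a, b, alpha, gamma, delta = 17, 29, 31, 40, 5
assert gcd(alpha, Ns) == 1           # alpha invertible mod N^s, as in Game 4
inv_alpha = pow(alpha, -1, Ns)

h = (alpha * a + gamma) % Ns         # h = alpha*a + gamma (group hash)
r = (-b * inv_alpha + delta) % Ns    # r = D(R) = -b/alpha + delta
z = (alpha * delta * a - gamma * inv_alpha * b + gamma * delta) % Ns  # z = D(Z)

assert z == (a * b + r * h) % Ns     # the produced tag is lossy
```

The check mirrors the algebra: ab + (−b/α + δ)(αa + γ) telescopes to αδa − (γ/α)b + γδ, with all divisions by α meaning multiplication by its inverse modulo N^s.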
With the corresponding CH-randomness R_CH, this yields perfectly distributed lossy tags satisfying

D(A)·D(B) + D(R)·D(H) = ab + (−b/α + δ)(αa + γ) = αδa − (γ/α)b + γδ = D(Z).

Note that this generation of lossy tags is not possible when α = 0.

So far, we have argued that F can simulate Game 4 perfectly for A. It remains to show how F can extract a No-Mult solution out of a tag t* output by A. Unless bad_tag occurs or t* is injective, we have

z* = D(Z*) = D(A)·D(B) + D(R*)·D(H*) = ab + r*h* mod N.

Since we abort otherwise, we may assume that α* = 0, so that z* = ab + r*h* = ab + γ*r* mod N for known γ*. This implies ab = z* − γ*r* mod N, so F can derive and output a Z_{N^2}-encryption of ab as Z*/(R*)^{γ*} mod N^2. This shows (12). Taking (4)-(12) together shows Lemma 4.8.

5 A pairing-based ABM-LTF

In this section, we give a construction of a pairing-based all-but-many lossy trapdoor function ABMP with domain {0,1}^n. We will work in a specific group setting, which we describe next.

5.1 Setting and assumptions

Our construction will require groups G_1, G_2, G_T of composite order. We will assume that this group order is publicly known. However, for our subgroup indistinguishability assumption (and only for this assumption), it will be crucial to hide the factors of the group order. (In addition, for this assumption, it will be necessary to hide certain subgroup generators.) While the assumption of composite-order groups leads to comparatively inefficient instantiations, it is not all too uncommon (e.g., [10, 42, 22, 32]). Unique to our setting, however, is the combination of several assumptions of different type over the same platform group(s). Concretely, we will work in groups G_1, G_2, G_T of composite order such that

G_1 = ⟨g_1⟩ × ⟨h_1⟩    G_2 = ⟨g_2⟩ × ⟨h_2⟩    G_T = ⟨g_T⟩ × ⟨h_T⟩   (13)

for suitable not a priori known g_1, h_1, g_2, h_2, g_T, h_T, such that g_1, h_1, g_2, h_2 are of (distinct) unknown prime orders, and so that

g_T = ê(g_1, g_2)    h_T = ê(h_1, h_2)    ê(g_1, h_2) = ê(h_1, g_2) = 1   (14)

for a suitable publicly known (asymmetric) pairing ê : G_1 × G_2 → G_T. We furthermore assume that it is possible to uniformly sample from G_1.

Assumption 5.1. We make the Decisional Diffie-Hellman assumption in ⟨g_1⟩. Namely, we require that for all PPT D, the function

Adv^ddh_D(k) := Pr[ D(1^k, g_1, h_1, g_2, h_2, g_1^a, g_1^b, g_1^{ab}) = 1 ] − Pr[ D(1^k, g_1, h_1, g_2, h_2, g_1^a, g_1^b, g_1^r) = 1 ]

is negligible, where a, b, r ← [ord(g_1)]. We stress that because the pairing ê is asymmetric, it does not help in any obvious way in breaking Assumption 5.1.

The random self-reducibility of DDH. A useful property of DDH-like assumptions such as Assumption 5.1 is their random self-reducibility; this has already been noted and used several times (e.g., [40, 33, 39, 4]).
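The rerandomization behind this self-reducibility can be sketched concretely (a toy subgroup of Z_p^* of prime order q; parameters and variable names are ours): from one tuple (g^a, g^b, g^c), fresh pairs are derived as g^{b_i} = (g^b)^u · g^v and g^{c_i} = (g^c)^u · (g^a)^v for fresh uniform u, v.

```python
import random

# Toy DDH self-reduction in the order-q subgroup of Z_p^*, p = 2q + 1.
p, q = 227, 113                    # both prime
g = pow(3, 2, p)                   # a square, hence of order q
rng = random.Random(1)

a, b = rng.randrange(1, q), rng.randrange(1, q)
ga, gb = pow(g, a, p), pow(g, b, p)
gc = pow(g, a * b % q, p)          # "real" tuple: c = ab

for _ in range(20):
    u, v = rng.randrange(q), rng.randrange(q)
    gbi = pow(gb, u, p) * pow(g, v, p) % p     # g^{b_i}, b_i = b*u + v
    gci = pow(gc, u, p) * pow(ga, v, p) % p    # g^{c_i}, c_i = c*u + a*v
    # case (a): c = ab implies c_i = a*(b*u + v) = a*b_i for every i
    assert gci == pow(gbi, a, p)
# case (b): for c != ab, (u, v) -> (b*u + v, c*u + a*v) is bijective mod q
# (determinant ab - c != 0), so each (b_i, c_i) is independently uniform.
```

The same rerandomization, applied entry-wise, is what drives the hybrid argument in Lemma 5.2 below.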
These works observe that from one DDH instance (g_1^a, g_1^b, g_1^c) (with fixed g_1), we can construct many pairs (g_1^{b_i}, g_1^{c_i}), such that the g_1^{b_i} are independently uniform, and (a) if c = ab, then c_i = ab_i for all i, and (b) if c is independently uniform, then so are all c_i. From this, a straightforward hybrid argument shows:

Lemma 5.2. Let M = (M_{i,j}) ∈ G_1^{n×n} with M_{i,j} = g_1^{a_i b_j} for uniform a_i, b_j, and let R = (R_{i,j}) ∈ G_1^{n×n} with R_{i,j} = g_1^{r_{i,j}} for uniform r_{i,j}. Then, for every PPT D', there exists a PPT D with

| Pr[ D'(1^k, g_1, h_1, g_2, h_2, M) = 1 ] − Pr[ D'(1^k, g_1, h_1, g_2, h_2, R) = 1 ] | = n · Adv^ddh_D(k).   (15)

We have recently been told by Jorge Villar [41] that the factor of n in (15) can be improved to O(log n).

Assumption 5.3. We make the following subgroup indistinguishability assumption: we require that for all PPT E, the function

Adv^sub_E(k) := Pr[ E(1^k, g_1, h_1, g_2, g_2^s h_2^ŝ, g_1^r) = 1 ] − Pr[ E(1^k, g_1, h_1, g_2, g_2^s h_2^ŝ, g_1^r h_1) = 1 ]

is negligible, where r ← [ord(g_1)], s ← [ord(g_2)], and ŝ ← [ord(h_2)]. Note that in this, it is crucial to hide the generator h_2 and the group order of g_1, resp. g_2 and g_T. On the other hand, the element g_2^s h_2^ŝ given to E is simply a uniform element from G_2.

Relation to general subgroup decision problems. Subgroup indistinguishability assumptions such as Assumption 5.3 in pairing groups have turned out quite useful in recent years (e.g., [42, 30,

22, 31]). In view of that, Bellare et al. [7] propose general subgroup decision (GSD) problems as a generalized way to look at such subgroup indistinguishability assumptions. In a GSD problem for a group with composite order ∏_{i∈[n]} p_i, an adversary specifies two subsets S_0, S_1 ⊆ [n], then gets as challenge an element of order ∏_{i∈S_b} p_i, and finally must guess b. In the process, the adversary may ask for elements of order p_i, as long as i ∉ (S_0\S_1) ∪ (S_1\S_0). (Note that if i ∈ (S_0\S_1) ∪ (S_1\S_0), pairing an element of order p_i with the challenge would trivially solve the GSD problem.) While [7] formulate GSD problems for symmetric pairings (so that G_1 = G_2), Assumption 5.3 can be cast as a variant of a GSD problem with n = 2 in groups with asymmetric pairing. In this case, the adversary would ask for a challenge from G_1 with S_0 = {1} and S_1 = {1, 2} (corresponding to g_1^r and g_1^r h_1, respectively). The additional elements E gets in Assumption 5.3 do not allow (in an obvious way, using the pairing) to distinguish challenges.

Conversion to prime-order groups. Intuitively, Assumption 5.3 is the only reason why we work in groups of composite order. A natural question to ask is whether we can convert our construction to prime-order groups, which allow for much more efficient instantiations. Freeman [22] gives a generic mechanism for such conversions. However, the generic conversions of [22] require that the pairing is either used in a projecting or canceling way (but not both). Roughly speaking, projecting means that at some point we explicitly use a subgroup order to exponentiate away some blinding factor. Canceling means that we rely on, e.g., ê(g_1, h_2) = 1. Unfortunately, we use the pairing both in a projecting and a canceling way, so that we cannot rely on the conversions of [22]. (We note that Meiklejohn et al. [31] present a blind signature scheme with similar properties to demonstrate limitations of Freeman's conversions.)
Hence, unfortunately, we do not know how to express our construction in a prime-order setting.

Finally, we make an assumption corresponding to the security of the underlying (Boneh-Boyen) signature scheme:

Assumption 5.4. We make the Strong Diffie-Hellman assumption [9] in ⟨h_1⟩. Namely, we require that for all PPT F and all polynomials Q = Q(k), the function

Adv^sdh_F(k) := Pr[ F(1^k, g_1, h_1, g_2, h_2, ord(g_1), ord(h_1), h_2^v, h_1^v, h_1^{(v^2)}, ..., h_1^{(v^Q)}) = (e, h_1^{1/(v+e)}) with e ∈ N ]

is negligible, where v ← [ord(h_1)].

5.2 Construction

Let CH be a chameleon hash function. We now describe our ABM-LTF ABMP.

Key generation. ABM.Gen(1^k) constructs groups G_1, G_2, G_T with uniformly distributed generators g_1, h_1, g_2, h_2, g_T, h_T as in (13), along with a pairing ê : G_1 × G_2 → G_T. We assume that ABM.Gen can do so in a way that ord(g_T), ord(h_T) are known to ABM.Gen (but not public, so that it is still realistic to make Assumption 5.3). ABM.Gen then uniformly chooses exponents v̄ ← [ord(h_T)], and ũ, ṽ, z̃ ← [ord(g_T)], as well as a chameleon hash function key pk_CH along with trapdoor td_CH. Then ABM.Gen sets

U = g_2^ũ h_2    V = g_2^ṽ h_2^v̄    Z = g_T^z̃ h_T^{−1}
ek = (g_2, U, V, Z, pk_CH)    ik = ord(g_T)    tk = (g_1, h_1, ord(g_T), ord(h_T), ũ, ṽ, z̃, v̄, td_CH).   (16)

Tags. Tags are of the form t = (t_p, t_a) with an auxiliary tag part t_a ∈ {0,1}*. The core tag part is t_p = ((W_{i,j})_{i,j∈[n]}, R_CH) with W_{i,j} ∈ G_1 and randomness R_CH for CH. Random core tag parts are

thus uniformly random elements of G_1^{n×n}, together with uniform randomness for CH. Even though the following decomposition will generally not be known, we write

W_{i,j} = g_1^{w_{i,j}} h_1^{ŵ_{i,j}}.   (17)

Connected with a tag t as above is the matrix M = (M_{i,j})_{i,j∈[n]} ∈ G_T^{n×n} defined through

∀ i, j ∈ [n], i ≠ j:  M_{i,j} = ê(W_{i,j}, g_2) = g_T^{w_{i,j}}
∀ i ∈ [n]:  M_{i,i} = ê(W_{i,i}, U^e V) · Z = g_T^{w_{i,i}(ṽ+eũ)+z̃} · h_T^{ŵ_{i,i}(v̄+e)−1},   (18)

where e := CH.Eval(pk_CH, ((W_{i,j})_{i,j}, t_a); R_CH) is the hash value of the tag.

Lossy and injective tags. A tag t is injective iff for all i ∈ [n], we have M_{i,i} ∈ G_T \ ⟨g_T⟩, i.e., if every M_{i,i} has a nontrivial h_T-factor. Furthermore, t is lossy iff there exist (r_i, s_i)_{i∈[n]} such that M_{i,j} = g_T^{r_i s_j} for all i, j ∈ [n]. Using (16,17,18), and assuming e ≠ −v̄ for the moment, injectivity is equivalent to ŵ_{i,i} ≠ 1/(v̄+e) mod ord(h_T) for all i. Similarly, lossiness is equivalent to the following three conditions:

1. ŵ_{i,i} = 1/(v̄+e) mod ord(h_T),
2. w_{i,j} = r_i s_j mod ord(g_T) for all i ≠ j, and
3. w_{i,i} = (r_i s_i − z̃)/(ṽ+eũ) mod ord(g_T) for all i ∈ [n]

for suitable (r_i, s_i)_{i∈[n]}. Obviously, the sets of lossy and injective tags are disjoint.

Lossy tag generation. ABM.LTag(tk, t_a), for tk = (g_1, h_1, ord(g_T), ord(h_T), ũ, ṽ, z̃, v̄, td_CH) and t_a ∈ {0,1}*, uniformly selects exponents r_i, s_i ← [ord(g_T)] (for i ∈ [n]). Furthermore, ABM.LTag selects a random image e of CH so that, using td_CH, e can later be explained as an image of an arbitrary preimage. ABM.LTag then sets

W_{i,j} = g_1^{r_i s_j}  (i ≠ j)    W_{i,i} = g_1^{(r_i s_i − z̃)/(ṽ+eũ) mod ord(g_T)} · h_1^{1/(v̄+e) mod ord(h_T)}   (19)

and generates randomness R_CH with CH.Eval(pk_CH, ((W_{i,j})_{i,j}, t_a); R_CH) = e using td_CH. Finally, ABM.LTag outputs the core tag part t_p = ((W_{i,j})_{i,j}, R_CH). By the discussion above, it is clear that the tag t = (t_p, t_a) is lossy.

Evaluation.
ABM.Eval(ek, t, X), with ek = (g_2, U, V, Z, pk_CH), t = (((W_{i,j})_{i,j∈[n]}, R_CH), t_a), and a preimage X = (X_1, ..., X_n) ∈ {0,1}^n, first computes the matrix M through (18). From there, ABM.Eval proceeds similarly to the original lossy trapdoor function construction from [37]. Namely, ABM.Eval computes

Y_i = ∏_{j∈[n]} M_{i,j}^{X_j}

for i ∈ [n] and sets Y := (Y_1, ..., Y_n). Note that if we write Y_i = g_T^{ỹ_i} h_T^{ŷ_i} and use the notation from (16,17), we obtain

ỹ_i = X_i·(w_{i,i}(ṽ+eũ) + z̃) + Σ_{j≠i} X_j·w_{i,j} mod ord(g_T)
ŷ_i = ((v̄+e)·ŵ_{i,i} − 1)·X_i mod ord(h_T).   (20)

Inversion and correctness. ABM.Invert(ik, t, Y), given an inversion key ik = ord(g_T), a tag t, and an image Y = (Y_1, ..., Y_n), determines X = (X_1, ..., X_n) ∈ {0,1}^n as follows: if Y_i^{ord(g_T)} = 1, then set X_i = 0; else set X_i = 1. Intuitively, ABM.Invert thus projects the image elements Y_i onto their h_T-component, and sets X_i = 1 iff Y_i has a nontrivial h_T-component. Given the decomposition (20) of Y_i, we immediately get correctness. Namely, as soon as t is injective, we have (v̄+e)·ŵ_{i,i} − 1 ≠ 0 mod ord(h_T), and thus Y_i^{ord(g_T)} = 1 ⟺ ŷ_i = 0 ⟺ X_i = 0.
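The projection trick in ABM.Invert, raising Y_i to ord(g_T) to annihilate the g_T-component while preserving (non)triviality of the h_T-component, can be illustrated in a toy composite-order subgroup of Z_p^* (all parameters are ours; g plays the role of g_T, h the role of h_T):

```python
# Toy composite-order setting: Z_p^* with p - 1 = 2 * q1 * q2.
p, q1, q2 = 71, 5, 7               # 71 = 2*5*7 + 1, all factors prime

# Find a generator of Z_71^*, then project onto the order-q1/q2 parts.
g0 = next(x for x in range(2, p)
          if all(pow(x, (p - 1) // f, p) != 1 for f in (2, 5, 7)))
g = pow(g0, (p - 1) // q1, p)      # order q1  (stand-in for g_T)
h = pow(g0, (p - 1) // q2, p)      # order q2  (stand-in for h_T)

for y_tilde in range(q1):
    for y_hat in range(q2):
        Y = pow(g, y_tilde, p) * pow(h, y_hat, p) % p
        # Y^{ord(g)} = h^{q1 * y_hat}, and gcd(q1, q2) = 1, so this is
        # trivial exactly when the h-component of Y is trivial.
        assert (pow(Y, q1, p) == 1) == (y_hat == 0)
```

This is exactly the decision ABM.Invert makes per coordinate: X_i = 0 iff Y_i^{ord(g_T)} = 1, which by (20) happens iff ŷ_i = 0.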

5.3 Discussion

Before turning to the security analysis of ABMP, we make two comments about ABMP.

Variation with Waters signatures. We could as well have used Waters signatures [43] instead of BB signatures in the construction of ABMP. In that case, there would be two group elements in the tag per element on the diagonal of the matrix W. The derivation of the matrix M would then proceed as above for elements outside of the diagonal, and would on the diagonal essentially perform the verification of Waters signatures. The details are straightforward, and the most interesting effects would be:

(a) Instead of the SDH assumption, we would use the CDH assumption in ⟨h_1⟩.
(b) Our evasiveness proof would incur a reduction factor of N (the number of issued lossy tags).
(c) The evaluation key would contain a hash function (H_i)^k_{i=0} of linearly many group elements.

Tight (SO-)CCA security proofs with ABMP. As it will turn out, the security reduction of ABMP does not depend on N, the number of used lossy tags. When implementing our SO-CCA secure PKE scheme PKE from Section 6.3 with ABMP, this yields an SO-CCA reduction independent of the number of challenge ciphertexts. However, note that the reduction of Theorem 6.5 still suffers from a factor of q, the number of decryption queries an adversary makes. We will outline how to get rid of this factor using the specific structure of our upcoming evasiveness proof of ABMP.

Namely, the SO-CCA proof incurs a reduction loss of q while ensuring that all decryption queries refer to injective tags. Evasiveness (as in Definition 3.1) only guarantees that an adversary cannot output a single non-injective tag. Hence, our SO-CCA proof is forced to perform a hybrid argument over all q decryption queries during the evasiveness reduction. However, our upcoming evasiveness proof of ABMP can actually verify signatures embedded in tags during the reduction to the SDH assumption.
Hence, our proof actually shows, with the same reduction factors, that no adversary can generate any fresh non-injective tags, even given many tries. Plugging this proof directly into the SO-CCA proof gives a reduction that is independent of both N and q.

5.4 Security analysis

Theorem 5.5 (Security of ABMP). Assume that Assumption 5.1, Assumption 5.3, and Assumption 5.4 hold, and that CH is a chameleon hash function. Then the algorithms described in Section 5.2 form an ABM-LTF ABMP in the sense of Definition 3.1.

We have already commented on the correctness property; it remains to prove lossiness, indistinguishability, and evasiveness.

Lossiness. Our proof of lossiness proceeds similarly to the one by Peikert and Waters [37]:

Lemma 5.6 (Lossiness of ABMP). ABMP is (n − log₂(ord(g_T)))-lossy.

Proof. Assume an evaluation key ek = (g_2, U, V, Z, pk_CH) and a tag t. We may assume that t is lossy, so that for the corresponding matrix M given by (18), we have M_{i,j} = g_T^{r_i s_j} for all i, j ∈ [n]. Hence, evaluating via ABM.Eval(ek, t, X) gives Y = (Y_1, ..., Y_n) with

Y_i = ∏_{j∈[n]} M_{i,j}^{X_j} = ∏_{j∈[n]} g_T^{X_j r_i s_j} = g_T^{r_i·(Σ_{j∈[n]} X_j s_j mod ord(g_T))}.

Thus, Y depends only on Σ_{j∈[n]} X_j s_j mod ord(g_T), and lossiness as claimed follows.

Indistinguishability. To prove lossy and random tags indistinguishable, we use, not too surprisingly, the DDH and subgroup assumptions. Namely, we first exchange the DDH-related g_1-factors in lossy tags by random factors, and then randomize the now truly blinded h_1-factors. The main challenge in the proof is to prepare essentially the same experiment with different kinds of setup information provided by the respective assumptions.
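The collapse in Lemma 5.6 can also be checked numerically at the exponent level (toy parameters are ours; q stands for ord(g_T)): on a lossy tag, the entire image is a function of the single value Σ_j X_j s_j mod ord(g_T), so the 2^n possible inputs map to at most ord(g_T) images.

```python
from itertools import product

# Exponent-level toy check of Lemma 5.6: with M_{i,j} = g_T^{r_i s_j},
# the image Y is determined by t = sum_j X_j s_j mod q alone.
q, n = 11, 4                       # q plays ord(g_T); domain {0,1}^n
r = [3, 5, 7, 2]                   # toy r_i
s = [4, 9, 1, 6]                   # toy s_j

def image(X):                      # exponents of Y_i = prod_j M_{i,j}^{X_j}
    t = sum(Xj * sj for Xj, sj in zip(X, s)) % q
    return tuple(ri * t % q for ri in r)

images = {image(X) for X in product((0, 1), repeat=n)}
assert len(images) <= q            # at most ord(g_T) images for 2^n inputs
assert len(images) < 2 ** n        # so collisions (lossiness) must occur
```

With n much larger than log₂(ord(g_T)), this pigeonhole collapse is exactly the (n − log₂(ord(g_T))) bits of lossiness claimed by the lemma.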