Tightly CCA-Secure Encryption without Pairings


Romain Gay^1, Dennis Hofheinz^2, Eike Kiltz^3, and Hoeteck Wee^1

^1 ENS, Paris, France. {rgay,wee}@di.ens.fr
^2 Karlsruhe Institute of Technology, Karlsruhe, Germany. Dennis.Hofheinz@kit.edu
^3 Ruhr-Universität Bochum, Bochum, Germany. eike.kiltz@rub.de

This is the extended version of a EUROCRYPT 2016 paper of the same name. Compared to the proceedings version, this version offers a detailed description of (designated-verifier and publicly verifiable) NIZK proof systems, and of course full proofs.
* CNRS. Supported by ERC Project aSCEND (639554).
* Supported by DFG grants HO 4534/2-2, HO 4534/4-1.
* Partially supported by DFG grant KI 795/4-1 and ERC Project ERCC (FP7/615074).
* CNRS and Columbia University. Partially supported by the Alexander von Humboldt Foundation, NSF Award CNS, and ERC Project aSCEND (639554).

Abstract. We present the first CCA-secure public-key encryption scheme based on DDH where the security loss is independent of the number of challenge ciphertexts and the number of decryption queries. Our construction extends also to the standard k-Lin assumption in pairing-free groups, whereas all prior constructions starting with Hofheinz and Jager (Crypto '12) rely on the use of pairings. Moreover, our construction improves upon the concrete efficiency of existing schemes, reducing the ciphertext overhead by about half (to only 3 group elements under DDH), in addition to eliminating the use of pairings. We also show how to use our techniques in the NIZK setting. Specifically, we construct the first tightly simulation-sound designated-verifier NIZK for linear languages without pairings. Using pairings, we can turn our construction into a highly optimized publicly verifiable NIZK with tight simulation-soundness.

1 Introduction

The most basic security guarantee we require of a public-key encryption scheme is that of semantic security against chosen-plaintext attacks (CPA) [15]: it is infeasible to learn anything about the plaintext from the ciphertext. On the other hand, there is a general consensus within the cryptographic research community that in virtually every practical application, we require semantic security against adaptive chosen-ciphertext attacks (CCA) [31, 13], wherein an adversary is given access to decryptions of ciphertexts of her choice.

In this work, we focus on the issue of security reduction and security loss in the construction of CPA- and CCA-secure public-key encryption from the DDH assumption. Suppose we have such a scheme along with a security reduction showing that attacking the scheme in time t with success probability ε implies breaking the DDH assumption in time roughly t with success probability ε/L; we refer to L as the security loss. In general, L would depend on the security parameter λ as well as the number of challenge ciphertexts Q_enc and the number of decryption queries Q_dec, and we say that we have a tight security reduction if L depends only on the security parameter and is independent of both Q_enc and Q_dec. Note that for typical settings of parameters (e.g., λ = 80 and Q_enc, Q_dec ≈ 2^20, or even Q_enc, Q_dec ≈ 2^30 in truly large settings), λ is much smaller than Q_enc and Q_dec.

In the simpler setting of CPA-secure encryption, the ElGamal encryption scheme already has a tight security reduction to the DDH assumption [28, 6], thanks to the random self-reducibility of DDH. In the case of CCA-secure encryption, the best result is still the seminal Cramer-Shoup encryption scheme [11], which achieves security loss Q_enc.^4 This raises the following open problem:

Does there exist a CCA-secure encryption scheme with a tight security reduction to the DDH assumption?

Hofheinz and Jager [17] gave an affirmative answer to this problem under stronger (and pairing-related) assumptions, notably the 2-Lin assumption in bilinear groups, albeit with large ciphertexts and secret keys; a series of follow-up works [24, 26, 5, 16] leveraged techniques introduced in the context of tightly-secure IBE [10, 7, 19] to reduce the size of ciphertexts and secret keys to a relatively small constant. However, all of these works rely crucially on the use of pairings, and seem to shed little insight on constructions under the standard DDH assumption; in fact, a pessimist may interpret the recent works as a strong indication that the use of pairings is likely to be necessary for tightly CCA-secure encryption.

We may then restate the open problem as eliminating the use of pairings in these prior CCA-secure encryption schemes while still preserving a tight security reduction. From a theoretical standpoint, this is important because an affirmative answer would yield tightly CCA-secure encryption under qualitatively weaker assumptions, and in addition, shed insight into the broader question of whether tight security comes at the cost of qualitatively stronger assumptions. Eliminating the use of pairings is also important in practice, as it allows us to instantiate the underlying assumption over a much larger class of groups that admit more efficient group operations and more compact representations, and to avoid the use of expensive pairing operations. Similarly, tight reductions matter in practice because as L increases, we should increase the size of the underlying groups in order to compensate for the security loss, which in turn increases the running time of the implementation. Note that the impact on performance is quite substantial, as exponentiation in an r-bit group takes time roughly O(r^3).

1.1 Our Results

We settle the main open problem affirmatively: we construct a tightly CCA-secure encryption scheme from the DDH assumption without pairings. Moreover, our construction improves upon the concrete efficiency of existing schemes, reducing the ciphertext overhead by about half, in addition to eliminating the use of pairings. We refer to Figure 2 for a comparison with prior works.

Overview of our construction. Fix an additively written group G of order q. We rely on implicit representation notation [14] for group elements: for a fixed generator P of G and for a matrix M ∈ Z_q^{n×t}, we define [M] := M·P ∈ G^{n×t}, where multiplication is done component-wise. We rely on the D_k-MDDH Assumption [14], which stipulates that given [M] drawn from a matrix distribution D_k over Z_q^{(k+1)×k}, [Mr] for a random vector r ∈ Z_q^k is computationally indistinguishable from a uniform vector in G^{k+1}; this is a generalization of the k-Lin Assumption.

We outline the construction under the k-Lin assumption over G, of which the DDH assumption is a special case corresponding to k = 1. In this overview, we will consider a weaker notion of security, namely tag-based KEM security against plaintext-check attacks (PCA) [30]. In the PCA security experiment, the adversary gets no decryption oracle (as with CCA security), but a PCA oracle that takes as input a tag and a ciphertext/plaintext pair and checks whether the ciphertext decrypts to the plaintext. Furthermore, we restrict the adversary to only query the PCA oracle on tags different from those used in the challenge ciphertexts.

^4 We ignore contributions to the security loss that depend only on a statistical security parameter.

PCA security is strictly weaker than the CCA security we actually strive for, but allows us to present our solution in a clean and simple way. (We show how to obtain full CCA security separately.)

The starting point of our construction is the Cramer-Shoup KEM. The public key is given by pk := ([M], [M^T k_0], [M^T k_1]) for M <-_R Z_q^{(k+1)×k}. On input pk and a tag τ, the encryption algorithm outputs the ciphertext/plaintext pair

  ([y], [z]) = ([Mr], [r^T M^T k_τ]),    (1)

where k_τ = k_0 + τ·k_1 and r <-_R Z_q^k. Decryption relies on the fact that y^T k_τ = r^T M^T k_τ. The KEM is PCA-secure under k-Lin, with a security loss that depends on the number of ciphertexts Q (via a hybrid argument) but is independent of the number of PCA queries [11, 1].

Following the randomized Naor-Reingold paradigm introduced by Chen and Wee for tightly secure IBE [10], our starting point is (1), where we replace k_τ = k_0 + τ·k_1 with

  k_τ = Σ_{j=1}^λ k_{j,τ_j}   and   pk := ([M], ([M^T k_{j,b}])_{j=1,...,λ, b=0,1}),

where (τ_1, ..., τ_λ) denotes the binary representation of the tag τ ∈ {0,1}^λ. Following [10], we want to analyze this construction by a sequence of games in which we first replace [y] in the challenge ciphertexts by uniformly random group elements via random self-reducibility of MDDH (k-Lin), and then incrementally replace k_τ in both the challenge ciphertexts and in the PCA oracle by k_τ + m^⊥ RF(τ), where RF is a truly random function and m^⊥ is a random element from the kernel of M, i.e., M^T m^⊥ = 0. Concretely, in Game i, we will replace k_τ with k_τ + m^⊥ RF_i(τ_{|i}), where RF_i is a random function on {0,1}^i applied to the i-bit prefix τ_{|i} of τ. We proceed to outline the two main ideas needed to carry out this transition. Looking ahead, note that once we reach Game λ, we would have replaced k_τ with k_τ + m^⊥ RF(τ), upon which security follows from a straightforward information-theoretic argument (and the fact that ciphertexts and decryption queries carry pairwise different τ).

First idea. First, we show how to transition from Game i to Game i+1, under the restriction that the adversary is only allowed to query the encryption oracle on tags whose (i+1)-st bit is 0; we show how to remove this unreasonable restriction later. Here, we rely on an information-theoretic argument similar to that of Cramer and Shoup to increase the entropy from RF_i to RF_{i+1}. This is in contrast to prior works, which rely on a computational argument; note that the latter requires encoding secret keys as group elements and thus a pairing to carry out decryption. More precisely, we pick a random function RF'_i on {0,1}^i, and implicitly define RF_{i+1} as follows:

  RF_{i+1}(τ_{|i+1}) := RF_i(τ_{|i})   if τ_{i+1} = 0
                        RF'_i(τ_{|i})  if τ_{i+1} = 1

Observe that all of the challenge ciphertexts leak no information about RF'_i or k_{i+1,1}, since they all correspond to tags whose (i+1)-st bit is 0. To handle a PCA query (τ, [y], [z]), we proceed via a case analysis:

- if τ_{i+1} = 0, then k_τ + m^⊥ RF_{i+1}(τ_{|i+1}) = k_τ + m^⊥ RF_i(τ_{|i}), and the PCA oracle returns the same value in both Games i and i+1;

- if τ_{i+1} = 1 and y lies in the span of M, we have

  y^T m^⊥ = 0  ==>  y^T (k_τ + m^⊥ RF_i(τ_{|i})) = y^T (k_τ + m^⊥ RF_{i+1}(τ_{|i+1})),

  and again the PCA oracle returns the same value in both Games i and i+1;
- if τ_{i+1} = 1 and y lies outside the span of M, then y^T k_{i+1,1} is uniformly random given M, M^T k_{i+1,1}. (Here, we crucially use that the adversary does not query encryptions with τ_{i+1} = 1, which ensures that the challenge ciphertexts do not leak additional information about k_{i+1,1}.) This means that y^T k_τ is uniformly random from the adversary's viewpoint, and therefore the PCA oracle will reject with high probability in both Games i and i+1. (At this point, we crucially rely on the fact that the PCA oracle only outputs a single check bit and not all of k_τ + m^⊥ RF(τ).)

Via a hybrid argument, we may deduce that the distinguishing advantage between Games i and i+1 is at most Q/q, where Q is the number of PCA queries.

Second idea. Next, we remove the restriction on the encryption queries using an idea of Hofheinz, Koch and Striecks [19] for tightly-secure IBE in the multi-ciphertext setting, and its instantiation in prime-order groups [16]. The idea is to create two independent copies of (m^⊥, RF_i); we use one to handle encryption queries on tags whose (i+1)-st bit is 0, and the other to handle those whose (i+1)-st bit is 1. We call these two copies (M*_0, RF^(0)_i) and (M*_1, RF^(1)_i), where M^T M*_0 = M^T M*_1 = 0.

Concretely, we replace M <-_R Z_q^{(k+1)×k} with M <-_R Z_q^{3k×k}. We decompose Z_q^{3k} into the spans of the respective matrices M, M_0, M_1, and we will also decompose the span of M^⊥ ∈ Z_q^{3k×2k} into that of M*_0, M*_1. Similarly, we decompose M^⊥ RF_i(τ_{|i}) into M*_0 RF^(0)_i(τ_{|i}) + M*_1 RF^(1)_i(τ_{|i}).

Fig. 1. [Diagram: M, M_0, M_1 form a basis for Z_q^{3k}, and M*_0, M*_1 form a basis for span(M^⊥).] Solid lines mean orthogonal, that is: M^T M*_0 = M_1^T M*_0 = 0 = M^T M*_1 = M_0^T M*_1.

We then refine the prior transition from Game i to Game i+1 as follows:

- Game i.0 (= Game i): pick y <-_R Z_q^{3k} for ciphertexts, and replace k_τ with k_τ + M*_0 RF^(0)_i(τ_{|i}) + M*_1 RF^(1)_i(τ_{|i});
- Game i.1: replace y <-_R Z_q^{3k} with y <-_R span(M, M_{τ_{i+1}});
- Game i.2: replace RF^(0)_i(τ_{|i}) with RF^(0)_{i+1}(τ_{|i+1});
- Game i.3: replace RF^(1)_i(τ_{|i}) with RF^(1)_{i+1}(τ_{|i+1});
- Game i.4 (= Game i+1): replace y <-_R span(M, M_{τ_{i+1}}) with y <-_R Z_q^{3k}.

For the transition from Game i.0 to Game i.1, we rely on the fact that the uniform distributions over Z_q^{3k} and span(M, M_{τ_{i+1}}), encoded in the group, are computationally indistinguishable, even given a random basis for span(M^⊥) (in the clear). This extends to the setting with multiple samples, with a tight reduction to the D_k-MDDH Assumption, independent of the number of samples. For the transition from Game i.1 to i.2, we rely on an information-theoretic argument like the one we just outlined, replacing span(M) with span(M, M_1) and m^⊥ with M*_0 in the case analysis.

In particular, we will exploit the fact that if y lies outside span(M, M_1), then y^T k_{i+1,1} is uniformly random even given M, M^T k_{i+1,1}, M_1, M_1^T k_{i+1,1}. The transition from Game i.2 to i.3 is completely analogous.

From PCA to CCA. Using standard techniques from [11, 23, 21, 8, 4], we could transform our basic tag-based PCA-secure scheme into a full-fledged CCA-secure encryption scheme by adding another hash proof system (or an authenticated symmetric encryption scheme) and a one-time signature scheme. However, this would incur an additional overhead of several group elements in the ciphertext. Instead, we show how to directly modify our tag-based PCA-secure scheme to obtain a more efficient CCA-secure scheme with the minimal additional overhead of a single symmetric-key authenticated encryption. In particular, the overall ciphertext overhead in our tightly CCA-secure encryption scheme is merely one group element more than that of the best known non-tight schemes [23, 18].

To encrypt a message M in the CCA-secure encryption scheme, we will (i) pick a random y as in the tag-based PCA scheme, (ii) derive a tag τ from y, and (iii) encrypt M using a one-time authenticated encryption under the KEM key [y^T k_τ]. The naive approach is to derive the tag τ by hashing [y] ∈ G^{3k}, as in [23]. However, this creates a circularity in Game i.1, where the distribution of [y] depends on the tag. Instead, we will derive the tag τ by hashing [ȳ] ∈ G^k, where ȳ ∈ Z_q^k consists of the top k entries of y ∈ Z_q^{3k}. We then modify M_0, M_1 so that the top k rows of both matrices are zero, which avoids the circularity issue. In the proof of security, we will also rely on the fact that for any y_0, y_1 ∈ Z_q^{3k}, if ȳ_0 = ȳ_1 and y_0 ∈ span(M), then either y_0 = y_1 or y_1 ∉ span(M). This allows us to deduce that if the adversary queries the CCA oracle on a ciphertext which shares the same tag as some challenge ciphertext, then the CCA oracle will reject with overwhelming probability.

Alternative viewpoint. Our construction can also be viewed as applying the BCHK IBE-to-PKE transform [8] to the scheme from [19], and then writing the exponents of the secret keys in the clear, thereby avoiding the pairing. This means that we can no longer apply a computational assumption and the randomized Naor-Reingold argument to the secret-key space. Indeed, we replace this with an information-theoretic Cramer-Shoup-like argument as outlined above.

Prior approaches. Several approaches to construct tightly CCA-secure PKE schemes exist. First, the schemes of [17, 2, 3, 25, 24, 26] construct a tightly secure NIZK scheme from a tightly secure signature scheme, and then use the tightly secure NIZK in a CCA-secure PKE scheme following the Naor-Yung double-encryption paradigm [29, 13]. Since these approaches build on the public verifiability of the used NIZK scheme (in order to faithfully simulate a decryption oracle), their reliance on a pairing seems inherent. Next, the works of [10, 7, 19, 5, 16] used a (Naor-Reingold-based) MAC instead of a signature scheme to design tightly secure IBE schemes. Those IBE schemes can then be converted (using the BCHK transformation [8]) into tightly CCA-secure PKE schemes. However, the derived PKE schemes still rely on pairings, since the original IBE schemes do (and the BCHK transformation does not remove the reliance on pairings).

In contrast, our approach directly fuses a Naor-Reingold-like randomization argument with the encryption process. We are able to do so since we substitute a computational randomization argument (as used in the latter line of works) with an information-theoretic one, as described above. Hence, we can apply that argument to exponents rather than group elements.
This enables us to trade pairing operations for exponentiations in our scheme.

Efficiency comparison with non-tightly secure schemes. We finally mention that our DDH-based scheme compares favorably even with the most efficient (non-tightly) CCA-secure DDH-based encryption schemes [23, 18].

Reference            | |pk|  | |ct| - |m|        | security loss | assumption         | pairing
CS98 [11]            | O(1)  | 3                 | O(Q)          | DDH                | no
KD04, HK07 [23, 18]  | O(1)  | 2                 | O(Q)          | DDH                | no
HJ12 [17]            | O(1)  | O(λ)              | O(1)          | 2-Lin              | yes
LPJY15 [24, 26]      | O(λ)  | 47                | O(λ)          | 2-Lin              | yes
AHY15 [5]            | O(λ)  | 12                | O(λ)          | 2-Lin              | yes
GCDCT15 [16]         | O(λ)  | 10 (resp. 6k + 4) | O(λ)          | SXDH (resp. k-Lin) | yes
Ours (Section 4)     | O(λ)  | 3 (resp. 3k)      | O(λ)          | DDH (resp. k-Lin)  | no

Fig. 2. Comparison amongst CCA-secure encryption schemes, where Q is the number of ciphertexts, |pk| denotes the size (i.e., the number of group elements, or exponents of group elements) of the public key, and |ct| - |m| denotes the ciphertext overhead, ignoring smaller contributions from symmetric-key encryption. We omit [19] from this table since we only focus on prime-order groups here.

To make things concrete, assume λ = 80 and a setting with Q_enc = Q_dec = 2^30. The best known reductions for the schemes of [23, 18] lose a factor of Q_enc = 2^30, whereas our scheme loses a factor of about 4λ ≈ 2^9. Hence, the group size for [23, 18] should be at least 2^{2·(80+30)} = 2^{220}, compared to 2^{2·(80+9)} = 2^{178} in our case. Thus, the ciphertext overhead (ignoring the symmetric encryption part) in our scheme is 3 · 178 = 534 bits, which is close to the 2 · 220 = 440 bits with [23, 18].^5

Perhaps even more interestingly, we can compare the computational efficiency of encryption in this scenario. For simplicity, we only count exponentiations and assume a naive square-and-multiply-based exponentiation with no further multi-exponentiation optimizations.^6 Encryption in [23, 18] takes about 3.5 exponentiations (where we count an exponentiation with a (λ + log_2(Q_enc + Q_dec))-bit hash value^7 as 0.5 exponentiations). In our scheme, we have about 4.67 exponentiations (where we count the computation of [M^T k_τ], which consists of 2λ multiplications, as 0.67 exponentiations). Since exponentiation (under our assumptions) takes time cubic in the bitlength, we get that encryption with our scheme is actually about 29% less expensive than with [23, 18]. However, of course we should also note that the public and secret key in our scheme are significantly larger (e.g., 4λ + 3 = 323 group elements in pk) than with [23, 18] (4 group elements in pk). (The short Python sketch below reproduces this arithmetic.)

Extension: NIZK arguments. We also obtain tightly simulation-sound non-interactive zero-knowledge (NIZK) arguments from our encryption scheme in a semi-generic way. Let us start with any designated-verifier quasi-adaptive NIZK (short: DVQANIZK) argument system Π for a given language. Recall that in a designated-verifier NIZK, proofs can only be verified with a secret verification key, and soundness only holds against adversaries who do not know that key. Furthermore, quasi-adaptivity means that the language has to be fixed at setup time of the scheme. Let Π_PKE be the variant of Π in which proofs are encrypted using a CCA-secure PKE scheme PKE. Public and secret key of PKE are of course made part of the CRS and verification key, respectively.

Observe that Π_PKE enjoys simulation-soundness, assuming that simulated proofs are simply encryptions of random plaintexts. Indeed, the CCA security of PKE guarantees that authentic Π_PKE-proofs can be substituted with simulated ones, while being able to verify (using a decryption oracle) a purported Π_PKE-proof generated by an adversary. Furthermore, if PKE is tightly secure, then so is Π_PKE.

^5 In this calculation, we do not consider the symmetric authenticated encryption of the actual plaintext (and a corresponding MAC value), which is the same with [23, 18] and our scheme.
^6 Here, optimizations would improve the schemes of [23, 18] and ours similarly, since the schemes are very similar.
^7 It is possible to prove the security of [23, 18] using a target-collision-resistant hash function, such that |τ| = λ.
However, in the multi-user setting, a hybrid argument is required, such that the output size of the hash function will have to be increased to at least |τ| = λ + log_2(Q_enc + Q_dec).
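The back-of-the-envelope comparison above can be reproduced mechanically. The following short Python sketch (an illustration only; the constants are exactly those used in the comparison) recomputes the group sizes, ciphertext overheads, and the roughly 29% saving in encryption cost.

```python
# Reproduces the concrete comparison with [23, 18] at lambda = 80, Q_enc = Q_dec = 2^30,
# assuming naive square-and-multiply exponentiation with cubic cost (illustration only).
from math import ceil, log2

lam = 80
loss_kd_hk = 2**30          # security loss of [23, 18]: Q_enc
loss_ours = 4 * lam         # our loss: about 4*lambda, i.e. roughly 2^9

bits_kd_hk = 2 * (lam + ceil(log2(loss_kd_hk)))   # 2*(80+30) = 220-bit group order
bits_ours = 2 * (lam + ceil(log2(loss_ours)))     # 2*(80+ 9) = 178-bit group order

overhead_kd_hk = 2 * bits_kd_hk   # 2 group elements -> 440 bits
overhead_ours = 3 * bits_ours     # 3 group elements -> 534 bits

# encryption cost = (number of exponentiations) * (bitlength of the group order)^3
cost_kd_hk = 3.5 * bits_kd_hk**3
cost_ours = 4.67 * bits_ours**3

print(bits_kd_hk, bits_ours)            # 220 178
print(overhead_kd_hk, overhead_ours)    # 440 534
print(f"ours / [23,18] = {cost_ours / cost_kd_hk:.2f}")   # ~0.71, i.e. about 29% cheaper
```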

When using a hash proof system for Π and our encryption scheme for PKE, this immediately yields a tightly simulation-sound DVQANIZK for linear languages (i.e., languages of the form {[Mr] | r ∈ Z_q^t} for some matrix M ∈ Z_q^{n×t} with t < n) that does not require pairings. We stress that our DVQANIZK is tightly secure in a setting with many simulated proofs and many adversarial verification queries.

Using the semi-generic transformation of [22], we can then derive a tightly simulation-sound QANIZK proof system (with public verification) that, however, relies on pairings. We note that the transformation of [22] only requires a DVQANIZK that is secure against a single adversarial verification query, since the pairing enables the public verifiability of proofs. Hence, we can first optimize and trim down our DVQANIZK (such that only a single adversarial verification query is supported), and then apply the transformation. This yields a QANIZK with particularly compact proofs. See Figure 3 for a comparison with relevant existing proof systems.

Reference          | type     | |crs|       | |π|          | sec. loss | assumption | pairing
CCS09 [9]          | NIZK     | O(1)        | 2n + 6t + 52 | O(Q_sim)  | 2-Lin      | yes
HJ12 [17]          | NIZK     | O(1)        | 500          | O(1)      | 2-Lin      | yes
LPJY14 [25]        | QANIZK   | O(n + λ)    | 20           | O(Q_sim)  | 2-Lin      | yes
KW15 [22]          | QANIZK   | O(kn)       | 2k + 2       | O(Q_sim)  | k-Lin      | yes
LPJY15 [26]        | QANIZK   | O(n + λ)    | 42           | O(λ)      | 2-Lin      | yes
Ours (Section 6.2) | DVQANIZK | O(t + kλ)   | 3k + 1       | O(λ)      | k-Lin      | no
Ours (Section 6.3) | QANIZK   | O(k²λ + kn) | 2k + 1       | O(λ)      | k-Lin      | yes

Fig. 3. (DV)QANIZK schemes for subspaces of G^n of dimension t < n. |crs| and |π| denote the size (in group elements) of the CRS and of proofs. Q_sim is the number of simulated proofs in the simulation-soundness experiment. The scheme from [22] (as well as our own schemes) can also be generalized to matrix assumptions [14], at the cost of a larger CRS.

Roadmap. We recall some notation and basic definitions (including those concerning our algebraic setting and tightly secure encryption) in Section 2. Section 3 presents our basic PCA-secure encryption scheme and represents the core of our results. In Section 4, we present our optimized CCA-secure PKE scheme. Our NIZK-related applications are presented in Section 6.

2 Preliminaries

2.1 Notation

If x ∈ B^n, then |x| denotes the length n of the vector. Further, x <-_R B denotes the process of sampling an element x from the set B uniformly at random. For any bit string τ ∈ {0,1}^λ, we denote by τ_i the i-th bit of τ. We denote by λ the security parameter, and by negl(·) any negligible function of λ. For any matrix A ∈ Z_q^{l×k} with l > k, Ā ∈ Z_q^{k×k} denotes the upper square matrix of A, and A̲ ∈ Z_q^{(l−k)×k} denotes the lower l − k rows of A. With span(A) := {Ar | r ∈ Z_q^k} ⊂ Z_q^l, we denote the span of A.

2.2 Collision-resistant hashing

A hash function generator is a PPT algorithm H that, on input 1^λ, outputs an efficiently computable function H : {0,1}* → {0,1}^λ.

Definition 1 (Collision Resistance). We say that a hash function generator H outputs collision-resistant functions H if for all PPT adversaries A,

  Adv^cr_H(A) := Pr[x ≠ x' ∧ H(x) = H(x') | H <-_R H(1^λ), (x, x') <-_R A(1^λ, H)] = negl(λ).
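As a concrete illustration of this syntax (an assumption on our part, not an instantiation prescribed by the paper), the sketch below models a hash function generator whose output function truncates SHA-256 to λ bits.

```python
# Minimal sketch of a hash function generator H(1^lambda) returning H : {0,1}* -> {0,1}^lambda,
# modelled here as truncated SHA-256 (illustrative choice; lambda assumed to be a multiple of 8).
import hashlib

def hash_gen(lam: int):
    assert 0 < lam <= 256 and lam % 8 == 0
    def H(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()[: lam // 8]   # keep the first lambda bits
    return H

H = hash_gen(128)
print(H(b"example input").hex())
```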

2.3 Prime-order groups

Let GGen be a probabilistic polynomial-time (PPT) algorithm that on input 1^λ returns a description G = (G, q, P) of an additive cyclic group G of order q, for a λ-bit prime q, whose generator is P.

We use the implicit representation of group elements as introduced in [14]. For a ∈ Z_q, define [a] = aP ∈ G as the implicit representation of a in G. More generally, for a matrix A = (a_ij) ∈ Z_q^{n×m} we define [A] as the implicit representation of A in G:

  [A] := ( a_11·P ... a_1m·P
           ...
           a_n1·P ... a_nm·P )  ∈ G^{n×m}

We will always use this implicit notation of elements in G, i.e., we let [a] ∈ G be an element in G. Note that from [a] ∈ G it is generally hard to compute the value a (discrete logarithm problem in G). Obviously, given [a], [b] ∈ G and a scalar x ∈ Z_q, one can efficiently compute [ax] ∈ G and [a + b] ∈ G.

2.4 Matrix Diffie-Hellman Assumption

We recall the definitions of the Matrix Decision Diffie-Hellman (MDDH) Assumption [14].

Definition 2 (Matrix Distribution). Let k, l ∈ N with l > k. We call D_{l,k} a matrix distribution if it outputs matrices in Z_q^{l×k} of full rank k in polynomial time. We write D_k := D_{k+1,k}. Without loss of generality, we assume the first k rows of A <-_R D_{l,k} form an invertible matrix.

The D_{l,k}-Matrix Diffie-Hellman problem is to distinguish the two distributions ([A], [Aw]) and ([A], [u]), where A <-_R D_{l,k}, w <-_R Z_q^k and u <-_R Z_q^l.

Definition 3 (D_{l,k}-Matrix Diffie-Hellman Assumption, D_{l,k}-MDDH). Let D_{l,k} be a matrix distribution. We say that the D_{l,k}-Matrix Diffie-Hellman (D_{l,k}-MDDH) Assumption holds relative to GGen if for all PPT adversaries A,

  Adv^mddh_{D_{l,k},GGen}(A) := |Pr[A(G, [A], [Aw]) = 1] − Pr[A(G, [A], [u]) = 1]| = negl(λ),

where the probability is taken over G <-_R GGen(1^λ), A <-_R D_{l,k}, w <-_R Z_q^k, u <-_R Z_q^l.

For each k ≥ 1, [14] specifies distributions L_k, SC_k, C_k (and others) over Z_q^{(k+1)×k} such that the corresponding D_k-MDDH assumptions are generically secure in bilinear groups and form a hierarchy of increasingly weaker assumptions. L_k-MDDH is the well-known k-Linear Assumption k-Lin, with 1-Lin = DDH. In this work we are mostly interested in the uniform matrix distribution U_{l,k}.

Definition 4 (Uniform distribution). Let l, k ∈ N with l > k. We denote by U_{l,k} the uniform distribution over all full-rank l × k matrices over Z_q. Let U_k := U_{k+1,k}.

Lemma 1 (U_k-MDDH <=> U_{l,k}-MDDH). Let l, k ∈ N with l > k. For any PPT adversary A, there exists an adversary B (and vice versa) such that T(B) ≈ T(A) and Adv^mddh_{U_{l,k},GGen}(A) = Adv^mddh_{U_k,GGen}(B).

Proof. This follows from the simple fact that a U_{l,k}-MDDH instance ([A], [z]) can be transformed into a U_k-MDDH instance ([A'] = [TA], [z'] = [Tz]) for a random (k+1) × l matrix T. If z = Aw, then z' = TAw = A'w; if z is uniform, so is z'. Similarly, a U_k-MDDH instance ([A'], [z']) can be transformed into a U_{l,k}-MDDH instance ([A] = [T'A'], [z] = [T'z']) for a random l × (k+1) matrix T'.

Among all possible matrix distributions D_{l,k}, the uniform matrix distribution U_k is the hardest possible instance, so in particular k-Lin => U_k-MDDH.

Lemma 2 (D_{l,k}-MDDH => U_k-MDDH, [14]). Let D_{l,k} be a matrix distribution. For any PPT adversary A, there exists an adversary B such that T(B) ≈ T(A) and Adv^mddh_{D_{l,k},GGen}(A) = Adv^mddh_{U_k,GGen}(B).

Let Q ≥ 1. For W <-_R Z_q^{k×Q}, U <-_R Z_q^{l×Q}, we consider the Q-fold D_{l,k}-MDDH Assumption, which consists in distinguishing the distributions ([A], [AW]) from ([A], [U]). That is, a challenge for the Q-fold D_{l,k}-MDDH Assumption consists of Q independent challenges of the D_{l,k}-MDDH Assumption (with the same A but different randomness w). In [14] it is shown that the two problems are equivalent, where (for Q ≥ l − k) the reduction loses a factor l − k. In combination with Lemma 1 we obtain the following tighter version for the special case of D_{l,k} = U_{l,k}.

Lemma 3 (Random self-reducibility of U_{l,k}-MDDH, [14]). Let l, k, Q ∈ N with l > k. For any PPT adversary A, there exists an adversary B such that T(B) ≈ T(A) + Q·poly(λ) with poly(λ) independent of T(A), and

  Adv^{Q-mddh}_{U_{l,k},GGen}(A) ≤ Adv^mddh_{U_{l,k},GGen}(B) + 1/(q−1),

where Adv^{Q-mddh}_{U_{l,k},GGen}(B) := |Pr[B(G, [A], [AW]) = 1] − Pr[B(G, [A], [U]) = 1]| and the probability is over G <-_R GGen(1^λ), A <-_R U_{l,k}, W <-_R Z_q^{k×Q}, U <-_R Z_q^{l×Q}.

2.5 Public-Key Encryption

Definition 5 (PKE). A public-key encryption (PKE) scheme consists of three PPT algorithms PKE = (Gen_PKE, Enc_PKE, Dec_PKE):
- The probabilistic key generation algorithm Gen_PKE(1^λ) generates a pair of public and secret keys (pk, sk).
- The probabilistic encryption algorithm Enc_PKE(pk, M) returns a ciphertext ct.
- The deterministic decryption algorithm Dec_PKE(pk, sk, ct) returns a message M or ⊥, where ⊥ is a special rejection symbol.

We define the following properties:

Perfect correctness. For all λ and all messages M, we have

  Pr[ Dec_PKE(pk, sk, ct) = M | (pk, sk) <-_R Gen_PKE(1^λ); ct <-_R Enc_PKE(pk, M) ] = 1.

Multi-ciphertext CCA security [6]. For any adversary A, we define

  Adv^{ind-cca}_PKE(A) := |Pr[b = b' | b' <-_R A^{Setup, DecO(·), EncO(·,·)}(1^λ)] − 1/2|,

where:
- Setup sets C_enc := ∅, samples (pk, sk) <-_R Gen_PKE(1^λ) and b <-_R {0,1}, and returns pk. Setup must be called once at the beginning of the game.
- DecO(ct) returns Dec_PKE(pk, sk, ct) if ct ∉ C_enc, and ⊥ otherwise.
- On input two messages M_0 and M_1 of equal length, EncO(M_0, M_1) returns ct <-_R Enc_PKE(pk, M_b) and sets C_enc := C_enc ∪ {ct}.

We say PKE is IND-CCA secure if for all PPT adversaries A, the advantage Adv^{ind-cca}_PKE(A) is a negligible function of λ.
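To make the multi-ciphertext IND-CCA experiment concrete, the following sketch implements the Setup/EncO/DecO oracles of Definition 5 around an abstract PKE object. The `pke` interface (gen/enc/dec methods) is a hypothetical placeholder introduced for illustration, not part of the paper.

```python
# Sketch of the multi-ciphertext IND-CCA experiment of Definition 5.
# `pke` is any object providing gen(), enc(pk, m) and dec(pk, sk, ct); ciphertexts are
# assumed hashable (e.g. bytes) so they can be stored in the set C_enc.
import secrets

class INDCCAGame:
    def __init__(self, pke):
        self.pke = pke

    def setup(self):
        self.c_enc = set()                      # C_enc := empty set
        self.pk, self.sk = self.pke.gen()
        self.b = secrets.randbits(1)            # b <-R {0,1}
        return self.pk

    def enc_oracle(self, m0: bytes, m1: bytes):
        assert len(m0) == len(m1)               # messages of equal length
        ct = self.pke.enc(self.pk, m1 if self.b else m0)   # encrypt M_b
        self.c_enc.add(ct)                      # C_enc := C_enc u {ct}
        return ct

    def dec_oracle(self, ct):
        if ct in self.c_enc:                    # challenge ciphertexts are never decrypted
            return None                         # "bottom"
        return self.pke.dec(self.pk, self.sk, ct)

    def finalize(self, b_guess: int) -> bool:
        return b_guess == self.b                # the adversary wins if b' = b
```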

2.6 Key-Encapsulation Mechanism

Definition 6 (Tag-based KEM). A tag-based key-encapsulation mechanism (KEM) consists of three PPT algorithms KEM = (Gen_KEM, Enc_KEM, Dec_KEM):
- The probabilistic key generation algorithm Gen_KEM(1^λ) generates a pair of public and secret keys (pk, sk).
- The probabilistic encryption algorithm Enc_KEM(pk, τ) returns a pair (K, C), where K is a uniformly distributed symmetric key in K and C is a ciphertext, with respect to the tag τ ∈ T.
- The deterministic decryption algorithm Dec_KEM(pk, sk, τ, C) returns a key K ∈ K.

We define the following properties:

Perfect correctness. For all λ and all tags τ ∈ T, we have

  Pr[ Dec_KEM(pk, sk, τ, C) = K | (pk, sk) <-_R Gen_KEM(1^λ); (K, C) <-_R Enc_KEM(pk, τ) ] = 1.

Multi-ciphertext PCA security [30]. For any adversary A, we define

  Adv^{ind-pca}_KEM(A) := |Pr[b = b' | b' <-_R A^{Setup, DecO(·,·,·), EncO(·)}(1^λ)] − 1/2|,

where:
- Setup sets T_enc = T_dec := ∅, samples (pk, sk) <-_R Gen_KEM(1^λ), picks b <-_R {0,1}, and returns pk. Setup is called once at the beginning of the game.
- The decryption oracle DecO(τ, C, K̂) computes K := Dec_KEM(pk, sk, τ, C). It returns 1 if K̂ = K and τ ∉ T_enc, and 0 otherwise. Then it sets T_dec := T_dec ∪ {τ}.
- EncO(τ) computes (K, C) <-_R Enc_KEM(pk, τ), sets K_0 := K and K_1 <-_R K. If τ ∉ T_dec ∪ T_enc, it returns (C, K_b) and sets T_enc := T_enc ∪ {τ}; otherwise it returns ⊥.

We say KEM is IND-PCA secure if for all PPT adversaries A, the advantage Adv^{ind-pca}_KEM(A) is a negligible function of λ.

2.7 Authenticated Encryption

Definition 7 (AE [18]). An authenticated symmetric encryption scheme (AE) with message space M and key space K consists of two polynomial-time deterministic algorithms (Enc_AE, Dec_AE):
- The encryption algorithm Enc_AE(K, M) generates C, an encryption of the message M under the secret key K.
- The decryption algorithm Dec_AE(K, C) returns a message M or ⊥.

We require that the algorithms satisfy the following properties:

Perfect correctness. For all λ, all K ∈ K and all M ∈ M, we have Dec_AE(K, Enc_AE(K, M)) = M.

One-time Privacy and Authenticity. For any PPT adversary A,

  Adv^{ae-ot}_AE(A) := |Pr[b = b' | K <-_R K; b <-_R {0,1}; b' <-_R A^{ot-EncO(·,·), ot-DecO(·)}(1^λ, M, K)] − 1/2|

is negligible, where ot-EncO(M_0, M_1), on input two messages M_0 and M_1 of the same length, returns Enc_AE(K, M_b), and ot-DecO(φ) returns Dec_AE(K, φ) if b = 0, and ⊥ otherwise. A is allowed at most one call to each oracle ot-EncO and ot-DecO, and the query to ot-DecO must be different from the output of ot-EncO. A is also given the description of the key space K as input.
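For concreteness, here is a minimal sketch of a one-time authenticated encryption (Enc_AE, Dec_AE) in the sense of Definition 7, built from encrypt-then-MAC with a SHA-256-based keystream and HMAC. This is an illustrative stand-in under the assumption of a 32-byte key (the paper only assumes some AE with one-time privacy and authenticity, e.g., as in [18]); it is not a recommendation or the paper's instantiation.

```python
# Minimal one-time authenticated encryption sketch (encrypt-then-MAC).
# The 32-byte key K is split into an encryption half and a MAC half; security relies on K
# being used for a single message only, matching the one-time notion of Definition 7.
import hashlib, hmac

def _keystream(key: bytes, n: int) -> bytes:
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def enc_ae(K: bytes, M: bytes) -> bytes:
    k_enc, k_mac = K[:16], K[16:32]
    ct = bytes(m ^ s for m, s in zip(M, _keystream(k_enc, len(M))))
    tag = hmac.new(k_mac, ct, hashlib.sha256).digest()
    return ct + tag

def dec_ae(K: bytes, C: bytes):
    k_enc, k_mac = K[:16], K[16:32]
    ct, tag = C[:-32], C[-32:]
    if not hmac.compare_digest(tag, hmac.new(k_mac, ct, hashlib.sha256).digest()):
        return None                      # authentication failure: return "bottom"
    return bytes(c ^ s for c, s in zip(ct, _keystream(k_enc, len(ct))))

K = hashlib.sha256(b"one-time key material").digest()   # toy 32-byte key derivation
assert dec_ae(K, enc_ae(K, b"hello")) == b"hello"
```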

3 Multi-ciphertext PCA-secure KEM

In this section we describe a tag-based key-encapsulation mechanism KEM_PCA that is IND-PCA-secure (see Definition 6). For simplicity, we use the matrix distribution U_{3k,k} in our scheme in Figure 4, and prove it secure under the U_k-MDDH Assumption (<=> U_{3k,k}-MDDH Assumption, by Lemma 1), which in turn admits a tight reduction to the standard k-Lin Assumption. However, using a matrix distribution D_{3k,k} with a more compact representation yields a more efficient scheme, secure under the D_{3k,k}-MDDH Assumption (see Remark 1).

3.1 Our construction

Gen_KEM(1^λ):
  G <-_R GGen(1^λ); M <-_R U_{3k,k}
  k_{1,0}, ..., k_{λ,1} <-_R Z_q^{3k}
  pk := (G, [M], ([M^T k_{j,β}])_{1≤j≤λ, 0≤β≤1})
  sk := (k_{j,β})_{1≤j≤λ, 0≤β≤1}
  Return (pk, sk)

Enc_KEM(pk, τ):
  r <-_R Z_q^k; C := [r^T M^T] ∈ G^{1×3k}
  k_τ := Σ_{j=1}^λ k_{j,τ_j}
  K := [r^T M^T k_τ] ∈ G
  Return (C, K)

Dec_KEM(pk, sk, τ, C):
  k_τ := Σ_{j=1}^λ k_{j,τ_j}
  Return K := C · k_τ

Fig. 4. KEM_PCA, an IND-PCA-secure KEM under the U_k-MDDH Assumption, with tag space T = {0,1}^λ. Here, GGen is a prime-order group generator (see Section 2.3).

Remark 1 (On the use of the U_k-MDDH Assumption). In our scheme, we use a matrix distribution U_{3k,k} for the matrix M, therefore proving security under the U_{3k,k}-MDDH Assumption <=> U_k-MDDH Assumption (see Lemma 2). This is for simplicity of presentation. However, for efficiency, one may want to use an assumption with a more compact representation, such as the CI_{3k,k}-MDDH Assumption [27] with representation size 2k instead of 3k² for U_{3k,k}.

3.2 Security proof

Theorem 1. The tag-based key-encapsulation mechanism KEM_PCA defined in Figure 4 has perfect correctness. Moreover, if the U_k-MDDH Assumption holds in G, then KEM_PCA is IND-PCA secure. Namely, for any adversary A, there exists an adversary B such that T(B) ≈ T(A) + (Q_dec + Q_enc)·poly(λ) and

  Adv^{ind-pca}_{KEM_PCA}(A) ≤ (4λ + 1)·Adv^mddh_{U_k,GGen}(B) + (Q_dec + Q_enc)·2^{−Ω(λ)},

where Q_enc, Q_dec are the number of times A queries EncO, DecO, respectively, and poly(λ) is independent of T(A).

Proof of Theorem 1. Perfect correctness follows readily from the fact that for all r ∈ Z_q^k and C = r^T M^T, and for all k ∈ Z_q^{3k}: r^T (M^T k) = C·k.
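To illustrate Figure 4, here is a small runnable sketch of KEM_PCA for k = 1 (i.e., under DDH) over a toy Schnorr group. The group parameters (p = 2039, q = 1019, g = 4) and the tag length are toy values chosen only so that the code runs; they offer no security. The sketch follows the algorithms of Figure 4 directly: pk contains [M] and the elements [M^T k_{j,β}], encapsulation outputs C = [Mr] and K = [r^T M^T k_τ], and decapsulation recomputes K = [y^T k_τ].

```python
# Runnable toy sketch of KEM_PCA (Figure 4) for k = 1, i.e. under DDH.
# Toy Schnorr group: p = 2q + 1 with q prime, g of order q.  NOT secure parameters.
import secrets

p, q, g = 2039, 1019, 4
LAM = 8                                    # tag space {0,1}^LAM (the paper uses lambda bits)

def rand_zq():
    return secrets.randbelow(q)

def bit(tau, j):                           # j-th bit of the tag tau
    return (tau >> j) & 1

def gen():
    M = [rand_zq() for _ in range(3)]      # M <-R U_{3k,k}: a 3x1 matrix for k = 1
    ks = {(j, b): [rand_zq() for _ in range(3)]                    # k_{j,b} <-R Z_q^{3k}
          for j in range(LAM) for b in (0, 1)}
    pk_M = [pow(g, m, p) for m in M]                               # [M]
    pk_Mk = {(j, b): pow(g, sum(M[i] * k[i] for i in range(3)) % q, p)   # [M^T k_{j,b}]
             for (j, b), k in ks.items()}
    return (pk_M, pk_Mk), (M, ks)

def encaps(pk, tau):
    pk_M, pk_Mk = pk
    r = rand_zq()                                                  # r <-R Z_q^k
    C = [pow(h, r, p) for h in pk_M]                               # C = [M r] = [r^T M^T]
    Mk_tau = 1                                # [M^T k_tau] = product over j of [M^T k_{j,tau_j}]
    for j in range(LAM):
        Mk_tau = Mk_tau * pk_Mk[j, bit(tau, j)] % p
    K = pow(Mk_tau, r, p)                                          # K = [r^T M^T k_tau]
    return C, K

def decaps(sk, tau, C):
    _, ks = sk
    k_tau = [sum(ks[j, bit(tau, j)][i] for j in range(LAM)) % q for i in range(3)]
    K = 1
    for Ci, ki in zip(C, k_tau):                                   # K = [y^T k_tau]
        K = K * pow(Ci, ki, p) % p
    return K

pk, sk = gen()
tau = secrets.randbelow(1 << LAM)
C, K = encaps(pk, tau)
assert decaps(sk, tau, C) == K                                     # perfect correctness check
```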

game     | y uniform in                                        | k_τ used by EncO and DecO                                   | justification/remark
G_0      | span(M)                                             | k_τ                                                         | actual scheme
G_1      | Z_q^{3k}                                            | k_τ                                                         | U_{3k,k}-MDDH on [M]
G_2.i    | Z_q^{3k}                                            | k_τ + M^⊥ RF_i(τ_{|i})                                      | G_1 ≡ G_2.0
G_2.i.1  | τ_{i+1} = 0: span(M, M_0); τ_{i+1} = 1: span(M, M_1) | k_τ + M^⊥ RF_i(τ_{|i})                                      | U_{3k,k}-MDDH on [M_0] (resp. [M_1])
G_2.i.2  | as in G_2.i.1                                       | k_τ + M*_0 RF^(0)_{i+1}(τ_{|i+1}) + M*_1 RF^(1)_i(τ_{|i})   | Cramer-Shoup argument
G_2.i.3  | as in G_2.i.1                                       | k_τ + M*_0 RF^(0)_{i+1}(τ_{|i+1}) + M*_1 RF^(1)_{i+1}(τ_{|i+1}) | Cramer-Shoup argument
G_2.i+1  | Z_q^{3k}                                            | k_τ + M^⊥ RF_{i+1}(τ_{|i+1})                                | U_{3k,k}-MDDH on [M_0] and [M_1]

Fig. 5. Sequence of games for the proof of Theorem 1. Throughout, we have (i) k_τ := Σ_{j=1}^λ k_{j,τ_j}; (ii) EncO(τ) = ([y], K_b), where K_0 = [y^T k_τ] and K_1 <-_R G; (iii) DecO(τ, [y], K̂) computes the encapsulation key K := [y^T k_τ]. Here, (M*_0, M*_1) is a basis for span(M^⊥), so that M_1^T M*_0 = M_0^T M*_1 = 0, and we write M^⊥ RF_i(τ_{|i}) := M*_0 RF^(0)_i(τ_{|i}) + M*_1 RF^(1)_i(τ_{|i}). The second column shows from which set y is uniformly picked by EncO, and the third column shows the value of k_τ used by both EncO and DecO.

We now prove the IND-PCA security of KEM_PCA. We proceed via a series of games described in Figures 6 and 7, and we use Adv_i to denote the advantage of A in game G_i. We also give a high-level picture of the proof in Figure 5, summarizing the sequence of games.

Lemma 4 (G_0 to G_1). There exists an adversary B_0 such that T(B_0) ≈ T(A) + (Q_enc + Q_dec)·poly(λ) and

  |Adv_0 − Adv_1| ≤ Adv^mddh_{U_k,GGen}(B_0) + 1/(q−1),

where Q_enc, Q_dec are the number of times A queries EncO, DecO, respectively, and poly(λ) is independent of T(A).

Here, we use the MDDH assumption to tightly switch the distribution of all the challenge ciphertexts.

Proof of Lemma 4. To go from G_0 to G_1, we switch the distribution of the vectors [y] sampled by EncO, using the Q_enc-fold U_{3k,k}-MDDH Assumption on [M] (see Definition 4 and Lemma 3).

We build an adversary B'_0 against the Q_enc-fold U_{3k,k}-MDDH Assumption, such that T(B'_0) ≈ T(A) + (Q_enc + Q_dec)·poly(λ) with poly(λ) independent of T(A), and

  |Adv_0 − Adv_1| ≤ Adv^{Q_enc-mddh}_{U_{3k,k},GGen}(B'_0).

This implies the lemma by Lemma 3 (self-reducibility of U_{3k,k}-MDDH) and Lemma 1 (U_{3k,k}-MDDH <=> U_k-MDDH).

Upon receiving a challenge (G, [M] ∈ G^{3k×k}, [H] := [h_1 | ... | h_{Q_enc}] ∈ G^{3k×Q_enc}) for the Q_enc-fold U_{3k,k}-MDDH Assumption, B'_0 picks b <-_R {0,1}, k_{1,0}, ..., k_{λ,1} <-_R Z_q^{3k}, and simulates Setup, DecO as described in Figure 6. To simulate EncO on its j-th query, for j = 1, ..., Q_enc, B'_0 sets [y] := [h_j] and computes K_b as described in Figure 6.

Lemma 5 (G_1 to G_2.0). |Adv_1 − Adv_2.0| = 0.

Setup:  (G_0, G_1, G_2.i)
  T_enc = T_dec := ∅; b <-_R {0,1}
  G <-_R GGen(1^λ); M <-_R U_{3k,k}
  M^⊥ <-_R U_{3k,2k} s.t. M^T M^⊥ = 0                    [G_2.i only]
  Pick a random RF_i : {0,1}^i → Z_q^{2k}                 [G_2.i only]
  k_{1,0}, ..., k_{λ,1} <-_R Z_q^{3k}
  For all τ ∈ {0,1}^λ: k_τ := Σ_{j=1}^λ k_{j,τ_j};  in G_2.i: k_τ := k_τ + M^⊥ RF_i(τ_{|i})
  Return pk := (G, [M], ([M^T k_{j,β}])_{1≤j≤λ, 0≤β≤1})

EncO(τ):  (G_0, G_1, G_2.i)
  r <-_R Z_q^k; y := Mr;  in G_1 and G_2.i: y <-_R Z_q^{3k}
  K_0 := [y^T k_τ]; K_1 <-_R G
  If τ ∉ T_dec ∪ T_enc, return (C := [y], K_b) and set T_enc := T_enc ∪ {τ}. Otherwise, return ⊥.

DecO(τ, C := [y], K̂):  (G_0, G_1, G_2.i)
  K := [y^T k_τ]
  Return 1 if K̂ = K and τ ∉ T_enc, and 0 otherwise.
  T_dec := T_dec ∪ {τ}

Fig. 6. Games G_0, G_1, G_2.i (for 0 ≤ i ≤ λ) for the proof of multi-ciphertext PCA security of KEM_PCA in Figure 4. In each procedure, components annotated with a game name are only present in the indicated games.

Proof of Lemma 5. We show that the two games are identically distributed. To go from G_1 to G_2.0, we change the distribution of k_{1,β} <-_R Z_q^{3k} for β = 0, 1, to k_{1,β} + M^⊥ RF_0(ε), where k_{1,β} <-_R Z_q^{3k}, RF_0(ε) <-_R Z_q^{2k}, and M^⊥ <-_R U_{3k,2k} such that M^T M^⊥ = 0. Note that the extra term M^⊥ RF_0(ε) does not appear in pk, since M^T (k_{1,β} + M^⊥ RF_0(ε)) = M^T k_{1,β}.

Lemma 6 (G_2.i to G_2.i+1). For all 0 ≤ i ≤ λ − 1, there exists an adversary B_2.i such that T(B_2.i) ≈ T(A) + (Q_enc + Q_dec)·poly(λ) and

  |Adv_2.i − Adv_2.i+1| ≤ 4·Adv^mddh_{U_k,GGen}(B_2.i) + (4Q_dec + 2k + 4)/(q−1),

where Q_enc, Q_dec are the number of times A queries EncO, DecO, respectively, and poly(λ) is independent of T(A).

Proof of Lemma 6. To go from G_2.i to G_2.i+1, we introduce intermediate games G_2.i.1, G_2.i.2 and G_2.i.3, defined in Figure 7. We prove that these games are indistinguishable in Lemmas 7 to 10.

Lemma 7 (G_2.i to G_2.i.1). For all 0 ≤ i ≤ λ − 1, there exists an adversary B_2.i.0 such that T(B_2.i.0) ≈ T(A) + (Q_enc + Q_dec)·poly(λ) and

  |Adv_2.i − Adv_2.i.1| ≤ 2·Adv^mddh_{U_k,GGen}(B_2.i.0) + 2/(q−1),

where Q_enc, Q_dec are the number of times A queries EncO, DecO, respectively, and poly(λ) is independent of T(A).

Here, we use the MDDH Assumption to tightly switch the distribution of all the challenge ciphertexts. We proceed in two steps: first, we change the distribution of all the ciphertexts with a tag τ such that τ_{i+1} = 0, and then that of those with a tag τ such that τ_{i+1} = 1. We use the MDDH Assumption with respect to an independent matrix for each step.

Proof of Lemma 7. To go from G_2.i to G_2.i.1, we switch the distribution of the vectors [y] sampled by EncO, using the Q_enc-fold U_{3k,k}-MDDH Assumption.

Setup:  (G_2.i, G_2.i.1, G_2.i.2, G_2.i.3)
  T_enc = T_dec := ∅; b <-_R {0,1}
  G <-_R GGen(1^λ); M <-_R U_{3k,k}
  M^⊥ <-_R U_{3k,2k} s.t. M^T M^⊥ = 0
  M_0, M_1 <-_R U_{3k,k}
  M*_0, M*_1 <-_R U_{3k,k} s.t. span(M^⊥) = span(M*_0, M*_1) and M^T M*_0 = M_1^T M*_0 = 0 = M^T M*_1 = M_0^T M*_1
  Pick a random RF_i : {0,1}^i → Z_q^{2k}                                         [G_2.i, G_2.i.1]
  Pick random RF^(0)_{i+1} : {0,1}^{i+1} → Z_q^k and RF^(1)_i : {0,1}^i → Z_q^k    [G_2.i.2]
  Pick random RF^(0)_{i+1}, RF^(1)_{i+1} : {0,1}^{i+1} → Z_q^k                     [G_2.i.3]
  k_{1,0}, ..., k_{λ,1} <-_R Z_q^{3k}
  For all τ ∈ {0,1}^λ: k_τ := Σ_{j=1}^λ k_{j,τ_j}, and
    in G_2.i, G_2.i.1: k_τ := k_τ + M^⊥ RF_i(τ_{|i})
    in G_2.i.2:        k_τ := k_τ + M*_0 RF^(0)_{i+1}(τ_{|i+1}) + M*_1 RF^(1)_i(τ_{|i})
    in G_2.i.3:        k_τ := k_τ + M*_0 RF^(0)_{i+1}(τ_{|i+1}) + M*_1 RF^(1)_{i+1}(τ_{|i+1})
  Return pk := (G, [M], ([M^T k_{j,β}])_{1≤j≤λ, 0≤β≤1})

EncO(τ):  (G_2.i, G_2.i.1, G_2.i.2, G_2.i.3)
  In G_2.i: y <-_R Z_q^{3k}
  In G_2.i.1, G_2.i.2, G_2.i.3:
    If τ_{i+1} = 0: r <-_R Z_q^k; r_0 <-_R Z_q^k; y := Mr + M_0 r_0
    If τ_{i+1} = 1: r <-_R Z_q^k; r_1 <-_R Z_q^k; y := Mr + M_1 r_1
  K_0 := [y^T k_τ]; K_1 <-_R G
  If τ ∉ T_dec ∪ T_enc, return (C := [y], K_b) and set T_enc := T_enc ∪ {τ}. Otherwise, return ⊥.

DecO(τ, C := [y], K̂):  (G_2.i, G_2.i.1, G_2.i.2, G_2.i.3)
  K := [y^T k_τ]
  Return 1 if K̂ = K and τ ∉ T_enc, and 0 otherwise.
  T_dec := T_dec ∪ {τ}

Fig. 7. Games G_2.i (for 0 ≤ i ≤ λ), and G_2.i.1, G_2.i.2, G_2.i.3 (for 0 ≤ i ≤ λ − 1) for the proof of Lemma 6. For all τ ∈ {0,1}^λ, we denote by τ_{|i} the i-bit prefix of τ. In each procedure, components annotated with game names are only present in the indicated games.

We introduce an intermediate game G_2.i.0, where EncO(τ) is computed as in G_2.i.1 if τ_{i+1} = 0, and as in G_2.i if τ_{i+1} = 1. Setup and DecO are as in G_2.i.1. We build adversaries B'_2.i.0 and B''_2.i.0 such that T(B'_2.i.0) ≈ T(B''_2.i.0) ≈ T(A) + (Q_enc + Q_dec)·poly(λ) with poly(λ) independent of T(A), and

  Claim 1: |Adv_2.i − Adv_2.i.0| ≤ Adv^{Q_enc-mddh}_{U_{3k,k},GGen}(B'_2.i.0).
  Claim 2: |Adv_2.i.0 − Adv_2.i.1| ≤ Adv^{Q_enc-mddh}_{U_{3k,k},GGen}(B''_2.i.0).

This implies the lemma by Lemma 3 (self-reducibility of U_{3k,k}-MDDH) and Lemma 1 (U_{3k,k}-MDDH <=> U_k-MDDH).

Let us prove Claim 1. Upon receiving a challenge (G, [M_0] ∈ G^{3k×k}, [H] := [h_1 | ... | h_{Q_enc}] ∈ G^{3k×Q_enc}) for the Q_enc-fold U_{3k,k}-MDDH Assumption with respect to M_0 <-_R U_{3k,k}, B'_2.i.0 does as follows:

Setup: B'_2.i.0 picks M <-_R U_{3k,k}, k_{1,0}, ..., k_{λ,1} <-_R Z_q^{3k}, and computes pk as described in Figure 7. For each τ queried to EncO or DecO, it computes on the fly RF_i(τ_{|i}) and k_τ := k_τ + M^⊥ RF_i(τ_{|i}), where k_τ := Σ_{j=1}^λ k_{j,τ_j}, RF_i : {0,1}^i → Z_q^{2k} is a random function, and τ_{|i} denotes the i-bit prefix of τ (see Figure 7). Note that B'_2.i.0 can compute M^⊥ efficiently from M.

EncO: To simulate the oracle EncO(τ) on its j-th query, for j = 1, ..., Q_enc, B'_2.i.0 computes [y] as follows:

  if τ_{i+1} = 0: r <-_R Z_q^k; [y] := [Mr + h_j]
  if τ_{i+1} = 1: [y] <-_R G^{3k}

This way, B'_2.i.0 simulates EncO as in G_2.i.0 when [h_j] := [M_0 r_0] with r_0 <-_R Z_q^k, and as in G_2.i when [h_j] <-_R G^{3k}.

DecO: Finally, B'_2.i.0 simulates DecO as described in Figure 7.

Therefore, |Adv_2.i − Adv_2.i.0| ≤ Adv^{Q_enc-mddh}_{U_{3k,k},GGen}(B'_2.i.0). To prove Claim 2, we build an adversary B''_2.i.0 against the Q_enc-fold U_{3k,k}-MDDH Assumption with respect to a matrix M_1 <-_R U_{3k,k}, independent from M_0, similarly to B'_2.i.0.

Lemma 8 (G_2.i.1 to G_2.i.2). For all 0 ≤ i ≤ λ − 1,

  |Adv_2.i.1 − Adv_2.i.2| ≤ (2Q_dec + 2k)/q,

where Q_dec is the number of times A queries DecO.

Here, we use a variant of the Cramer-Shoup information-theoretic argument to move from RF_i to RF_{i+1}, thereby increasing the entropy of the k_τ computed by Setup. For the sake of readability, we proceed in two steps: in Lemma 8, we move from RF_i to a hybrid between RF_i and RF_{i+1}, and in Lemma 9, we move to RF_{i+1}.

Proof of Lemma 8. In G_2.i.2, we decompose span(M^⊥) into two subspaces span(M*_0) and span(M*_1), and we increase the entropy of the components of k_τ which lie in span(M*_0). To argue that G_2.i.1 and G_2.i.2 are statistically close, we use a Cramer-Shoup argument [11].

Let us first explain how the matrices M*_0 and M*_1 are sampled. Note that with probability at least 1 − 2k/q over the random coins of Setup, (M | M_0 | M_1) forms a basis of Z_q^{3k}. Therefore, we have

  span(M^⊥) = Ker(M^T) = Ker((M | M_1)^T) ⊕ Ker((M | M_0)^T).

We pick uniformly M*_0 and M*_1 in Z_q^{3k×k} that generate Ker((M | M_1)^T) and Ker((M | M_0)^T), respectively (see Figure 1). This way, for all τ ∈ {0,1}^λ, we can write

  M^⊥ RF_i(τ_{|i}) := M*_0 RF^(0)_i(τ_{|i}) + M*_1 RF^(1)_i(τ_{|i}),

where RF^(0)_i, RF^(1)_i : {0,1}^i → Z_q^k are independent random functions.

We define RF^(0)_{i+1} : {0,1}^{i+1} → Z_q^k as follows:

  RF^(0)_{i+1}(τ_{|i+1}) := RF^(0)_i(τ_{|i})                     if τ_{i+1} = 0
                           RF^(0)_i(τ_{|i}) + RF'^(0)_i(τ_{|i})  if τ_{i+1} = 1

where RF'^(0)_i : {0,1}^i → Z_q^k is a random function independent from RF^(0)_i. This way, RF^(0)_{i+1} is a random function.

We show that the outputs of EncO and DecO are statistically close in G_2.i.1 and G_2.i.2. We decompose the proof into two cases (delimited with ■): the queries with a tag τ ∈ {0,1}^λ such that τ_{i+1} = 0, and the queries with a tag τ such that τ_{i+1} = 1.

Queries with τ_{i+1} = 0: The only difference between G_2.i.1 and G_2.i.2 is that Setup computes k_τ using the random function RF^(0)_i in G_2.i.1, whereas it uses the random function RF^(0)_{i+1} in G_2.i.2 (see Figure 7). Therefore, by definition of RF^(0)_{i+1}, for all τ ∈ {0,1}^λ such that τ_{i+1} = 0, k_τ is the same in G_2.i.1 and G_2.i.2, and the outputs of EncO and DecO are identically distributed. ■
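The decomposition used in this proof can be checked mechanically for k = 1, where M, M_0, M_1 ∈ Z_q^3 and the kernels Ker((M | M_1)^T) and Ker((M | M_0)^T) are one-dimensional and can be generated by cross products. The sketch below is an illustration under these simplifying assumptions (the proof itself samples M*_0, M*_1 uniformly from the respective kernels); it verifies the orthogonality relations of Figure 1 and the key fact exploited for ill-formed queries: y^T M*_0 vanishes exactly on span(M, M_1), so the extra term M*_0·w is invisible to well-formed queries and randomizes ill-formed ones.

```python
# Sanity check of the span/kernel decomposition (Figure 1) for k = 1 over Z_q^3.
import secrets

q = 1019                                    # toy prime modulus

def rand_vec():
    return [secrets.randbelow(q) for _ in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % q

def cross(a, b):                            # generates Ker(span{a, b}^T) inside Z_q^3
    return [(a[1]*b[2] - a[2]*b[1]) % q,
            (a[2]*b[0] - a[0]*b[2]) % q,
            (a[0]*b[1] - a[1]*b[0]) % q]

def det3(a, b, c):
    return (a[0]*(b[1]*c[2]-b[2]*c[1]) - a[1]*(b[0]*c[2]-b[2]*c[0])
            + a[2]*(b[0]*c[1]-b[1]*c[0])) % q

# sample M, M_0, M_1 until (M | M_0 | M_1) is a basis of Z_q^3 (happens w.h.p.)
while True:
    M, M0, M1 = rand_vec(), rand_vec(), rand_vec()
    if det3(M, M0, M1) != 0:
        break

M0_star = cross(M, M1)                      # generates Ker((M | M_1)^T)
M1_star = cross(M, M0)                      # generates Ker((M | M_0)^T)

# orthogonality relations of Figure 1
assert dot(M, M0_star) == dot(M1, M0_star) == 0
assert dot(M, M1_star) == dot(M0, M1_star) == 0
assert dot(M0, M0_star) != 0 and dot(M1, M1_star) != 0   # holds w.h.p. over the sampling

# Cramer-Shoup step: adding M0_star * w to the key is invisible on span(M, M_1) ...
k, w = rand_vec(), secrets.randbelow(q)
k_shifted = [(ki + M0_star[i] * w) % q for i, ki in enumerate(k)]
r, r1 = secrets.randbelow(q), secrets.randbelow(q)
y_wellformed = [(M[i]*r + M1[i]*r1) % q for i in range(3)]
assert dot(y_wellformed, k) == dot(y_wellformed, k_shifted)

# ... but shows up on an ill-formed y outside span(M, M_1)
y_illformed = [(M[i]*r + M0[i]) % q for i in range(3)]
assert dot(y_illformed, k) != dot(y_illformed, k_shifted) or w == 0
```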

Queries with τ_{i+1} = 1: Observe that for all y ∈ span(M, M_1) and all τ ∈ {0,1}^λ such that τ_{i+1} = 1,

  y^T (k_τ + M*_0 RF^(0)_i(τ_{|i}) + M*_1 RF^(1)_i(τ_{|i}) + M*_0 RF'^(0)_i(τ_{|i}))        [value used in G_2.i.2]
    = y^T (k_τ + M*_0 RF^(0)_i(τ_{|i}) + M*_1 RF^(1)_i(τ_{|i})) + y^T M*_0 RF'^(0)_i(τ_{|i})
    = y^T (k_τ + M*_0 RF^(0)_i(τ_{|i}) + M*_1 RF^(1)_i(τ_{|i}))                              [value used in G_2.i.1]

where the second equality uses the fact that M^T M*_0 = M_1^T M*_0 = 0, and thus y^T M*_0 = 0 (so the term y^T M*_0 RF'^(0)_i(τ_{|i}) vanishes). This means that:
- the output of EncO on any input τ such that τ_{i+1} = 1 is identically distributed in G_2.i.1 and G_2.i.2;
- the output of DecO on any input (τ, [y], K̂) where τ_{i+1} = 1 and y ∈ span(M, M_1) is the same in G_2.i.1 and G_2.i.2.

Henceforth, we focus on the ill-formed queries to DecO, namely those corresponding to τ_{i+1} = 1 and y ∉ span(M, M_1). We introduce intermediate games G_2.i.1.j and G'_2.i.1.j for j = 0, ..., Q_dec, defined as follows:

- G_2.i.1.j: DecO is as in G_2.i.1, except that for the first j times it is queried, it outputs 0 on any ill-formed query. EncO is as in G_2.i.2.
- G'_2.i.1.j: DecO is as in G_2.i.2, except that for the first j times it is queried, it outputs 0 on any ill-formed query. EncO is as in G_2.i.2.

We show that:

  G_2.i.1 ≡ G_2.i.1.0 ≈_s G_2.i.1.1 ≈_s ... ≈_s G_2.i.1.Q_dec ≡ G'_2.i.1.Q_dec ≈_s G'_2.i.1.Q_dec−1 ≈_s ... ≈_s G'_2.i.1.0 ≡ G_2.i.2,

where we denote statistical closeness with ≈_s and statistical equality with ≡. It suffices to show that, for all j = 0, ..., Q_dec − 1:

- Claim 1: in G_2.i.1.j, if the (j+1)-st query is ill-formed, then DecO outputs 0 with overwhelming probability 1 − 1/q (this implies G_2.i.1.j ≈_s G_2.i.1.j+1, with statistical difference 1/q);
- Claim 2: in G'_2.i.1.j, if the (j+1)-st query is ill-formed, then DecO outputs 0 with overwhelming probability 1 − 1/q (this implies G'_2.i.1.j ≈_s G'_2.i.1.j+1, with statistical difference 1/q);

where the probabilities are taken over the random coins of Setup.

Let us prove Claim 1. Recall that in G_2.i.1.j, on its (j+1)-st query, DecO(τ, [y], K̂) computes K := [y^T k_τ], where k_τ := k_τ + M*_0 RF^(0)_i(τ_{|i}) + M*_1 RF^(1)_i(τ_{|i}) (see Figure 7). We prove that if (τ, [y], K̂) is ill-formed, then K is completely hidden from A, up to its (j+1)-st query to DecO. The reason is that the vector k_{i+1,1} in sk contains some entropy that is hidden from A; this entropy is released on the (j+1)-st query to DecO if it is ill-formed.

More formally, we use the fact that the vector k_{i+1,1} <-_R Z_q^{3k} is identically distributed as k_{i+1,1} + M*_0 w, where k_{i+1,1} <-_R Z_q^{3k} and w <-_R Z_q^k. We show that w is completely hidden from A, up to its (j+1)-st query to DecO.

- The public key pk does not leak any information about w, since M^T (k_{i+1,1} + M*_0 w) = M^T k_{i+1,1}. This is because M^T M*_0 = 0.
- The outputs of EncO also hide w. For τ such that τ_{i+1} = 0, k_τ is independent of k_{i+1,1}, and therefore so is EncO(τ). For τ such that τ_{i+1} = 1 and any y ∈ span(M, M_1), we have

    y^T (k_τ + M*_0 w) = y^T k_τ,    (2)

  since M^T M*_0 = M_1^T M*_0 = 0, which implies y^T M*_0 = 0.
- The first j outputs of DecO also hide w. For τ such that τ_{i+1} = 0, k_τ is independent of k_{i+1,1}, and therefore so is DecO(τ, [y], K̂). For τ such that τ_{i+1} = 1 and y ∈ span(M, M_1), the fact that DecO(τ, [y], K̂) is independent of w follows readily from Equation (2). For τ such that τ_{i+1} = 1 and y ∉ span(M, M_1), that is, for an ill-formed query, DecO outputs 0, independently of w, by definition of G_2.i.1.j.

This proves that w is uniformly random from A's viewpoint.

Finally, because the (j+1)-st query (τ, [y], K̂) is ill-formed, we have τ_{i+1} = 1 and y ∉ span(M, M_1), which implies that y^T M*_0 ≠ 0. Therefore, the value

  K = [y^T (k_τ + M*_0 w)] = [y^T k_τ + y^T M*_0 · w]

computed by DecO is uniformly random over G from A's viewpoint (since y^T M*_0 ≠ 0 and w is uniform). Thus, with probability 1 − 1/q over the adversary's view, we have K̂ ≠ K, and DecO(τ, [y], K̂) = 0.

We prove Claim 2 similarly, arguing that in G'_2.i.1.j, the value K := [y^T k_τ], where k_τ := k_τ + M*_0 RF^(0)_{i+1}(τ_{|i+1}) + M*_1 RF^(1)_i(τ_{|i}), computed by DecO(τ, [y], K̂) on its (j+1)-st query, is completely hidden from A, up to its (j+1)-st query to DecO, if (τ, [y], K̂) is ill-formed. The argument goes exactly as for Claim 1.

Lemma 9 (G_2.i.2 to G_2.i.3). For all 0 ≤ i ≤ λ − 1,

  |Adv_2.i.2 − Adv_2.i.3| ≤ 2Q_dec/q,

where Q_dec is the number of times A queries DecO.

Proof of Lemma 9. In G_2.i.3, we use the same decomposition span(M^⊥) = span(M*_0, M*_1) as in G_2.i.2. The entropy of the components of k_τ that lie in span(M*_1) increases from G_2.i.2 to G_2.i.3. To argue that these two games are statistically close, we use a Cramer-Shoup argument [11], exactly as for Lemma 8.

We define RF^(1)_{i+1} : {0,1}^{i+1} → Z_q^k as follows:

  RF^(1)_{i+1}(τ_{|i+1}) := RF^(1)_i(τ_{|i}) + RF'^(1)_i(τ_{|i})  if τ_{i+1} = 0
                           RF^(1)_i(τ_{|i})                      if τ_{i+1} = 1

where RF'^(1)_i : {0,1}^i → Z_q^k is a random function independent from RF^(1)_i.

This way, RF^(1)_{i+1} is a random function.

We show that the outputs of EncO and DecO are statistically close in G_2.i.2 and G_2.i.3. We decompose the proof into two cases (delimited with ■): the queries with a tag τ ∈ {0,1}^λ such that τ_{i+1} = 0, and the queries with a tag τ such that τ_{i+1} = 1.

Queries with τ_{i+1} = 1: The only difference between G_2.i.2 and G_2.i.3 is that Setup computes k_τ using the random function RF^(1)_i in G_2.i.2, whereas it uses the random function RF^(1)_{i+1} in G_2.i.3 (see Figure 7). Therefore, by definition of RF^(1)_{i+1}, for all τ ∈ {0,1}^λ such that τ_{i+1} = 1, k_τ is the same in G_2.i.2 and G_2.i.3, and the outputs of EncO and DecO are identically distributed. ■

Queries with τ_{i+1} = 0: Observe that for all y ∈ span(M, M_0) and all τ ∈ {0,1}^λ such that τ_{i+1} = 0,

  y^T (k_τ + M*_0 RF^(0)_{i+1}(τ_{|i+1}) + M*_1 RF^(1)_i(τ_{|i}) + M*_1 RF'^(1)_i(τ_{|i}))        [value used in G_2.i.3]
    = y^T (k_τ + M*_0 RF^(0)_{i+1}(τ_{|i+1}) + M*_1 RF^(1)_i(τ_{|i})) + y^T M*_1 RF'^(1)_i(τ_{|i})
    = y^T (k_τ + M*_0 RF^(0)_{i+1}(τ_{|i+1}) + M*_1 RF^(1)_i(τ_{|i}))                              [value used in G_2.i.2]

where the second equality uses the fact that M^T M*_1 = M_0^T M*_1 = 0, which implies y^T M*_1 = 0. This means that:
- the output of EncO on any input τ such that τ_{i+1} = 0 is identically distributed in G_2.i.2 and G_2.i.3;
- the output of DecO on any input (τ, [y], K̂) where τ_{i+1} = 0 and y ∈ span(M, M_0) is the same in G_2.i.2 and G_2.i.3.

Henceforth, we focus on the ill-formed queries to DecO, namely those corresponding to τ_{i+1} = 0 and y ∉ span(M, M_0). The rest of the proof goes similarly to the proof of Lemma 8; see the latter for further details. ■

Lemma 10 (G_2.i.3 to G_2.i+1). For all 0 ≤ i ≤ λ − 1, there exists an adversary B_2.i.3 such that T(B_2.i.3) ≈ T(A) + (Q_enc + Q_dec)·poly(λ) and

  |Adv_2.i.3 − Adv_2.i+1| ≤ 2·Adv^mddh_{U_k,GGen}(B_2.i.3) + 2/(q−1),

where Q_enc, Q_dec are the number of times A queries EncO, DecO, respectively, and poly(λ) is independent of T(A).

This transition is symmetric to the transition between G_2.i and G_2.i.1 (cf. Lemma 7): we use the MDDH Assumption to tightly switch the distribution of all the challenge ciphertexts in two steps, first changing the distribution of all the ciphertexts with a tag τ such that τ_{i+1} = 0, and then the distribution of those with a tag τ such that τ_{i+1} = 1, using the MDDH Assumption with respect to an independent matrix for each step.

Proof of Lemma 10. First, we use the fact that for all τ ∈ {0,1}^λ, M*_0 RF^(0)_{i+1}(τ_{|i+1}) + M*_1 RF^(1)_{i+1}(τ_{|i+1}) is identically distributed to M^⊥ RF_{i+1}(τ_{|i+1}), where RF_{i+1} : {0,1}^{i+1} → Z_q^{2k} is a random function. This is because (M*_0, M*_1) is a basis of span(M^⊥).

That means A's view can be simulated knowing only M^⊥, and not M*_0, M*_1 explicitly.

Then, to go from G_2.i.3 to G_2.i+1, we switch the distribution of the vectors [y] sampled by EncO, using the Q_enc-fold U_{3k,k}-MDDH Assumption (which is equivalent to the U_k-MDDH Assumption, see Lemma 1) twice: first with respect to a matrix M_0 <-_R U_{3k,k} for ciphertexts with τ_{i+1} = 0, then with respect to an independent matrix M_1 <-_R U_{3k,k} for ciphertexts with τ_{i+1} = 1 (see the proof of Lemma 7 for further details).

Lemma 6 follows readily from Lemmas 7 to 10.

Lemma 11 (G_2.λ). Adv_2.λ ≤ Q_enc/q.

Proof of Lemma 11. We show that the joint distribution of all the values K_0 computed by EncO is statistically close to uniform over G^{Q_enc}. Recall that on input τ, EncO(τ) computes K_0 := [y^T (k_τ + M^⊥ RF_λ(τ))], where RF_λ : {0,1}^λ → Z_q^{2k} is a random function and y <-_R Z_q^{3k} (see Figure 6). We make use of the following properties:

- Property 1: all the tags τ queried to EncO such that EncO(τ) ≠ ⊥ are distinct.
- Property 2: the outputs of DecO are independent of {RF_λ(τ) : τ ∈ T_enc}. This is because for all queries (τ, [y], K̂) to DecO such that τ ∈ T_enc, DecO(τ, [y], K̂) = 0, independently of RF_λ(τ), by definition of G_2.λ.
- Property 3: with probability at least 1 − Q_enc/q over the random coins of EncO, all the vectors y sampled by EncO are such that y^T M^⊥ ≠ 0.

We deduce that the joint distribution of all the values RF_λ(τ) computed by EncO is uniformly random over (Z_q^{2k})^{Q_enc} (from Property 1), independent of the outputs of DecO (from Property 2). Finally, from Property 3, we get that the joint distribution of all the values K_0 computed by EncO is statistically close to uniform over G^{Q_enc}, since

  K_0 := [y^T (k_τ + M^⊥ RF_λ(τ))] = [y^T k_τ + y^T M^⊥ · RF_λ(τ)],

where y^T M^⊥ ≠ 0 with high probability. This means that the values K_0 and K_1 are statistically close, and therefore Adv_2.λ ≤ Q_enc/q.

Finally, Theorem 1 follows readily from Lemmas 4 to 11.

4 Multi-ciphertext CCA-secure Public-Key Encryption scheme

4.1 Our construction

We now describe the optimized IND-CCA-secure PKE scheme. Compared to the PCA-secure KEM from Section 3, we add an authenticated (symmetric) encryption scheme (Enc_AE, Dec_AE), and set the KEM tag τ to be the hash value of a suitable part of the KEM ciphertext (as explained in the introduction). A formal definition with highlighted differences to our PCA-secure KEM appears in Figure 8. We prove security under the U_k-MDDH Assumption, which admits a tight reduction to the standard k-Lin Assumption.
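As a rough illustration of how the pieces fit together (a sketch only; the exact scheme is given in Figure 8 of the paper, which is not reproduced in this excerpt), the code below combines the toy KEM sketch from Section 3.1 and the AE sketch from Section 2.7, assuming both are in scope, and derives the tag by hashing only the "top" part of the KEM ciphertext, as described in the introduction. The hash-based tag and the SHA-256 key derivation are illustrative choices of ours, not the paper's exact key derivation.

```python
# Sketch of the CCA-secure PKE flow of Section 4.1: KEM_PCA + one-time AE, with the tag
# derived from the top k = 1 group element of the KEM ciphertext.
# Assumes gen, encaps-style helpers (rand_zq, bit, LAM, p), decaps from the KEM sketch,
# and enc_ae / dec_ae from the AE sketch are already defined.
import hashlib

def tag_from_top(C):
    # tau := H([y-bar]): hash only the top entries of [y] to avoid the circularity
    # discussed in the introduction (illustrative hash choice)
    digest = hashlib.sha256(str(C[0]).encode()).digest()
    return int.from_bytes(digest, "big") % (1 << LAM)

def pke_enc(pk, message: bytes):
    pk_M, pk_Mk = pk
    r = rand_zq()
    C = [pow(h, r, p) for h in pk_M]               # C = [y] = [M r]
    tau = tag_from_top(C)                          # tag depends only on the top part of [y]
    Mk_tau = 1
    for j in range(LAM):
        Mk_tau = Mk_tau * pk_Mk[j, bit(tau, j)] % p
    K = pow(Mk_tau, r, p)                          # KEM key [y^T k_tau]
    K_sym = hashlib.sha256(str(K).encode()).digest()   # illustrative key derivation
    return C, enc_ae(K_sym, message)               # AE-encrypt the message under the KEM key

def pke_dec(sk, ct):
    C, ae_ct = ct
    tau = tag_from_top(C)                          # recompute the tag from the ciphertext
    K = decaps(sk, tau, C)
    K_sym = hashlib.sha256(str(K).encode()).digest()
    return pke_msg if (pke_msg := dec_ae(K_sym, ae_ct)) is not None else None

pk, sk = gen()
ct = pke_enc(pk, b"message")
assert pke_dec(sk, ct) == b"message"
```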


More information

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016

CS : Algorithms and Uncertainty Lecture 17 Date: October 26, 2016 CS 29-128: Algorthms and Uncertanty Lecture 17 Date: October 26, 2016 Instructor: Nkhl Bansal Scrbe: Mchael Denns 1 Introducton In ths lecture we wll be lookng nto the secretary problem, and an nterestng

More information

Markov Chain Monte Carlo Lecture 6

Markov Chain Monte Carlo Lecture 6 where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways

More information

Message modification, neutral bits and boomerangs

Message modification, neutral bits and boomerangs Message modfcaton, neutral bts and boomerangs From whch round should we start countng n SHA? Antone Joux DGA and Unversty of Versalles St-Quentn-en-Yvelnes France Jont work wth Thomas Peyrn 1 Dfferental

More information

U.C. Berkeley CS278: Computational Complexity Professor Luca Trevisan 2/21/2008. Notes for Lecture 8

U.C. Berkeley CS278: Computational Complexity Professor Luca Trevisan 2/21/2008. Notes for Lecture 8 U.C. Berkeley CS278: Computatonal Complexty Handout N8 Professor Luca Trevsan 2/21/2008 Notes for Lecture 8 1 Undrected Connectvty In the undrected s t connectvty problem (abbrevated ST-UCONN) we are gven

More information

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS

8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars

More information

Calculation of time complexity (3%)

Calculation of time complexity (3%) Problem 1. (30%) Calculaton of tme complexty (3%) Gven n ctes, usng exhaust search to see every result takes O(n!). Calculaton of tme needed to solve the problem (2%) 40 ctes:40! dfferent tours 40 add

More information

COS 511: Theoretical Machine Learning. Lecturer: Rob Schapire Lecture # 15 Scribe: Jieming Mao April 1, 2013

COS 511: Theoretical Machine Learning. Lecturer: Rob Schapire Lecture # 15 Scribe: Jieming Mao April 1, 2013 COS 511: heoretcal Machne Learnng Lecturer: Rob Schapre Lecture # 15 Scrbe: Jemng Mao Aprl 1, 013 1 Bref revew 1.1 Learnng wth expert advce Last tme, we started to talk about learnng wth expert advce.

More information

On a CCA2-secure variant of McEliece in the standard model

On a CCA2-secure variant of McEliece in the standard model On a CCA2-secure varant of McElece n the standard model Edoardo Perschett Department of Mathematcs, Unversty of Auckland, New Zealand. e.perschett@math.auckland.ac.nz Abstract. We consder publc-key encrypton

More information

DISCRIMINANTS AND RAMIFIED PRIMES. 1. Introduction A prime number p is said to be ramified in a number field K if the prime ideal factorization

DISCRIMINANTS AND RAMIFIED PRIMES. 1. Introduction A prime number p is said to be ramified in a number field K if the prime ideal factorization DISCRIMINANTS AND RAMIFIED PRIMES KEITH CONRAD 1. Introducton A prme number p s sad to be ramfed n a number feld K f the prme deal factorzaton (1.1) (p) = po K = p e 1 1 peg g has some e greater than 1.

More information

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X

3.1 Expectation of Functions of Several Random Variables. )' be a k-dimensional discrete or continuous random vector, with joint PMF p (, E X E X1 E X Statstcs 1: Probablty Theory II 37 3 EPECTATION OF SEVERAL RANDOM VARIABLES As n Probablty Theory I, the nterest n most stuatons les not on the actual dstrbuton of a random vector, but rather on a number

More information

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U)

ANSWERS. Problem 1. and the moment generating function (mgf) by. defined for any real t. Use this to show that E( U) var( U) Econ 413 Exam 13 H ANSWERS Settet er nndelt 9 deloppgaver, A,B,C, som alle anbefales å telle lkt for å gøre det ltt lettere å stå. Svar er gtt . Unfortunately, there s a prntng error n the hnt of

More information

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016

U.C. Berkeley CS294: Spectral Methods and Expanders Handout 8 Luca Trevisan February 17, 2016 U.C. Berkeley CS94: Spectral Methods and Expanders Handout 8 Luca Trevsan February 7, 06 Lecture 8: Spectral Algorthms Wrap-up In whch we talk about even more generalzatons of Cheeger s nequaltes, and

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

Basic Regular Expressions. Introduction. Introduction to Computability. Theory. Motivation. Lecture4: Regular Expressions

Basic Regular Expressions. Introduction. Introduction to Computability. Theory. Motivation. Lecture4: Regular Expressions Introducton to Computablty Theory Lecture: egular Expressons Prof Amos Israel Motvaton If one wants to descrbe a regular language, La, she can use the a DFA, Dor an NFA N, such L ( D = La that that Ths

More information

APPENDIX A Some Linear Algebra

APPENDIX A Some Linear Algebra APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,

More information

Secure and practical identity-based encryption

Secure and practical identity-based encryption Secure and practcal dentty-based encrypton D. Naccache Abstract: A varant of Waters dentty-based encrypton scheme wth a much smaller system parameters sze (only a few klobytes) s presented. It s shown

More information

MA 323 Geometric Modelling Course Notes: Day 13 Bezier Curves & Bernstein Polynomials

MA 323 Geometric Modelling Course Notes: Day 13 Bezier Curves & Bernstein Polynomials MA 323 Geometrc Modellng Course Notes: Day 13 Bezer Curves & Bernsten Polynomals Davd L. Fnn Over the past few days, we have looked at de Casteljau s algorthm for generatng a polynomal curve, and we have

More information

Lecture 2: Gram-Schmidt Vectors and the LLL Algorithm

Lecture 2: Gram-Schmidt Vectors and the LLL Algorithm NYU, Fall 2016 Lattces Mn Course Lecture 2: Gram-Schmdt Vectors and the LLL Algorthm Lecturer: Noah Stephens-Davdowtz 2.1 The Shortest Vector Problem In our last lecture, we consdered short solutons to

More information

6.842 Randomness and Computation February 18, Lecture 4

6.842 Randomness and Computation February 18, Lecture 4 6.842 Randomness and Computaton February 18, 2014 Lecture 4 Lecturer: Rontt Rubnfeld Scrbe: Amartya Shankha Bswas Topcs 2-Pont Samplng Interactve Proofs Publc cons vs Prvate cons 1 Two Pont Samplng 1.1

More information

Short Pairing-based Non-interactive Zero-Knowledge Arguments

Short Pairing-based Non-interactive Zero-Knowledge Arguments Short Parng-based Non-nteractve Zero-Knowledge Arguments Jens Groth Unversty College London j.groth@ucl.ac.uk October 26, 2010 Abstract. We construct non-nteractve zero-knowledge arguments for crcut satsfablty

More information

5 The Rational Canonical Form

5 The Rational Canonical Form 5 The Ratonal Canoncal Form Here p s a monc rreducble factor of the mnmum polynomal m T and s not necessarly of degree one Let F p denote the feld constructed earler n the course, consstng of all matrces

More information

Singular Value Decomposition: Theory and Applications

Singular Value Decomposition: Theory and Applications Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real

More information

Introduction to Algorithms

Introduction to Algorithms Introducton to Algorthms 6.046J/8.40J Lecture 7 Prof. Potr Indyk Data Structures Role of data structures: Encapsulate data Support certan operatons (e.g., INSERT, DELETE, SEARCH) Our focus: effcency of

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

Attacks on RSA The Rabin Cryptosystem Semantic Security of RSA Cryptology, Tuesday, February 27th, 2007 Nils Andersen. Complexity Theoretic Reduction

Attacks on RSA The Rabin Cryptosystem Semantic Security of RSA Cryptology, Tuesday, February 27th, 2007 Nils Andersen. Complexity Theoretic Reduction Attacks on RSA The Rabn Cryptosystem Semantc Securty of RSA Cryptology, Tuesday, February 27th, 2007 Nls Andersen Square Roots modulo n Complexty Theoretc Reducton Factorng Algorthms Pollard s p 1 Pollard

More information

COS 521: Advanced Algorithms Game Theory and Linear Programming

COS 521: Advanced Algorithms Game Theory and Linear Programming COS 521: Advanced Algorthms Game Theory and Lnear Programmng Moses Charkar February 27, 2013 In these notes, we ntroduce some basc concepts n game theory and lnear programmng (LP). We show a connecton

More information

Password Based Key Exchange With Mutual Authentication

Password Based Key Exchange With Mutual Authentication Password Based Key Exchange Wth Mutual Authentcaton Shaoquan Jang and Guang Gong Department of Electrcal and Computer Engneerng Unversty of Waterloo Waterloo, Ontaro N2L 3G1, CANADA Emal:{angshq,ggong}@callope.uwaterloo.ca

More information

The Second Anti-Mathima on Game Theory

The Second Anti-Mathima on Game Theory The Second Ant-Mathma on Game Theory Ath. Kehagas December 1 2006 1 Introducton In ths note we wll examne the noton of game equlbrum for three types of games 1. 2-player 2-acton zero-sum games 2. 2-player

More information

Min Cut, Fast Cut, Polynomial Identities

Min Cut, Fast Cut, Polynomial Identities Randomzed Algorthms, Summer 016 Mn Cut, Fast Cut, Polynomal Identtes Instructor: Thomas Kesselhem and Kurt Mehlhorn 1 Mn Cuts n Graphs Lecture (5 pages) Throughout ths secton, G = (V, E) s a mult-graph.

More information

Confined Guessing: New Signatures From Standard Assumptions

Confined Guessing: New Signatures From Standard Assumptions Confned Guessng: New Sgnatures From Standard Assumptons Floran Böhl 1, Denns Hofhenz 1, Tbor Jager 2, Jessca Koch 1, and Chrstoph Strecks 1 1 Karlsruhe Insttute of Technology, Germany, {floran.boehl,denns.hofhenz,jessca.koch,chrstoph.strecks}@kt.edu

More information

Lecture 3. Ax x i a i. i i

Lecture 3. Ax x i a i. i i 18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest

More information

Constant-Size Structure-Preserving Signatures Generic Constructions and Simple Assumptions

Constant-Size Structure-Preserving Signatures Generic Constructions and Simple Assumptions Constant-Sze Structure-Preservng Sgnatures Generc Constructons and Smple Assumptons Masayuk Abe Melssa Chase Bernardo Davd Markulf Kohlwess Ryo Nshmak Myako Ohkubo NTT Secure Platform Laboratores, NTT

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16

STAT 309: MATHEMATICAL COMPUTATIONS I FALL 2018 LECTURE 16 STAT 39: MATHEMATICAL COMPUTATIONS I FALL 218 LECTURE 16 1 why teratve methods f we have a lnear system Ax = b where A s very, very large but s ether sparse or structured (eg, banded, Toepltz, banded plus

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

Chapter 8 SCALAR QUANTIZATION

Chapter 8 SCALAR QUANTIZATION Outlne Chapter 8 SCALAR QUANTIZATION Yeuan-Kuen Lee [ CU, CSIE ] 8.1 Overvew 8. Introducton 8.4 Unform Quantzer 8.5 Adaptve Quantzaton 8.6 Nonunform Quantzaton 8.7 Entropy-Coded Quantzaton Ch 8 Scalar

More information

Foundations of Arithmetic

Foundations of Arithmetic Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an

More information

Vapnik-Chervonenkis theory

Vapnik-Chervonenkis theory Vapnk-Chervonenks theory Rs Kondor June 13, 2008 For the purposes of ths lecture, we restrct ourselves to the bnary supervsed batch learnng settng. We assume that we have an nput space X, and an unknown

More information

Finding Primitive Roots Pseudo-Deterministically

Finding Primitive Roots Pseudo-Deterministically Electronc Colloquum on Computatonal Complexty, Report No 207 (205) Fndng Prmtve Roots Pseudo-Determnstcally Ofer Grossman December 22, 205 Abstract Pseudo-determnstc algorthms are randomzed search algorthms

More information

Linear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space.

Linear, affine, and convex sets and hulls In the sequel, unless otherwise specified, X will denote a real vector space. Lnear, affne, and convex sets and hulls In the sequel, unless otherwse specfed, X wll denote a real vector space. Lnes and segments. Gven two ponts x, y X, we defne xy = {x + t(y x) : t R} = {(1 t)x +

More information

NP-Completeness : Proofs

NP-Completeness : Proofs NP-Completeness : Proofs Proof Methods A method to show a decson problem Π NP-complete s as follows. (1) Show Π NP. (2) Choose an NP-complete problem Π. (3) Show Π Π. A method to show an optmzaton problem

More information

Chapter 8 Indicator Variables

Chapter 8 Indicator Variables Chapter 8 Indcator Varables In general, e explanatory varables n any regresson analyss are assumed to be quanttatve n nature. For example, e varables lke temperature, dstance, age etc. are quanttatve n

More information

Stanford University CS254: Computational Complexity Notes 7 Luca Trevisan January 29, Notes for Lecture 7

Stanford University CS254: Computational Complexity Notes 7 Luca Trevisan January 29, Notes for Lecture 7 Stanford Unversty CS54: Computatonal Complexty Notes 7 Luca Trevsan January 9, 014 Notes for Lecture 7 1 Approxmate Countng wt an N oracle We complete te proof of te followng result: Teorem 1 For every

More information

Random Walks on Digraphs

Random Walks on Digraphs Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected

More information

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law:

Introduction to Vapor/Liquid Equilibrium, part 2. Raoult s Law: CE304, Sprng 2004 Lecture 4 Introducton to Vapor/Lqud Equlbrum, part 2 Raoult s Law: The smplest model that allows us do VLE calculatons s obtaned when we assume that the vapor phase s an deal gas, and

More information

Some Consequences. Example of Extended Euclidean Algorithm. The Fundamental Theorem of Arithmetic, II. Characterizing the GCD and LCM

Some Consequences. Example of Extended Euclidean Algorithm. The Fundamental Theorem of Arithmetic, II. Characterizing the GCD and LCM Example of Extended Eucldean Algorthm Recall that gcd(84, 33) = gcd(33, 18) = gcd(18, 15) = gcd(15, 3) = gcd(3, 0) = 3 We work backwards to wrte 3 as a lnear combnaton of 84 and 33: 3 = 18 15 [Now 3 s

More information

Bayesian predictive Configural Frequency Analysis

Bayesian predictive Configural Frequency Analysis Psychologcal Test and Assessment Modelng, Volume 54, 2012 (3), 285-292 Bayesan predctve Confgural Frequency Analyss Eduardo Gutérrez-Peña 1 Abstract Confgural Frequency Analyss s a method for cell-wse

More information

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction ECONOMICS 5* -- NOTE (Summary) ECON 5* -- NOTE The Multple Classcal Lnear Regresson Model (CLRM): Specfcaton and Assumptons. Introducton CLRM stands for the Classcal Lnear Regresson Model. The CLRM s also

More information

Homework Assignment 3 Due in class, Thursday October 15

Homework Assignment 3 Due in class, Thursday October 15 Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.

More information

Lai-Massey Scheme and Quasi-Feistel Networks (Extended Abstract)

Lai-Massey Scheme and Quasi-Feistel Networks (Extended Abstract) La-Massey Scheme and Quas-Festel Networks (Extended Abstract Aaram Yun, Je Hong Park 2, and Jooyoung Lee 2 Unversty of Mnnesota - Twn Ctes aaramyun@gmalcom 2 ETRI Network & Communcaton Securty Dvson, Korea

More information

Learning Theory: Lecture Notes

Learning Theory: Lecture Notes Learnng Theory: Lecture Notes Lecturer: Kamalka Chaudhur Scrbe: Qush Wang October 27, 2012 1 The Agnostc PAC Model Recall that one of the constrants of the PAC model s that the data dstrbuton has to be

More information

Lecture 10: May 6, 2013

Lecture 10: May 6, 2013 TTIC/CMSC 31150 Mathematcal Toolkt Sprng 013 Madhur Tulsan Lecture 10: May 6, 013 Scrbe: Wenje Luo In today s lecture, we manly talked about random walk on graphs and ntroduce the concept of graph expander,

More information

Hash functions : MAC / HMAC

Hash functions : MAC / HMAC Hash functons : MAC / HMAC Outlne Message Authentcaton Codes Keyed hash famly Uncondtonally Secure MACs Ref: D Stnson: Cryprography Theory and Practce (3 rd ed), Chap 4. Unversal hash famly Notatons: X

More information

Homomorphic Trapdoor Commitments to Group Elements

Homomorphic Trapdoor Commitments to Group Elements Homomorphc Trapdoor Commtments to Group Elements Jens Groth Unversty College London j.groth@ucl.ac.uk Abstract We present homomorphc trapdoor commtments to group elements. In contrast, prevous homomorphc

More information

Hidden Markov Models & The Multivariate Gaussian (10/26/04)

Hidden Markov Models & The Multivariate Gaussian (10/26/04) CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models

More information

THE SUMMATION NOTATION Ʃ

THE SUMMATION NOTATION Ʃ Sngle Subscrpt otaton THE SUMMATIO OTATIO Ʃ Most of the calculatons we perform n statstcs are repettve operatons on lsts of numbers. For example, we compute the sum of a set of numbers, or the sum of the

More information

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens

THE CHINESE REMAINDER THEOREM. We should thank the Chinese for their wonderful remainder theorem. Glenn Stevens THE CHINESE REMAINDER THEOREM KEITH CONRAD We should thank the Chnese for ther wonderful remander theorem. Glenn Stevens 1. Introducton The Chnese remander theorem says we can unquely solve any par of

More information

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0

Bézier curves. Michael S. Floater. September 10, These notes provide an introduction to Bézier curves. i=0 Bézer curves Mchael S. Floater September 1, 215 These notes provde an ntroducton to Bézer curves. 1 Bernsten polynomals Recall that a real polynomal of a real varable x R, wth degree n, s a functon of

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

Lecture 4: Constant Time SVD Approximation

Lecture 4: Constant Time SVD Approximation Spectral Algorthms and Representatons eb. 17, Mar. 3 and 8, 005 Lecture 4: Constant Tme SVD Approxmaton Lecturer: Santosh Vempala Scrbe: Jangzhuo Chen Ths topc conssts of three lectures 0/17, 03/03, 03/08),

More information

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS

BOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all

More information

Recover plaintext attack to block ciphers

Recover plaintext attack to block ciphers Recover plantext attac to bloc cphers L An-Png Bejng 100085, P.R.Chna apl0001@sna.com Abstract In ths paper, we wll present an estmaton for the upper-bound of the amount of 16-bytes plantexts for Englsh

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

Anonymous Identity-Based Broadcast Encryption with Revocation for File Sharing

Anonymous Identity-Based Broadcast Encryption with Revocation for File Sharing Anonymous Identty-Based Broadcast Encrypton wth Revocaton for Fle Sharng Janchang La, Y Mu, Fuchun Guo, Wlly Suslo, and Rongmao Chen Centre for Computer and Informaton Securty Research, School of Computng

More information

Linear Regression Analysis: Terminology and Notation

Linear Regression Analysis: Terminology and Notation ECON 35* -- Secton : Basc Concepts of Regresson Analyss (Page ) Lnear Regresson Analyss: Termnology and Notaton Consder the generc verson of the smple (two-varable) lnear regresson model. It s represented

More information

9 Characteristic classes

9 Characteristic classes THEODORE VORONOV DIFFERENTIAL GEOMETRY. Sprng 2009 [under constructon] 9 Characterstc classes 9.1 The frst Chern class of a lne bundle Consder a complex vector bundle E B of rank p. We shall construct

More information

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009

College of Computer & Information Science Fall 2009 Northeastern University 20 October 2009 College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:

More information

Circular chosen-ciphertext security with compact ciphertexts

Circular chosen-ciphertext security with compact ciphertexts Crcular chosen-cphertext securty wth compact cphertexts Denns Hofhenz March 22, 2012 Abstract A key-dependent message KDM) secure encrypton scheme s secure even f an adversary obtans encryptons of messages

More information

7. Products and matrix elements

7. Products and matrix elements 7. Products and matrx elements 1 7. Products and matrx elements Based on the propertes of group representatons, a number of useful results can be derved. Consder a vector space V wth an nner product ψ

More information

ANOVA. The Observations y ij

ANOVA. The Observations y ij ANOVA Stands for ANalyss Of VArance But t s a test of dfferences n means The dea: The Observatons y j Treatment group = 1 = 2 = k y 11 y 21 y k,1 y 12 y 22 y k,2 y 1, n1 y 2, n2 y k, nk means: m 1 m 2

More information

The Geometry of Logit and Probit

The Geometry of Logit and Probit The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.

More information