Collusion Resistance of Digital Fingerprinting Schemes


TECHNISCHE UNIVERSITEIT EINDHOVEN
Department of Industrial and Applied Mathematics
Philips Research, Information and System Security Group

MASTER'S THESIS

Collusion Resistance of Digital Fingerprinting Schemes

by T.U. Vladimirova

Supervisors: dr. B. Skoric (Philips Research), prof. dr. ir. H.C.A. van Tilborg (TU/e)

Eindhoven, June 2006


Abstract

A digital fingerprint is user-specific information that is added to each copy of distributed digital data. The fingerprint should not be noticeable to the user. It is used to trace illegal distribution of data: should a user illegally distribute his copy, the fingerprint in this copy will trace back to him. In this work we consider fingerprints that are resistant against collusion attacks. In a collusion attack malicious users compare their copies of the data in order to detect the positions of the fingerprints. They alter the detected symbols in order to generate a new copy. This new copy carries a new fingerprint that, as the colluders hope, cannot be traced back to them. A collusion resistant fingerprint still allows tracing of at least some of the colluders.

It is important for content providers to use the small space available for embedding fingerprints as efficiently as possible. Thus, the fingerprinting codes should be as short as possible while still being able to trace colluders. A number of schemes have been proposed to withstand collusion attacks. These schemes use different design principles, make different assumptions on the attack model, and employ different codes for assigning the values of the digital fingerprints. As a result the schemes have different code lengths. This work considers examples of different approaches to the problem. In particular it studies the classical work on collusion-secure codes by Boneh and Shaw, the work on identifiable parent property codes by Hollmann et al., and the work on sequential traitor tracing by Jin and Lotspiech.

We provide a detailed review of the work by Tardos, who proposed a collusion resistant scheme whose fingerprinting code asymptotically achieves the optimal code length derived in the literature. In his work Tardos fixes a number of parameters, which allows him to prove that in his scheme an innocent user is accused, or no guilty user is accused, only with some small probability. We show that these choices are not unique: a better code length can be achieved by picking the scheme parameters more carefully. We finish this work with a comparison of the main parameters of the fingerprinting schemes discussed here.


List of Tables

5.1 The mapping table for 256 movie versions, the Jin-Lotspiech scheme
The notation used in Chapters 6 and 7
Values ε_1^T, ε_2, c, M for the experiments
Tardos scheme parameters: numerical values for M = 10^6, R = ...
Tardos scheme parameters: numerical values for M = 10^9, R = ...
The abbreviations used in Table ...
The code length of different fingerprinting schemes

List of Figures

6.1 The distribution f(p), where t ≤ p ≤ 1 − t
The graph of A = A(R), where R = ln(1/ε_2) / ln(1/ε_1^T)

Contents

Part I  Literature Survey

1 Introduction

2 Digital Fingerprinting
    Digital Watermarking
        Watermarking for Copy Protection
        Watermarking for Copyright Protection
        Tracing Traitors
    Model and Notions
        Parties and their Actions
        The distribution of fingerprinted content
        Marking Assumption and attack model
        Digital Fingerprinting Schemes
    History of Fingerprinting
    This work

3 Frameproof and Collusion-Secure Codes
    Introduction
    Notation
    Frameproof codes
    Collusion-Secure codes
        Code Construction
    c-secure codes
    Poly-log length c-secure codes
        A lower bound
    Further work
        Fighting Two or Three Pirates
        Improved Boneh-Shaw content fingerprinting
        Improved Error-Analysis for Boneh-Shaw Scheme
        Boneh-Shaw Fingerprinting and Soft Decision

4 Identifiable Parent Property Codes
    Introduction
    Two Colluders
        Notation and definitions
    Arbitrary Number of Colluders
        Notation and definitions
        Open questions
    Code Rate Issues
    Different Constructions
        Concatenated Codes
        Column Weight Code
        Recursive IPP construction
    Code Quality Measure

5 Sequential Traitor Tracing
    Introduction
    The scheme
        The distribution scheme
        Colluders strategies
        Tracing traitors
    Conclusion

Part II  The Tardos scheme

6 Tardos Fingerprinting Codes
    Introduction
    The notation used in Chapters 6 and 7
    Definitions
    Construction
        The distribution
        The fingerprinting code
        Accusation algorithm
    Proofs of usability of the Tardos scheme
        Some innocent user does not get accused
        Some guilty user gets accused
    Unreadable digit model

7 Variation of the Tardos Scheme
    Introduction
    Optimizing the scheme parameters
        Theorem
        Theorem
        The set of constraints
        Experiments
    Re-scaling
        Definitions
        Theorem
        Theorem
        The set of constraints for the re-scaled distribution
    Conclusion

8 Summary

A MatLab Code

Bibliography

Part I

Literature Survey


Chapter 1

Introduction

Recent developments in digital technologies have had a great influence on content providers, such as music and movie distributors, and on their customers. It has become extremely easy for a user to make a high-quality copy and to distribute it. In the past users had limited access to professional recording equipment. The copies made by users were of poor quality or were too expensive to produce. For these reasons illegal copying and distribution of music and video was kept at a reasonable level. Nowadays digital technologies allow customers to make copies of digital content identical to the original. These copies are cheap to produce. Therefore, the amount of digital data that is illegally distributed is growing, and this means that businesses lose income.

Illegal copying is done on such a big scale that it does not seem feasible to prevent it. It is more reasonable to discourage users from making or buying illegal copies. An example has been set by Microsoft. The company is trying to discourage customers from illegally copying its software or from buying illegal copies of it. Microsoft makes users aware of the fact that they are buying pirated products [Mic05] and refuses services to users who have bought pirated copies of Microsoft software [Mic06].

Rather than trying to prevent illegal copying, it can be more efficient to find at least some of the users involved in illegal distribution. This can be done with the help of digital watermarking. A watermark is some additional information embedded into a copy of digital data. We assume that watermarks are embedded in such a way that the users cannot remove them. This property is sometimes referred to as robustness. The watermarks are hidden from the users. The distributor applies Information Hiding techniques to hide them, and the positions of the watermarks are kept secret by the distributor. In this work we do not consider Information Hiding techniques, which are studied in [KP00, CMB02]. We simply assume that the watermarking is properly implemented. With the help of these watermarks the distributor, or some other authorized party, can find users involved in illegal distribution of digital data. Such watermarking is known as forensic watermarking. A forensic watermark can be seen as a kind of serial number. Should the data be illegally copied and

distributed, this watermark uniquely identifies the user who is involved in the illegal distribution. Forensic watermarking has already been applied in the real world; see the case of Russell Sprague [BBC04].

In analogy with real-world fingerprints, which are unique for each person, we call forensic watermarks that are unique for each digital object digital fingerprints. Digital fingerprints were first studied in [Wag83]. To protect digital content against illegal distribution the content owners need to embed different fingerprints into the digital copies of the data before sending them to users. These fingerprints should deter users from illegally distributing their copies.

It turns out that when a group of users collaborates, it can detect parts of their fingerprints [BMP86]. This means that the positions of the marks are not secret anymore. The users that collaborate to create a new fingerprinted copy find the positions of the marks by comparing their copies of the data: they can see at which positions the symbols of their fingerprints differ. Using this information they can create a new digital object that has a fingerprint different from the fingerprint of any of the collaborating users. When this copy is captured, they hope that the new fingerprint will not identify any of the collaborating users as guilty. Such actions of a group of users we call a collusion attack, and such a group of collaborating users is called a set of colluders or a coalition.

A number of fingerprinting schemes have been proposed that are able to withstand collusion attacks. These fingerprinting schemes have different assumptions about the marks and the attackers. In particular, they differ in the following aspects.

There are distinct assumptions about the possible actions of the attackers. These actions are specified by an attack model.
  - The narrow-case attack model, where on every detected position attackers can output only those symbols that they see at these positions in their codewords [HvLLT98, BK04, SSW01, JLN04].
  - The general-case attack model, where attackers are assumed to be able to output any symbol, or even an unreadable symbol, on detected positions [BS98, Yac01].

There are distinct approaches to the resilience of fingerprinting schemes.
  - There are schemes that enable finding at least one of the coalition members with certainty; these are called deterministic schemes. For example, [HvLLT98].
  - There are schemes that allow some error when detecting guilty users, probabilistic schemes, for example [BS98]. In probabilistic schemes two important parameters are
      - the false positive error rate, the probability that an innocent user is accused by the scheme;
      - the false negative error rate, the probability of not accusing any coalition member.

Digital fingerprinting schemes are studied in this work from the content owner's perspective. This puts certain requirements on the implementation of fingerprinting schemes:

1. There is a limitation on the amount of data that can be embedded into digital copies. The watermarks should be embedded in such a way that the copies are still usable. The watermarks should not be visible or obtrusive to the user.
2. The probability of a false accusation should be as small as possible. This is important for the following reasons: if an innocent user gets falsely accused, the content owners get bad publicity and, therefore, fewer clients; and if innocent users get accused too often, the system fails, since the accusations provided by such a scheme are not accepted in court.
3. The fraction of guilty users that are captured can be small. The probability that a coalition member gets away with participating in a coalition can be as big as 50%. This rate still allows repeat offenders to be caught.

The aim of this thesis is
- to study the current literature on fingerprinting schemes;
- to study the different approaches taken to tackle the problem and the different attack models;
- to provide insight into how these approaches influence the code length of fingerprinting codes;
- to study the fingerprinting scheme presented by Tardos in [Tar03]. The Tardos scheme is asymptotically the best known fingerprinting scheme: it provides shorter codes than all previously suggested schemes;
- to provide some insight into the following problem: the code length of the Tardos fingerprinting codes is 100 c² ln(M/ε_1). Where does the large constant 100 come from? Is it possible to make it smaller? (A short calculation illustrating the size of this length is given at the end of this chapter.)
- to compare the fingerprinting schemes [BS98, HvLLT98, SSW01, JLN04, Tar03] with respect to their scheme parameters, attack models and, eventually, code length.

This work contains the following 7 chapters.

Chapter 2 introduces fingerprinting and watermarking in general. It explains why and where watermarking, and fingerprinting in particular, is used. This chapter introduces the notation and defines the notions used in this work. It also provides an overview of different assumptions about attack models and fingerprints.

Chapter 3 is focused on the classical work on fingerprinting schemes by Boneh and Shaw. This chapter provides a detailed review of the Boneh-Shaw fingerprinting scheme and of the construction of the Boneh-Shaw fingerprinting code. Here we also mention some subsequent work and improvements on this scheme.

Chapter 4 reviews the literature on identifiable parent property (IPP) codes. It starts with the classical work on this subject [HvLLT98], then reviews some results on generalized identifiable parent property codes [SSW01]. The chapter ends with a brief review of further literature on identifiable parent property codes.

Chapter 5 is entirely devoted to the work by Jin and Lotspiech. This work [JLN04, JL05] takes a different approach to fingerprinting digital data than previous work. It allows building short codes with a tracing algorithm that is able to trace the whole coalition.

Chapter 6 provides a detailed analysis of the fingerprinting scheme by Tardos. The fingerprinting codes used in this scheme achieve the best known asymptotic code length.

Chapter 7 contains the details of our work on the Tardos fingerprinting scheme. This chapter presents an optimization of the scheme that allows us to achieve a better code length. Some ideas for further improvements can also be found in this chapter.

Chapter 8 summarizes the work done in the thesis. It contains a table comparing the fingerprinting schemes described in the previous chapters with respect to different scheme parameters.
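As promised above, here is a quick, hedged illustration (not part of the original thesis text) of the Tardos code length m = 100 c² ln(M/ε_1) mentioned among the aims. The parameter choices below are hypothetical and the natural logarithm is assumed, since the formula is quoted with ln.

    import math

    def tardos_length(c, M, eps1):
        """Code length of the Tardos code, m = 100 * c^2 * ln(M / eps1),
        as quoted in this thesis (natural logarithm assumed)."""
        return 100 * c * c * math.log(M / eps1)

    # Illustrative (hypothetical) parameter choices, not taken from the thesis:
    for c, M, eps1 in [(10, 10**6, 1e-3), (20, 10**6, 1e-3), (20, 10**9, 1e-3)]:
        m = tardos_length(c, M, eps1)
        print(f"c={c:3d}, M={M:.0e}, eps1={eps1:g}: m = {m:,.0f} bits")

Even for moderate coalition sizes the length runs into hundreds of thousands of symbols, which is why the question about the constant 100 matters in practice.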

Chapter 2

Digital Fingerprinting

This chapter introduces forensic watermarking and shows its place among digital watermarking schemes. It briefly discusses practical applications of watermarking, introduces notation, and provides some definitions. This chapter reviews different assumptions about attack models and fingerprints. In particular, it introduces the Marking Assumption, following [BS98].

2.1 Digital Watermarking

Digital watermarking is a way of embedding some additional information into original data. This information is hidden. It can be used for different purposes, for example to enable copy protection and copyright protection (see [KP00]). Digital fingerprinting is a kind of digital watermarking that is used to enable tracing of colluding users.

Watermarking for Copy Protection

Copy protection is a desirable feature for multimedia distribution systems. A digital watermark embedded into a copy can be used for copy protection. For example, a movie can have a 'copy never' watermark embedded [KP00] to prevent a DVD recorder from copying the data. This kind of watermark requires compliant hardware devices that are capable of detecting the watermarks and restricting copying [MCB99].

Watermarking for Copyright Protection

It can happen that some party makes a copy of digital content and claims to be the rightful owner. Embedded data can contain information about the copyright owner [KP00]. Thus, a watermark can be used for copyright protection, in the sense that it prevents other parties from claiming to be the owners of the data.

Tracing Traitors

Traitor tracing was introduced by Chor et al. in [CFN94]. These schemes withstand unauthorized distribution of data in the following sense. Digital data is

sent to a set of authorized parties, a set of users. This data is encrypted so that unauthorized parties are not able to view it. Each user has a decoder that allows decryption of the data with some unique key. Malicious users can use their key information to construct a pirate decoder, decrypt the information and distribute it among unauthorized parties. Malicious users can also use their copies to generate a new copy of the data and illegally distribute it to unauthorized parties. This second kind of attack is known as the collusion attack. In this work we only study the schemes with respect to this kind of attack.

A forensic watermark can contain information about the original receiver of the data. In this way each copy is made unique for each user. This kind of forensic watermark we call the digital fingerprint throughout this work. Once a copy is captured as being illegally distributed, the digital fingerprint will point at the legitimate users involved in the illegal distribution of their copies. Digital fingerprinting schemes do not prevent users from making illegal copies. They provide the traitor tracing property while, on the other hand, not requiring the use of compliant devices.

The following two traitor tracing approaches, dynamic and sequential traitor tracing, withstand collusion attacks. In dynamic traitor tracing schemes [FT98, Tas05] the portions of watermarked data are distributed in consecutive time intervals. When an illegal copy is found, an accusation algorithm is applied to it to trace one or two offending users. These users are disconnected from the system and the watermarking is adjusted with respect to the information obtained about the offending users. The distributor studies the positions of the watermarks that have been detected and uses this information in watermarking the next portion of data. Thus, the captured pirate copies are used both for tracing the colluders and for generating watermarks. Generating new watermarks at each step can impose a computational bottleneck. Sequential tracing schemes [SNW03] fix this problem. In sequential tracing the watermarks and their positions are pre-computed before distribution. The distributor removes offending users from the system sequentially, just like in dynamic tracing schemes. Thus, the information that is obtained from captured pirate copies is used to trace colluders but not to compute new watermarks; the watermarks are computed in advance.

2.2 Model and Notions

This section provides the terminology that is used in the subsequent chapters. It also gives an overview of the attack models presented in the current literature.

Parties and their Actions

A digital object, or digital content, are terms that are used interchangeably throughout this work. A digital object refers to some piece of digital data, for example a movie.

Original data is data that does not contain any fingerprints. It can be accessed only by a legitimate party, the distributor, who creates fingerprinted copies of the original data and distributes them among legitimate parties.

A distributor is the sole party that has access to the original data. The distributor makes copies of the original data, fingerprints them and sends them to users. We assume in this work that the distributor is also the party who performs the traitor tracing.

Legal distribution is performed by an authorized party. The distributor is the only party authorized to distribute copies of the digital content. Distribution of copies by other parties is illegal or unauthorized.

Colluders are a group of users who are authorized to access fingerprinted copies of the original data. Together they create new copies containing new fingerprints. These new copies are distributed by the colluders among parties that are not authorized to access the data; in this way those parties gain unauthorized access. When creating a new copy, the actions of the colluders are restricted by the Marking Assumption.

A fingerprinted copy. The copies that the distributor sends to the users contain a large amount of data that is common to all users, the public data, and a small amount of data that is specific to each user, the private or secret data. This secret data contains the user-specific fingerprint.

A fingerprint is some information that is embedded into each copy of the original data before it is sent to a user. A fingerprint is a string of symbols (watermarks) from some alphabet. The symbols of a fingerprint are sometimes referred to as marks. The fingerprint is unique for each user. When some code C is used for fingerprinting the data, the fingerprint is a codeword of this code. Let M be the number of users in the fingerprinting scheme. Let Q be an alphabet of size q. Let w^i ∈ Q^N be an N-tuple, the fingerprint that is embedded into the copy sent to user i, where i ∈ {1, ..., M}.

A detectable mark is a mark that differs in at least two fingerprints of the colluders. Otherwise, the mark is undetectable.

Let U be a coalition of size c that collaborates to generate a new copy of the original data. The coalition does so by changing some marks in the fingerprints that the coalition possesses. The coalition can only change the marks it can detect (see the section on the Marking Assumption and attack model below).

Example 2.1. Let U_1, U_2 and U_3 be a group of three colluders that possess the codewords (...), (...) and (...) respectively. By comparing their codewords they can detect some of the positions where their codewords differ.

[Table: the three codewords compared position by position, with a row marking each position as detected or not.]

The detected positions are 1, 2, 3, 5 and 9. Using these positions the colluders are able to create a new fingerprint. The assumptions on the attack model and the tactics that the colluders follow can differ; therefore in different models the colluders can create different fingerprints. If the colluders have chosen a majority-vote tactic for constructing the new fingerprint and are not allowed to introduce any unreadable symbols, the new fingerprint carries the majority symbol on every detected position. If the colluders are allowed to introduce erasures, every detected position of the new fingerprint can hold 0, 1, or an unreadable symbol, which we denote by ?. The set of all strings that the colluders can produce in this way is known as the feasible set of the coalition [BS94, BS98].

The distribution of fingerprinted content

The distribution of the fingerprinted content works as follows. Let C be the code used for fingerprinting the digital data and let P be the object to be distributed. The distributor picks a codeword w^i ∈ C, where 1 ≤ i ≤ M and M is the number of users, and generates a copy of P with the unique fingerprint w^i in it, P(w^i). The distributor partitions P into N pieces with exactly one mark in each piece. Each piece p(r, s) has the r-th mark in state s, where 1 ≤ r ≤ N and 1 ≤ s ≤ q, with q the size of the alphabet. The distributor assigns a unique codeword w^i to each user u_i and sends to each user a copy of the digital data marked with the user's codeword. Given a codeword w^i = w^i_1 ... w^i_N ∈ C, the copy of P that is shipped to u_i is

P(w^i) = {p(1, w^i_1), p(2, w^i_2), ..., p(N, w^i_N)}.

Marking Assumption and attack model

The Marking Assumption [BS98] stands for the following:

1. Colluding users may detect a certain mark only if this mark differs between their copies; otherwise the mark is undetectable. This property is sometimes referred to in the literature as fidelity [MCLK98, Eng05].
2. Users cannot change undetected marks without rendering the object useless. This property is sometimes referred to as robustness [MCLK98, Eng05]. The colluders cannot alter undetected marks.

Depending on further assumptions about the strength of the colluders, they are allowed to alter detected marks in two ways (a small code sketch illustrating such an attack is given at the end of this chapter). They can be restricted to output in the new fingerprint only the symbols they see on the detected positions of their own fingerprints. Such models are sometimes referred to in the literature as the narrow-case model [HvLLT98].

Or they can be allowed to output any symbol of the alphabet, or even an unreadable digit, on detected positions. Such models are sometimes referred to in the literature as the general-case model [BBK03]. For example, [BS98] considers the case where the colluders know the whole alphabet and can output any letter of the alphabet on the detected positions; such models are known as arbitrary digit models. If the colluders may even introduce an erasure, a powerful attacker is assumed; these models are known as unreadable digit models [BS98, Tar03]. On the other hand, in IPP fingerprinting schemes the colluders can only output the letters they see at these positions and are assumed not to be able to introduce any erasures [HvLLT98, BK04, SSW01].

Digital Fingerprinting Schemes

Fingerprinting schemes can also differ with respect to their error tolerance.

(i) Fingerprinting schemes can be deterministic. These schemes allow catching at least one member of a guilty coalition with 100% certainty. In these schemes innocent users never get falsely accused.
(ii) Fingerprinting schemes can be probabilistic. These schemes allow some error probability of accusing an innocent user or of missing all guilty users.

While the deterministic schemes (i) seem to be the best because an innocent user is never falsely accused, it turns out that there are constraints on the alphabet size of the fingerprinting code (like q ≥ c², see Chapter 4). Another difficulty is that there are no explicit constructions of IPP codes for the general case; most constructions are for restricted families of parameters [HvLLT98, SSW00, SSW01]. The probabilistic schemes (ii) provide a reasonable tradeoff between the amount of additional information embedded into the original data and the error probability. This makes probabilistic schemes more attractive for designers ([BS98] and [Tar03]).

History of Fingerprinting

The concept of digital fingerprinting was first presented by Wagner in [Wag83]. In the scheme proposed by Wagner all users receive a copy of a digital object that contains a user-specific fingerprint. When, later on, a user distributes his copy, it can be traced back to him by this fingerprint. This should discourage users from illegally distributing their copies of data.

Blakley et al. were the first to study collusion attacks on fingerprinting schemes [BMP86]. There it was mentioned for the first time that colluding users can extract some information about their fingerprints by comparing their copies of the data. The attack model was further studied in [BS94] and [BS98]. In this work Boneh and Shaw came up with the concept of the Marking Assumption and presented a construction of probabilistic fingerprinting codes resilient against collusion attacks.

Further work on fingerprinting codes resilient to collusion attacks was done by Hollmann et al. In [HvLLT98] they considered deterministic schemes resilient to attacks of at most 2 colluders. The codes proposed in [HvLLT98] are called identifiable parent property (IPP) codes. These codes were generalized to the case of an arbitrary number of colluders by Staddon et al. in [SSW00, SSW01], who further investigated the properties of IPP codes and stated a number of open questions about them. These open questions were later addressed by Barg et al. in [BCE+01].

This work

In this thesis we focus on solutions to the collusion attack problem provided in the current literature. This work consists of the following parts. The first part (Chapters 3-5) reviews a number of deterministic and probabilistic fingerprinting schemes; in particular we are interested in the length of the fingerprinting codes. The second part (Chapters 6 and 7) presents an analysis and optimization of the scheme suggested by Tardos in [Tar03]. The last part (Chapter 8) summarizes the results discussed in this thesis.

We consider the following design criteria for fingerprinting schemes as the most important.

(a) The probability of accusing an innocent user (sometimes referred to as the false positive rate in the literature) must be small enough. We allow the probability of accusing an innocent user to be between 10^-5 and ...
(b) The probability of not accusing any guilty user (sometimes referred to as the false negative rate in the literature) can be about 1/2. This is still considered good enough, since repeat offenders are caught with high probability: roughly speaking, a colluder who escapes a single tracing with probability 1/2 escapes k independent tracings with probability only 2^-k.
(c) The code length should be as small as possible.

These criteria have the following meaning: since we look at the scheme from the digital content provider's perspective, we are mainly concerned with keeping the length of the fingerprints and the probability of accusing an innocent user as small as possible. If a low probability of accusing an innocent user can be achieved by allowing the probability of not accusing any guilty user to be large, we can choose this option.

In this work we do not consider the techniques used for embedding watermarks into digital objects. We assume that the embedding is implemented in a proper way.
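To conclude this chapter, here is a small illustrative sketch (not from the thesis) of the collusion attack and the Marking Assumption of Section 2.2. The three binary codewords below are hypothetical; they are merely chosen so that the detected positions match those of Example 2.1 (positions 1, 2, 3, 5 and 9, printed 0-based here). The sketch detects the detectable positions and forms a pirate copy by majority vote; under the unreadable digit model every detected position could instead be set to 0, 1 or ?.

    # Hypothetical example: three colluders compare their binary fingerprints.
    # The codeword values are made up for illustration, not taken from the thesis.
    w1 = [0, 0, 1, 1, 0, 1, 0, 1, 1]
    w2 = [1, 0, 0, 1, 1, 1, 0, 1, 1]
    w3 = [1, 1, 1, 1, 1, 1, 0, 1, 0]

    codewords = [w1, w2, w3]
    N = len(w1)

    # Marking Assumption: a position is detectable only if the colluders' symbols differ there.
    detected = [r for r in range(N) if len({w[r] for w in codewords}) > 1]
    print("detected positions (0-based):", detected)

    # Narrow-case attack with a majority-vote tactic: on each detected position output
    # the symbol seen by the majority of the colluders; undetected positions keep
    # the common symbol, since they cannot be altered.
    pirate = []
    for r in range(N):
        symbols = [w[r] for w in codewords]
        pirate.append(max(set(symbols), key=symbols.count))
    print("pirate copy (majority vote):", pirate)

    # In the unreadable-digit (general-case) model each detected position could
    # instead carry 0, 1 or '?', so the feasible set is much larger.
    feasible_size = 3 ** len(detected)
    print("size of the feasible set with erasures:", feasible_size)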

Chapter 3

Frameproof and Collusion-Secure Codes

This chapter provides a detailed review of the classic work on collusion-secure fingerprinting schemes by Boneh and Shaw [BS94, BS98]. In this work the Marking Assumption was formulated for the first time. Some subsequent work and improvements on their scheme are also discussed in this chapter.

3.1 Introduction

The scheme proposed in [BS98] is a probabilistic fingerprinting scheme. It is resistant to attacks of an arbitrary number of colluders. We call this scheme the Boneh-Shaw fingerprinting scheme throughout this work. In the Boneh-Shaw fingerprinting scheme the colluders are assumed to be powerful: they are allowed to output unreadable symbols on the detected positions. The fingerprinting code for the scheme is built by concatenating the random codes of [Che96] with collusion-secure codes. The code length of the Boneh-Shaw fingerprinting scheme is O(c⁴ log(M/ε) log(1/ε)), where M is the number of users, c is the number of colluders and ε is the probability of accusing an innocent user.

In the last part of this chapter we briefly review modifications and improvements of the Boneh-Shaw scheme. Sebé and Domingo-Ferrer [SDF03] used the same construction principle as Boneh and Shaw, namely concatenating random-looking codes with an error-correcting code; in the fingerprinting scheme of [SDF03] the codes are built by concatenating scattering codes with simplex codes. However, they considered a weak attacker model, and the scheme derived in that work is a deterministic 3-collusion resistant scheme. Yacobi [Yac01] considered the Boneh-Shaw fingerprinting scheme under a modified marking assumption, where the colluders are allowed to perform random jamming attacks. The fingerprinting codes are built by concatenating the Boneh-Shaw code with pseudo-random sequences, while in the original Boneh-Shaw scheme random codes were used. Schaathun in [Sch03] shows that the Boneh-Shaw fingerprinting scheme performs better than shown in the original work [BS98]. By taking a

different approach in the error analysis of the Boneh-Shaw scheme, Schaathun shows that the Boneh-Shaw collusion-secure codes can be made shorter. In subsequent work [SFM05] an attempt was made to apply soft decision list decoding to the Boneh-Shaw fingerprinting scheme. This could be a way to further improve the performance of the codes. However, [SFM05] leaves it as an open question whether soft decision list decoding can be applied to the Boneh-Shaw fingerprinting scheme.

3.2 Notation

Let C be a collusion-resistant fingerprinting code and N the code length. Let q be the size of the alphabet Q. The marks of the fingerprints are elements of the alphabet Q, i.e. each mark of the fingerprint can be in one of q states. Let Q ∪ {?} be the alphabet extended with the unreadable symbol ?. Let |C| = M, where M is the number of users in the scheme. Let U be the set of colluders and let |U| = c. When colluding users detect a mark they can change it to any of the q + 1 states of the extended alphabet.

Definition 3.1. An (M, N)-code C is a collection of M codewords of length N over the alphabet of size q: {w^(1), ..., w^(M)} ⊆ Q^N. The codeword w^(j) is assigned to user u_j, where 1 ≤ j ≤ M. The fingerprinting code C can be seen as a matrix with M rows, where each row is the fingerprint w^(j) assigned to user u_j, 1 ≤ j ≤ M.

Definition 3.2. Let C = {w^(1), ..., w^(M)} be an (M, N)-code and U a coalition of users. The position i is undetectable for U if the words assigned to the members of the coalition U match in this position.

Definition 3.3. Let w ∈ Q^N be a codeword of length N and let I = {i_1, ..., i_r} ⊆ {1, ..., N} be a set of positions. The restriction of w to the positions I is the word w_I = w_{i_1} w_{i_2} ... w_{i_r}, where w_i is the i-th letter of w.

Example 3.1. Let I = {4, 5, 6} and let w = (111222); then the restriction is w_I = (222).

Definition 3.4. Let C = {w^(1), ..., w^(M)} be an (M, N)-code and U a coalition of users. Let R be the set of undetectable positions for U. The feasible set of U is

F(U; Q) = {w ∈ (Q ∪ {?})^N : for all u ∈ U, w_R = w^(u)_R}.

The feasible set contains all the possible words that match the codewords of the coalition on the undetectable positions.

Example 3.2. Let the users A and B hold the following codewords defined over some alphabet Q: A: (...), B: (...). Let the coalition U consist of the users A and B, U = {A, B}. Then the feasible set F(U; Q) is the set of all words in which the positions where A and B agree are fixed to their common symbol, and every other position can be any letter of Q or an erasure.
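As an illustrative sketch (not from the thesis), the following code computes the undetectable positions of a coalition and tests membership in the feasible set of Definition 3.4, in the unreadable-digit model where any letter of Q or '?' may appear on detectable positions. The function names, the alphabet and the codewords are hypothetical.

    def undetectable_positions(coalition):
        """Positions where all coalition codewords agree (Definition 3.2)."""
        n = len(coalition[0])
        return [i for i in range(n) if len({w[i] for w in coalition}) == 1]

    def in_feasible_set(word, coalition, alphabet):
        """Membership test for the feasible set F(U; Q) of Definition 3.4:
        the word must match the coalition on every undetectable position; on the
        remaining positions it may carry any letter of Q or the erasure '?'."""
        n = len(coalition[0])
        if len(word) != n:
            return False
        undet = set(undetectable_positions(coalition))
        for i, s in enumerate(word):
            if i in undet:
                if s != coalition[0][i]:
                    return False
            elif s not in alphabet and s != '?':
                return False
        return True

    # Hypothetical ternary alphabet and a coalition of two users.
    Q = {'0', '1', '2'}
    A = "0212012"
    B = "0112002"

    print(undetectable_positions([A, B]))          # positions where A and B agree
    print(in_feasible_set("0?12?02", [A, B], Q))   # True: matches the coalition on undetectable positions
    print(in_feasible_set("0?12?12", [A, B], Q))   # False: alters an undetectable position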

3.3 Frameproof codes

If an illegally distributed copy is found that contains the fingerprint of some user Alice, Alice gets accused of illegal distribution. What if Alice could claim that some users colluded and framed her, i.e. created a new copy with Alice's fingerprint in it? If it is possible to frame innocent users, the scheme is useless. So the first property that a fingerprinting scheme must have is being frameproof. In a frameproof scheme no coalition of users can frame an innocent user; if a copy is illegally distributed and it happens to contain the codeword of Alice embedded in it, then Alice is guilty of illegal distribution. The condition that the fingerprinting scheme must be frameproof is usually relaxed for the sake of a shorter code length: the coalition size is limited to c users. The scheme is called c-frameproof if no coalition of size at most c can frame an innocent user. In the rest of this chapter we consider fingerprinting codes over the binary alphabet Q unless announced otherwise.

Definition 3.5. The (M, N)-code C is c-frameproof if no coalition U of at most c users can generate a word w such that w ∈ F(U; Q) ∩ (C \ U), where U denotes the set of codewords held by the coalition and C denotes all the codewords, those of the colluders and of the innocent users. This means that no coalition can generate a codeword that belongs to a user outside of the coalition.

Definition 3.6. The C_o(N)-code is a code consisting of N binary words of length N with exactly one 1 in each codeword.

The following example shows that the code C_o(N) is N-frameproof.

Example 3.3. Consider the (3, 3)-code C_o(3). This code consists of the codewords {100, 010, 001} and it is 3-frameproof: no coalition of c ≤ 3 colluders can frame an innocent user. When c = 3 this is trivial, since no innocent user is left. When c = 2 the colluders cannot detect the position on which the innocent user has a 1 in his fingerprint. Since this position is undetectable for the colluders, by the Marking Assumption they have to output 0 there.

The code C_o(N) is impractical because its code length is linear in the number of users. It is possible to build short frameproof codes which are able to accommodate a large number of users. Such codes can be obtained by concatenating frameproof codes with error-correcting codes. Let C_1 be a c-frameproof (N_1, M_1)-code and let C_2 be an error-correcting code with parameters [N_2, M_2, D_2]_{M_1} over the alphabet of size M_1 (the subscript M_1 denotes the size of the alphabet over which the code is defined). Let C be the code obtained by concatenation of the codes C_1 and C_2, with C_1 the inner binary code and C_2 the outer code. The resulting code C is an (N_1 N_2, M_2)-code. It is constructed in the following way: for a codeword v = v_1 v_2 ... v_{N_2} ∈ C_2,

W_v = w^(v_1) || w^(v_2) || ... || w^(v_{N_2}),

where each v_i is a number between 1 and M_1 and || denotes concatenation. Each w^(v_i) is a codeword from C_1. The code C consists of all codewords W_v such that v ∈ C_2.

Lemma (Lemma 3.2, [BS98]). The concatenation of the codes C_1 and C_2 is a c-frameproof code, given that the minimal distance of the error-correcting code C_2 is such that

D_2 > N_2 (1 − 1/c).   (3.1)

Proof. Let U be a coalition of size c and let inequality (3.1) be satisfied. We show that the feasible set of the coalition contains no codewords besides those belonging to the coalition U. Let the codewords of the coalition be w^(v^1), ..., w^(v^c), and suppose the contrary: the feasible set contains a word w that belongs to some innocent user u while (3.1) is satisfied. Let v^i ∈ C_2, 1 ≤ i ≤ c, be the outer codewords from which the codewords of the coalition were derived, and let v be the outer codeword from which w was derived. From (3.1) it follows that each v^i matches v in fewer than N_2/c positions. Thus, v^1, ..., v^c together match v in fewer than N_2 positions, so there exists a position j such that v^i_j ≠ v_j for all 1 ≤ i ≤ c. Since C_1 is c-frameproof, the inner codeword w^(v_j) does not belong to the feasible set F({w^(v^1_j), ..., w^(v^c_j)}; C_1). And as w^(v_j) is a part of the codeword w, the contradiction is derived.

3.4 Collusion-Secure codes

When an illegally distributed copy is found, the distributor or some other authorized party would like to know who was involved in producing this copy. To do so the distributor needs an algorithm A for tracing colluding users. Basically, A maps the received pirated fingerprint y to a set of users.

Definition 3.7. A code C is totally c-secure if there exists a tracing algorithm A such that if a coalition U of at most c users generated a word y, then the algorithm A outputs at least one of the members of the coalition: A(y) ∈ U.

If C is a totally c-secure code, then non-intersecting coalitions of users have non-intersecting feasible sets. This becomes clear from the following observation: let U_1 and U_2 be such that U_1 ∩ U_2 = ∅, and suppose U_1 and U_2 can both generate some fingerprint y, i.e. y belongs to the feasible sets of both U_1 and U_2. Then it is not possible to tell which coalition is guilty, and therefore the code is not totally secure. It turns out that there are no binary totally c-secure codes for the parameters c ≥ 2 and M ≥ 3.

Example 3.4 (Theorem 4.2, [BS98]). Let w^(1), w^(2), w^(3) be three distinct codewords assigned to the users u^(1), u^(2), u^(3). Let w = (w_1, ..., w_N) be the majority word, defined as follows:

w_i = w^(1)_i  if w^(1)_i = w^(2)_i or w^(1)_i = w^(3)_i,
w_i = w^(2)_i  if w^(2)_i = w^(3)_i,
w_i = ?        otherwise.

The word w is feasible for any of the coalitions {u_1, u_2}, {u_1, u_3} and {u_2, u_3}. Thus, the code is not totally 2-secure.

This restriction can be overcome by allowing some small error probability ε of accusing innocent users. This error probability is an important parameter of the fingerprinting scheme. c-secure fingerprinting schemes that allow some ε-error are called c-secure with ε-error.

Definition 3.8. A fingerprinting scheme is c-secure with ε-error if there exists a tracing algorithm A such that if a coalition U of at most c users generates a word y, then Pr[A(y) ∈ U] > 1 − ε. The colluders are assumed to choose the symbols of the pirated codeword independently.

Code Construction

In this section we first follow [BS98] in constructing (N, M)-codes which are c-secure with ε-error. The length of these codes is polynomial in M, and therefore the codes are impractical. These c-secure codes with ε-error are then used in the main construction of the Boneh-Shaw fingerprinting scheme: they serve as inner codes in a concatenation with random codes. The resulting codes have a code length that is no longer polynomial in M but logarithmic in M.

Before sending the codewords to the users, the distributor picks a random permutation and permutes the bits of the codewords. Thus the colluders do not know which bits represent which positions in the codewords. This permutation plays an important role in the construction.

3.5 c-secure codes

The following example shows the construction of an M-secure code with ε-error.

Example 3.5. An M-secure code with ε-error for M = 5 looks as follows (each column type is repeated d = 3 times):

          c_1 c_1 c_1   c_2 c_2 c_2   c_3 c_3 c_3   c_4 c_4 c_4
    A :    1   1   1     1   1   1     1   1   1     1   1   1
    B :    0   0   0     1   1   1     1   1   1     1   1   1
    C :    0   0   0     0   0   0     1   1   1     1   1   1
    D :    0   0   0     0   0   0     0   0   0     1   1   1
    E :    0   0   0     0   0   0     0   0   0     0   0   0

Here c_1, c_2, c_3, c_4 denote the column types: column c_i is a column in which the first i users see 1 and the rest of the users see 0. The set of positions on which the users see column c_i is denoted by B_i. For example, c_2 is the column (1, 1, 0, 0, 0)^T.

Let users A, C and D collude. Together they see the following part of the matrix:

    A :    1   1   1     1   1   1     1   1   1     1   1   1
    C :    0   0   0     0   0   0     1   1   1     1   1   1
    D :    0   0   0     0   0   0     0   0   0     1   1   1

Since B does not participate in the coalition, the colluders cannot distinguish between the bits on the positions 1, 2, 3 and 4, 5, 6, because the colluders receive their codewords with the bit positions permuted. This observation plays the main role in the tracing algorithm, which is discussed below. The probability of accusing an innocent user in this scheme is 1/2^9; in the general case ε = 1/2^{d(n−1)}, where d is the number of times the columns are repeated and n the number of different columns.

Definition 3.9. Let c_m be a column of height M whose first m bits are 1 and the rest are 0. The C_o(M, d)-code is the code that consists of all columns c_1, ..., c_{M−1}, each duplicated d times.

Thus, the code from Example 3.5 is the C_o(5, 3) code. The number of replications d determines the error probability ε. The following is the accusation algorithm for the c-secure fingerprinting scheme; a code sketch of the construction and of this accusation rule is given below.

Algorithm 1: Let y ∈ {0, 1}^N. To find a set of users who generated y, follow these steps:

1. If weight(y_{B_1}) > 0, where weight denotes the Hamming weight, then the users were able to detect the marks on the first d positions. Thus, user 1 participated in the coalition. Output: user 1 is guilty.
2. If weight(y_{B_{M−1}}) < d, then the users were able to detect the marks on the last d positions. Output: user M is guilty.
3. If weight(y_{B_{i−1}}) < k/2 − sqrt((k/2) log(2M/ε)), where k = weight(y_{R_i}) and 2 ≤ i ≤ M − 1, then output: user i is guilty.

Theorem 3.1 (Theorem 5.1, [BS98]). For M ≥ 3 and ε > 0, let d = 2M² log(2M/ε). The fingerprinting scheme C_o(M, d) is M-secure with ε-error. The code length of C_o(M, d) is O(M³ log(M/ε)).

We show here the observations that lead to this algorithm. Let B_m be the set of positions in which the users see column c_m, i.e. the first m users see 1 and the rest see 0. Each set B_m has d elements. In Example 3.5, B_2 is the set of positions where the users see column c_2, i.e. the first two users, A and B, see 1 and the rest see 0; thus B_2 consists of the positions 4, 5 and 6. Let R_i be the set of positions on which the users see the columns c_{i−1} or c_i, so R_i = B_{i−1} ∪ B_i for 2 ≤ i ≤ n. The accusation algorithm relies on the following observations:

1. Since the colluders do not know the permutation used by the distributor, they do not know which marks represent which bits in the code C_o(M, d).

2. If user u_i is not in the coalition, the colluders see the same values on all bit positions in R_i. For example, let U consist of the users A, C and D from Example 3.5. As user B does not participate in the coalition, the colluders cannot distinguish between the bits from B_1 and from B_2. The colluders will output a fingerprint y with the 0's and 1's distributed uniformly between y_{B_1} and y_{B_2}.

3. Each user i receives a fingerprint that contains the blocks B_{i−1} = 0^d and B_i = 1^d, while all other users have either all zeros or all ones on these positions. Thus, if user i does not participate in the coalition, then y_{B_{i−1} ∪ B_i} will consist of uniformly distributed zeros and ones. If user i does participate in the coalition, then y_{B_{i−1} ∪ B_i} will heavily deviate from a uniform distribution.

How big should the deviation of y_{B_{i−1}} be for user i to get accused by the algorithm? Let the probability of accusing any innocent user be pre-defined by the distributor and equal to ε. If some specific user i is innocent, the probability of accusing this user i should be at most ε/M. Let weight(y_{R_i}) = k and let Y be the random variable that counts the ones in y_{B_{i−1}}. The random variable Y follows the hypergeometric distribution, and we have, for any 0 ≤ r ≤ min(k, d),

Pr[Y = r] = C(d, r) C(d, k−r) / C(2d, k),   (3.2)

where C(a, b) denotes the binomial coefficient. The probability function Pr[Y = r] can be rewritten as

Pr[Y = r] = C(k, r) (1/2^k) · [ Π_{j=0}^{r−1} (1 − j/d) · Π_{j=0}^{k−r−1} (1 − j/d) ] / Π_{j=0}^{k−1} (1 − j/(2d)),   (3.3)

in which every factor in the products tends to 1 as d grows. The random variable Y tends to B(k, 1/2) as d → ∞ [Whi92]. For a random variable X ~ B(k, 1/2) (X is a random variable over k experiments with probability of success 1/2) the probability function is given by

Pr[X = r] = C(k, r) (1/2^k),   (3.4)

and in fact

Pr[Y = r] ≤ 2 Pr[X = r].   (3.5)

A bound for Pr[Y = r] can therefore be found by bounding the probability Pr[X = r]. There exists a good bound for random variables that follow the binomial distribution, namely the Chernoff bound [AS92]. To be able to apply the Chernoff bound, the distribution should have expectation equal to zero. The expectation of Y is E[Y] = k/2, so the bound has to be applied to Y − k/2 instead of Y. User i is accused by the algorithm A of participating in the coalition that created y if Y − k/2 < −σ, for some suitable σ. The probability of this event is

Pr[Y − k/2 < −σ] ≤ 2 Pr[X − k/2 < −σ] ≤ ε/M.   (3.6)
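The sketch referred to above (not from the thesis) builds the C_o(M, d) code of Definition 3.9 and applies the accusation rule of Algorithm 1 to a pirate word created by a coalition. The secret permutation is not simulated explicitly; instead, its effect is modelled by letting the colluders fill every detectable position with a uniformly random bit, which is one possible colluder strategy. The function names and the parameter values are illustrative only; in particular d is taken much smaller than the value d = 2M² log(2M/ε) required by Theorem 3.1, and the natural logarithm is used since the base is not fixed in the text.

    import math
    import random

    def co_code(M, d):
        """C_o(M, d): columns c_1 .. c_{M-1}, each repeated d times (Definition 3.9).
        Row j is the codeword of user j (1-based)."""
        return [[1 if user <= m else 0 for m in range(1, M) for _ in range(d)]
                for user in range(1, M + 1)]

    def collude(code, coalition, rng):
        """Coalition attack: undetectable positions keep the common bit (Marking
        Assumption); on detectable positions the colluders output a random bit,
        since the secret permutation hides which block a position belongs to."""
        rows = [code[u - 1] for u in coalition]
        return [rows[0][i] if len({r[i] for r in rows}) == 1 else rng.randint(0, 1)
                for i in range(len(rows[0]))]

    def algorithm1(y, M, d, eps):
        """Accusation rule of Algorithm 1 (simplified sketch)."""
        B = {m: range((m - 1) * d, m * d) for m in range(1, M)}   # positions of column c_m
        w = lambda pos: sum(y[i] for i in pos)                    # Hamming weight on a block
        accused = []
        if w(B[1]) > 0:
            accused.append(1)
        if w(B[M - 1]) < d:
            accused.append(M)
        for i in range(2, M):
            k = w(B[i - 1]) + w(B[i])                             # weight on R_i = B_{i-1} u B_i
            if w(B[i - 1]) < k / 2 - math.sqrt((k / 2) * math.log(2 * M / eps)):
                accused.append(i)
        return sorted(set(accused))

    rng = random.Random(1)
    M, d, eps = 5, 200, 0.01          # illustrative values; Theorem 3.1 would require a larger d
    code = co_code(M, d)
    y = collude(code, coalition={1, 3, 4}, rng=rng)   # users A, C, D as in Example 3.5
    print("accused users:", algorithm1(y, M, d, eps)) # typically accuses 1 and 4, both colluders

With these settings the rule typically accuses users 1 and 4 and never accuses the innocent users 2 and 5, in line with the observations above.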

Using (3.6) and the Chernoff bound,

2 Pr[X − k/2 < −σ] ≤ 2 exp(−2σ²/k) = ε/M.   (3.7)

From (3.6) and (3.7) the value of σ can be found:

2 exp(−2σ²/k) = ε/M,  so  σ = sqrt((k/2) log(2M/ε)).   (3.8)

Lemma 5.3 of [BS98] proves that Algorithm 1 always returns a non-empty set of accused users, given that the parameters of the code C_o(M, d) are such that d = 2M² log(2M/ε).

3.6 Poly-log length c-secure codes

The fingerprinting scheme shown in the previous section uses codes of length polynomial in the number of users. For a large number of users this is not practical. In this section we sketch the construction of [BS98] that provides fingerprinting schemes with poly-log code length.

Definition 3.10. Let C be an (N, M)-code over an alphabet of size M_o, with the codewords of C chosen uniformly at random. Such codes are called random codes. (Random codes and their properties were studied in [Che96].) The code C'(N, M_o, M, d) is the result of concatenating the random (N, M)-code C, as the outer code, with the code C_o(M_o, d) as the inner code.

The resulting code C'(N, M_o, M, d) contains M codewords and has length N d (M_o − 1). The code C'(N, M_o, M, d) consists of N copies of C_o(M_o, d), where each of the N copies is permuted before it is distributed. The random code is kept secret, along with the permutations.

Theorem 3.2 (Theorem 5.5, [BS98]). Let M_o = 2c, N = 2c log(2M/ε) and d = 2M_o² log(4M_o N/ε). Then C'(N, M_o, M, d) is a c-secure code with ε-error that contains M codewords and has length l = O(N d M_o) = O(c⁴ log(M/ε) log(1/ε)).

The code C_o is a matrix that consists of 2c rows and (2c − 1)d columns. The rows of this matrix are used as the alphabet for the fingerprinting code C. As the code C is a random code, the codewords of C are uniformly distributed; they can be seen as random words over the alphabet of size M_o = 2c.

Sketch of the proof. The theorem is proved by exhibiting an algorithm that, on input of a pirate fingerprint, outputs a set of guilty users. Let y ∈ {0, 1}^l be a codeword produced by the coalition.

Algorithm 2:

Step 1 Apply Algorithm 1 to each component y_i, where 1 ≤ i ≤ N and y = y_1, ..., y_N. Each y_i is an element of the inner code.

Step 2 For each component i = 1, ..., N arbitrarily choose one of the outputs of Algorithm 1. Set z_i to be the chosen output, where 1 ≤ z_i ≤ M_o. Form the word z = z_1 ... z_N.

Step 3 By computing the Hamming distance between z and the codewords of the outer code, find the word w ∈ C closest to z. If user u is the one whose codeword is derived from w, output: user u is guilty.

A lower bound

The following theorem provides a lower bound on the length of a c-secure code with ε-error.

Theorem 3.3 (Theorem 6.1, [BS98]). Let C be an (N, M) fingerprinting scheme over a binary alphabet, where C is c-secure with ε-error. Then the code length is at least

N ≥ (1/2)(c − 3) log(1/(εc)).

There is a big difference between the actual code length of the c-secure fingerprinting codes, which grows as c⁴, and the lower bound of Theorem 6.1 of [BS98]. There have been a number of attempts to overcome this gap, for example [Tar03, SDF03]; a small numerical illustration of this gap is given below. Peikert et al. tightened this bound by a factor of c: in [PSS03] they show that a secure code will always have length at least of order c² log(1/(cε)), as long as log(1/ε) ≥ K k log c, where K is some constant and k is the number of different columns used in the code. The proof of this bound is based on possible strategies of the coalition when generating a new fingerprint. For example, the colluding users can use a biased coin when outputting zeros and ones on the detected positions. Based on the knowledge of the codewords the colluders hold, they can calculate the number of ones that they are going to output on the detected positions in such a way that it would let them escape accusation. These strategies do not help the colluders as long as the code length of the fingerprinting code is of order c² log(1/(cε)).

3.7 Further work

The Boneh-Shaw fingerprinting scheme uses collusion-secure codes of length O(c⁴ log(M/ε) log(1/ε)). These codes are asymptotically better than the codes linear in the number of users M, but they still have considerable length. There have been a number of attempts to improve the code length [Sch03, SFM05, SDF03] and the performance of the scheme [Yac01].

Fighting Two or Three Pirates

Collusion-Secure and Cost-Effective Detection of Unlawful Multimedia Redistribution. Sebé and Domingo-Ferrer consider the case of three colluding users. They present a deterministic scheme that uses binary fingerprinting codes and is resistant against coalitions of at most three colluders.
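To give a rough, hedged feeling for the gap discussed above (this calculation is not from the thesis), the sketch below evaluates the length of the concatenated construction of Theorem 3.2, the lower bound of Theorem 3.3, and the c² log(1/(cε)) order appearing in [PSS03] and in the Tardos code, for one illustrative parameter choice. The natural logarithm is used; since the base of the logarithms is not fixed in the statements above, the absolute numbers are indicative only.

    import math

    def boneh_shaw_length(c, M, eps):
        """Length N*d*(Mo-1) of the concatenated code C'(N, Mo, M, d) with the
        parameters of Theorem 3.2 (natural logarithms assumed)."""
        Mo = 2 * c
        N = math.ceil(2 * c * math.log(2 * M / eps))
        d = math.ceil(2 * Mo**2 * math.log(4 * Mo * N / eps))
        return N * d * (Mo - 1)

    def lower_bound(c, eps):
        """Lower bound of Theorem 3.3: (1/2)(c - 3) log(1/(eps*c))."""
        return 0.5 * (c - 3) * math.log(1 / (eps * c))

    def c2_order(c, eps):
        """The c^2 log(1/(c*eps)) order of [PSS03] and of the Tardos code
        (constant factors omitted)."""
        return c**2 * math.log(1 / (c * eps))

    c, M, eps = 10, 10**6, 1e-3   # illustrative values only
    print("Boneh-Shaw concatenated length:", f"{boneh_shaw_length(c, M, eps):,}")
    print("Theorem 3.3 lower bound       :", f"{lower_bound(c, eps):,.0f}")
    print("c^2 log(1/(c*eps)) order      :", f"{c2_order(c, eps):,.0f}")

For these values the concatenated construction runs into tens of millions of bits, while the lower bound and the c² order are orders of magnitude smaller, which is exactly the gap that motivates the improvements reviewed below and the Tardos scheme of Part II.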


EE5139R: Problem Set 4 Assigned: 31/08/16, Due: 07/09/16 EE539R: Problem Set 4 Assigned: 3/08/6, Due: 07/09/6. Cover and Thomas: Problem 3.5 Sets defined by probabilities: Define the set C n (t = {x n : P X n(x n 2 nt } (a We have = P X n(x n P X n(x n 2 nt

More information

Optimal XOR based (2,n)-Visual Cryptography Schemes

Optimal XOR based (2,n)-Visual Cryptography Schemes Optimal XOR based (2,n)-Visual Cryptography Schemes Feng Liu and ChuanKun Wu State Key Laboratory Of Information Security, Institute of Software Chinese Academy of Sciences, Beijing 0090, China Email:

More information

Lecture 4: Codes based on Concatenation

Lecture 4: Codes based on Concatenation Lecture 4: Codes based on Concatenation Error-Correcting Codes (Spring 206) Rutgers University Swastik Kopparty Scribe: Aditya Potukuchi and Meng-Tsung Tsai Overview In the last lecture, we studied codes

More information

1 Recommended Reading 1. 2 Public Key/Private Key Cryptography Overview RSA Algorithm... 2

1 Recommended Reading 1. 2 Public Key/Private Key Cryptography Overview RSA Algorithm... 2 Contents 1 Recommended Reading 1 2 Public Key/Private Key Cryptography 1 2.1 Overview............................................. 1 2.2 RSA Algorithm.......................................... 2 3 A Number

More information

Theoretical Cryptography, Lecture 10

Theoretical Cryptography, Lecture 10 Theoretical Cryptography, Lecture 0 Instructor: Manuel Blum Scribe: Ryan Williams Feb 20, 2006 Introduction Today we will look at: The String Equality problem, revisited What does a random permutation

More information

CS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003

CS6999 Probabilistic Methods in Integer Programming Randomized Rounding Andrew D. Smith April 2003 CS6999 Probabilistic Methods in Integer Programming Randomized Rounding April 2003 Overview 2 Background Randomized Rounding Handling Feasibility Derandomization Advanced Techniques Integer Programming

More information

An Information-Theoretic Model for Steganography

An Information-Theoretic Model for Steganography An Information-Theoretic Model for Steganography Christian Cachin March 3, 2004 Abstract An information-theoretic model for steganography with a passive adversary is proposed. The adversary s task of distinguishing

More information

Notes on Complexity Theory Last updated: November, Lecture 10

Notes on Complexity Theory Last updated: November, Lecture 10 Notes on Complexity Theory Last updated: November, 2015 Lecture 10 Notes by Jonathan Katz, lightly edited by Dov Gordon. 1 Randomized Time Complexity 1.1 How Large is BPP? We know that P ZPP = RP corp

More information

An introduction to basic information theory. Hampus Wessman

An introduction to basic information theory. Hampus Wessman An introduction to basic information theory Hampus Wessman Abstract We give a short and simple introduction to basic information theory, by stripping away all the non-essentials. Theoretical bounds on

More information

Optimal Reductions between Oblivious Transfers using Interactive Hashing

Optimal Reductions between Oblivious Transfers using Interactive Hashing Optimal Reductions between Oblivious Transfers using Interactive Hashing Claude Crépeau and George Savvides {crepeau,gsavvi1}@cs.mcgill.ca McGill University, Montéral, QC, Canada. Abstract. We present

More information

CPSC 467b: Cryptography and Computer Security

CPSC 467b: Cryptography and Computer Security CPSC 467b: Cryptography and Computer Security Michael J. Fischer Lecture 10 February 19, 2013 CPSC 467b, Lecture 10 1/45 Primality Tests Strong primality tests Weak tests of compositeness Reformulation

More information

Related-Key Statistical Cryptanalysis

Related-Key Statistical Cryptanalysis Related-Key Statistical Cryptanalysis Darakhshan J. Mir Department of Computer Science, Rutgers, The State University of New Jersey Poorvi L. Vora Department of Computer Science, George Washington University

More information

VC-DENSITY FOR TREES

VC-DENSITY FOR TREES VC-DENSITY FOR TREES ANTON BOBKOV Abstract. We show that for the theory of infinite trees we have vc(n) = n for all n. VC density was introduced in [1] by Aschenbrenner, Dolich, Haskell, MacPherson, and

More information

Discrete Mathematics and Probability Theory Spring 2016 Rao and Walrand Discussion 6A Solution

Discrete Mathematics and Probability Theory Spring 2016 Rao and Walrand Discussion 6A Solution CS 70 Discrete Mathematics and Probability Theory Spring 2016 Rao and Walrand Discussion 6A Solution 1. Polynomial intersections Find (and prove) an upper-bound on the number of times two distinct degree

More information

Lecture Notes on Inductive Definitions

Lecture Notes on Inductive Definitions Lecture Notes on Inductive Definitions 15-312: Foundations of Programming Languages Frank Pfenning Lecture 2 September 2, 2004 These supplementary notes review the notion of an inductive definition and

More information

Theme : Cryptography. Instructor : Prof. C Pandu Rangan. Speaker : Arun Moorthy CS

Theme : Cryptography. Instructor : Prof. C Pandu Rangan. Speaker : Arun Moorthy CS 1 C Theme : Cryptography Instructor : Prof. C Pandu Rangan Speaker : Arun Moorthy 93115 CS 2 RSA Cryptosystem Outline of the Talk! Introduction to RSA! Working of the RSA system and associated terminology!

More information

1 Ex. 1 Verify that the function H(p 1,..., p n ) = k p k log 2 p k satisfies all 8 axioms on H.

1 Ex. 1 Verify that the function H(p 1,..., p n ) = k p k log 2 p k satisfies all 8 axioms on H. Problem sheet Ex. Verify that the function H(p,..., p n ) = k p k log p k satisfies all 8 axioms on H. Ex. (Not to be handed in). looking at the notes). List as many of the 8 axioms as you can, (without

More information

Galois Field Commitment Scheme

Galois Field Commitment Scheme Galois Field Commitment Scheme Alexandre Pinto André Souto Armando Matos Luís Antunes University of Porto, Portugal November 13, 2006 Abstract In [3] the authors give the first mathematical formalization

More information

Lecture 5: The Principle of Deferred Decisions. Chernoff Bounds

Lecture 5: The Principle of Deferred Decisions. Chernoff Bounds Randomized Algorithms Lecture 5: The Principle of Deferred Decisions. Chernoff Bounds Sotiris Nikoletseas Associate Professor CEID - ETY Course 2013-2014 Sotiris Nikoletseas, Associate Professor Randomized

More information

Lecture 3: Randomness in Computation

Lecture 3: Randomness in Computation Great Ideas in Theoretical Computer Science Summer 2013 Lecture 3: Randomness in Computation Lecturer: Kurt Mehlhorn & He Sun Randomness is one of basic resources and appears everywhere. In computer science,

More information

Lecture 11: Quantum Information III - Source Coding

Lecture 11: Quantum Information III - Source Coding CSCI5370 Quantum Computing November 25, 203 Lecture : Quantum Information III - Source Coding Lecturer: Shengyu Zhang Scribe: Hing Yin Tsang. Holevo s bound Suppose Alice has an information source X that

More information

Lecture 5, CPA Secure Encryption from PRFs

Lecture 5, CPA Secure Encryption from PRFs CS 4501-6501 Topics in Cryptography 16 Feb 2018 Lecture 5, CPA Secure Encryption from PRFs Lecturer: Mohammad Mahmoody Scribe: J. Fu, D. Anderson, W. Chao, and Y. Yu 1 Review Ralling: CPA Security and

More information

CPSC 467: Cryptography and Computer Security

CPSC 467: Cryptography and Computer Security CPSC 467: Cryptography and Computer Security Michael J. Fischer Lecture 16 October 30, 2017 CPSC 467, Lecture 16 1/52 Properties of Hash Functions Hash functions do not always look random Relations among

More information

Lecture 11: Non-Interactive Zero-Knowledge II. 1 Non-Interactive Zero-Knowledge in the Hidden-Bits Model for the Graph Hamiltonian problem

Lecture 11: Non-Interactive Zero-Knowledge II. 1 Non-Interactive Zero-Knowledge in the Hidden-Bits Model for the Graph Hamiltonian problem CS 276 Cryptography Oct 8, 2014 Lecture 11: Non-Interactive Zero-Knowledge II Instructor: Sanjam Garg Scribe: Rafael Dutra 1 Non-Interactive Zero-Knowledge in the Hidden-Bits Model for the Graph Hamiltonian

More information

Testing Problems with Sub-Learning Sample Complexity

Testing Problems with Sub-Learning Sample Complexity Testing Problems with Sub-Learning Sample Complexity Michael Kearns AT&T Labs Research 180 Park Avenue Florham Park, NJ, 07932 mkearns@researchattcom Dana Ron Laboratory for Computer Science, MIT 545 Technology

More information

Randomness. What next?

Randomness. What next? Randomness What next? Random Walk Attribution These slides were prepared for the New Jersey Governor s School course The Math Behind the Machine taught in the summer of 2012 by Grant Schoenebeck Large

More information

Probabilistically Checkable Arguments

Probabilistically Checkable Arguments Probabilistically Checkable Arguments Yael Tauman Kalai Microsoft Research yael@microsoft.com Ran Raz Weizmann Institute of Science ran.raz@weizmann.ac.il Abstract We give a general reduction that converts

More information

Lecture 1. 1 Introduction. 2 Secret Sharing Schemes (SSS) G Exposure-Resilient Cryptography 17 January 2007

Lecture 1. 1 Introduction. 2 Secret Sharing Schemes (SSS) G Exposure-Resilient Cryptography 17 January 2007 G22.3033-013 Exposure-Resilient Cryptography 17 January 2007 Lecturer: Yevgeniy Dodis Lecture 1 Scribe: Marisa Debowsky 1 Introduction The issue at hand in this course is key exposure: there s a secret

More information

Partitioning Metric Spaces

Partitioning Metric Spaces Partitioning Metric Spaces Computational and Metric Geometry Instructor: Yury Makarychev 1 Multiway Cut Problem 1.1 Preliminaries Definition 1.1. We are given a graph G = (V, E) and a set of terminals

More information

Lecture 14: IP = PSPACE

Lecture 14: IP = PSPACE IAS/PCMI Summer Session 2000 Clay Mathematics Undergraduate Program Basic Course on Computational Complexity Lecture 14: IP = PSPACE David Mix Barrington and Alexis Maciel August 3, 2000 1. Overview We

More information

BLIND DECODER FOR BINARY PROBABILISTIC TRAITOR TRACING CODES

BLIND DECODER FOR BINARY PROBABILISTIC TRAITOR TRACING CODES BLIND DECODER FOR BINARY PROBABILISTIC TRAITOR TRACING CODES Luis Pérez-Freire, Teddy Furon To cite this version: Luis Pérez-Freire, Teddy Furon. BLIND DECODER FOR BINARY PROBABILISTIC TRAITOR TRACING

More information

Optimal symmetric Tardos traitor tracing schemes

Optimal symmetric Tardos traitor tracing schemes Optimal symmetric Tardos traitor tracing schemes Laarhoven, T.M.M.; de Weger, B.M.M. Published in: Designs, Codes and Cryptography DOI: 10.1007/s10623-012-9718-y Published: 01/01/2014 Document Version

More information

Maximal Noise in Interactive Communication over Erasure Channels and Channels with Feedback

Maximal Noise in Interactive Communication over Erasure Channels and Channels with Feedback Maximal Noise in Interactive Communication over Erasure Channels and Channels with Feedback Klim Efremenko UC Berkeley klimefrem@gmail.com Ran Gelles Princeton University rgelles@cs.princeton.edu Bernhard

More information

Non-Interactive Zero Knowledge (II)

Non-Interactive Zero Knowledge (II) Non-Interactive Zero Knowledge (II) CS 601.442/642 Modern Cryptography Fall 2017 S 601.442/642 Modern CryptographyNon-Interactive Zero Knowledge (II) Fall 2017 1 / 18 NIZKs for NP: Roadmap Last-time: Transformation

More information

Fundamental Limits of Invisible Flow Fingerprinting

Fundamental Limits of Invisible Flow Fingerprinting Fundamental Limits of Invisible Flow Fingerprinting Ramin Soltani, Dennis Goeckel, Don Towsley, and Amir Houmansadr Electrical and Computer Engineering Department, University of Massachusetts, Amherst,

More information

CPSC 467b: Cryptography and Computer Security

CPSC 467b: Cryptography and Computer Security CPSC 467b: Cryptography and Computer Security Michael J. Fischer Lecture 9 February 6, 2012 CPSC 467b, Lecture 9 1/53 Euler s Theorem Generating RSA Modulus Finding primes by guess and check Density of

More information

CPSC 467: Cryptography and Computer Security

CPSC 467: Cryptography and Computer Security CPSC 467: Cryptography and Computer Security Michael J. Fischer Lecture 14 October 16, 2013 CPSC 467, Lecture 14 1/45 Message Digest / Cryptographic Hash Functions Hash Function Constructions Extending

More information

Linear Secret-Sharing Schemes for Forbidden Graph Access Structures

Linear Secret-Sharing Schemes for Forbidden Graph Access Structures Linear Secret-Sharing Schemes for Forbidden Graph Access Structures Amos Beimel 1, Oriol Farràs 2, Yuval Mintz 1, and Naty Peter 1 1 Ben Gurion University of the Negev, Be er Sheva, Israel 2 Universitat

More information

Near-Optimal Secret Sharing and Error Correcting Codes in AC 0

Near-Optimal Secret Sharing and Error Correcting Codes in AC 0 Near-Optimal Secret Sharing and Error Correcting Codes in AC 0 Kuan Cheng Yuval Ishai Xin Li December 18, 2017 Abstract We study the question of minimizing the computational complexity of (robust) secret

More information

Lecture 1 : Data Compression and Entropy

Lecture 1 : Data Compression and Entropy CPS290: Algorithmic Foundations of Data Science January 8, 207 Lecture : Data Compression and Entropy Lecturer: Kamesh Munagala Scribe: Kamesh Munagala In this lecture, we will study a simple model for

More information

Benny Pinkas Bar Ilan University

Benny Pinkas Bar Ilan University Winter School on Bar-Ilan University, Israel 30/1/2011-1/2/2011 Bar-Ilan University Benny Pinkas Bar Ilan University 1 Extending OT [IKNP] Is fully simulatable Depends on a non-standard security assumption

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

RSA RSA public key cryptosystem

RSA RSA public key cryptosystem RSA 1 RSA As we have seen, the security of most cipher systems rests on the users keeping secret a special key, for anyone possessing the key can encrypt and/or decrypt the messages sent between them.

More information

CS 361 Meeting 26 11/10/17

CS 361 Meeting 26 11/10/17 CS 361 Meeting 26 11/10/17 1. Homework 8 due Announcements A Recognizable, but Undecidable Language 1. Last class, I presented a brief, somewhat inscrutable proof that the language A BT M = { M w M is

More information

David Chaum s Voter Verification using Encrypted Paper Receipts

David Chaum s Voter Verification using Encrypted Paper Receipts David Chaum s Voter Verification using Encrypted Paper Receipts Poorvi Vora In this document, we provide an exposition of David Chaum s voter verification method [1] that uses encrypted paper receipts.

More information

A Survey of Broadcast Encryption

A Survey of Broadcast Encryption A Survey of Broadcast Encryption Jeremy Horwitz 13 January 2003 Abstract Broadcast encryption is the problem of a sending an encrypted message to a large user base such that the message can only be decrypted

More information

Learning convex bodies is hard

Learning convex bodies is hard Learning convex bodies is hard Navin Goyal Microsoft Research India navingo@microsoft.com Luis Rademacher Georgia Tech lrademac@cc.gatech.edu Abstract We show that learning a convex body in R d, given

More information

Locally Decodable Codes

Locally Decodable Codes Foundations and Trends R in sample Vol. xx, No xx (xxxx) 1 114 c xxxx xxxxxxxxx DOI: xxxxxx Locally Decodable Codes Sergey Yekhanin 1 1 Microsoft Research Silicon Valley, 1065 La Avenida, Mountain View,

More information

Lecture 11: Key Agreement

Lecture 11: Key Agreement Introduction to Cryptography 02/22/2018 Lecture 11: Key Agreement Instructor: Vipul Goyal Scribe: Francisco Maturana 1 Hardness Assumptions In order to prove the security of cryptographic primitives, we

More information

Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding

Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding Mohsen Ghaffari MIT ghaffari@mit.edu Bernhard Haeupler Microsoft Research haeupler@cs.cmu.edu Abstract We study coding schemes

More information

Anticollusion solutions for asymmetric fingerprinting protocols based on client side embedding

Anticollusion solutions for asymmetric fingerprinting protocols based on client side embedding Bianchi et al. EURASIP Journal on Information Security (2015) 2015:6 DOI 10.1186/s13635-015-0023-y RESEARCH Open Access Anticollusion solutions for asymmetric fingerprinting protocols based on client side

More information

Homework 7 Solutions

Homework 7 Solutions Homework 7 Solutions Due: March 22, 2018 CS 151: Intro. to Cryptography and Computer Security 1 Fun with PRFs a. F a s = F 0 k(x) F s (x) is not a PRF, for any choice of F. Consider a distinguisher D a

More information

Signatures and DLP-I. Tanja Lange Technische Universiteit Eindhoven

Signatures and DLP-I. Tanja Lange Technische Universiteit Eindhoven Signatures and DLP-I Tanja Lange Technische Universiteit Eindhoven How to compute ap Use binary representation of a to compute a(x; Y ) in blog 2 ac doublings and at most that many additions. E.g. a =

More information

Lecture 7 September 24

Lecture 7 September 24 EECS 11: Coding for Digital Communication and Beyond Fall 013 Lecture 7 September 4 Lecturer: Anant Sahai Scribe: Ankush Gupta 7.1 Overview This lecture introduces affine and linear codes. Orthogonal signalling

More information

Theory of Computer Science to Msc Students, Spring Lecture 2

Theory of Computer Science to Msc Students, Spring Lecture 2 Theory of Computer Science to Msc Students, Spring 2007 Lecture 2 Lecturer: Dorit Aharonov Scribe: Bar Shalem and Amitai Gilad Revised: Shahar Dobzinski, March 2007 1 BPP and NP The theory of computer

More information

Lecture Notes 20: Zero-Knowledge Proofs

Lecture Notes 20: Zero-Knowledge Proofs CS 127/CSCI E-127: Introduction to Cryptography Prof. Salil Vadhan Fall 2013 Lecture Notes 20: Zero-Knowledge Proofs Reading. Katz-Lindell Ÿ14.6.0-14.6.4,14.7 1 Interactive Proofs Motivation: how can parties

More information

Secure Multiparty Computation from Graph Colouring

Secure Multiparty Computation from Graph Colouring Secure Multiparty Computation from Graph Colouring Ron Steinfeld Monash University July 2012 Ron Steinfeld Secure Multiparty Computation from Graph Colouring July 2012 1/34 Acknowledgements Based on joint

More information

A FRAMEWORK FOR UNCONDITIONALLY SECURE PUBLIC-KEY ENCRYPTION (WITH POSSIBLE DECRYPTION ERRORS)

A FRAMEWORK FOR UNCONDITIONALLY SECURE PUBLIC-KEY ENCRYPTION (WITH POSSIBLE DECRYPTION ERRORS) A FRAMEWORK FOR UNCONDITIONALLY SECURE PUBLIC-KEY ENCRYPTION (WITH POSSIBLE DECRYPTION ERRORS) MARIYA BESSONOV, DIMA GRIGORIEV, AND VLADIMIR SHPILRAIN ABSTRACT. We offer a public-key encryption protocol

More information

CPSC 467: Cryptography and Computer Security

CPSC 467: Cryptography and Computer Security CPSC 467: Cryptography and Computer Security Michael J. Fischer Lecture 18 November 6, 2017 CPSC 467, Lecture 18 1/52 Authentication While Preventing Impersonation Challenge-response authentication protocols

More information

Computational security & Private key encryption

Computational security & Private key encryption Computational security & Private key encryption Emma Arfelt Stud. BSc. Software Development Frederik Madsen Stud. MSc. Software Development March 2017 Recap Perfect Secrecy Perfect indistinguishability

More information

Stream ciphers I. Thomas Johansson. May 16, Dept. of EIT, Lund University, P.O. Box 118, Lund, Sweden

Stream ciphers I. Thomas Johansson. May 16, Dept. of EIT, Lund University, P.O. Box 118, Lund, Sweden Dept. of EIT, Lund University, P.O. Box 118, 221 00 Lund, Sweden thomas@eit.lth.se May 16, 2011 Outline: Introduction to stream ciphers Distinguishers Basic constructions of distinguishers Various types

More information

Lecture 38: Secure Multi-party Computation MPC

Lecture 38: Secure Multi-party Computation MPC Lecture 38: Secure Multi-party Computation Problem Statement I Suppose Alice has private input x, and Bob has private input y Alice and Bob are interested in computing z = f (x, y) such that each party

More information

Information Hiding and Covert Communication

Information Hiding and Covert Communication Information Hiding and Covert Communication Andrew Ker adk @ comlab.ox.ac.uk Royal Society University Research Fellow Oxford University Computing Laboratory Foundations of Security Analysis and Design

More information

T Cryptography: Special Topics. February 24 th, Fuzzy Extractors: Generating Strong Keys From Noisy Data.

T Cryptography: Special Topics. February 24 th, Fuzzy Extractors: Generating Strong Keys From Noisy Data. February 24 th, 2005 Fuzzy Extractors: Generating Strong Keys From Noisy Data Helsinki University of Technology mkivihar@cc.hut.fi 1 Overview Motivation and introduction Preliminaries and notation General

More information

Course 2BA1: Trinity 2006 Section 9: Introduction to Number Theory and Cryptography

Course 2BA1: Trinity 2006 Section 9: Introduction to Number Theory and Cryptography Course 2BA1: Trinity 2006 Section 9: Introduction to Number Theory and Cryptography David R. Wilkins Copyright c David R. Wilkins 2006 Contents 9 Introduction to Number Theory and Cryptography 1 9.1 Subgroups

More information

Answering n^{2+o(1)} Counting Queries with Differential Privacy is Hard

Answering n^{2+o(1)} Counting Queries with Differential Privacy is Hard Answering n^{2+o(1)} Counting Queries with Differential ivacy is Hard Jonathan Ullman Harvard University ullman@seas.harvard.edu ABSTRACT A central problem in differentially private data analysis is how

More information

Fingerprinting Codes and the Price of Approximate Differential Privacy

Fingerprinting Codes and the Price of Approximate Differential Privacy 2014 ACM Symposium on the on on the Theory of of of Computing Fingerprinting Codes and the ice of Approximate Differential ivacy Mark Bun Harvard University SEAS mbun@seas.harvard.edu Jonathan Ullman Harvard

More information

Inaccessible Entropy and its Applications. 1 Review: Psedorandom Generators from One-Way Functions

Inaccessible Entropy and its Applications. 1 Review: Psedorandom Generators from One-Way Functions Columbia University - Crypto Reading Group Apr 27, 2011 Inaccessible Entropy and its Applications Igor Carboni Oliveira We summarize the constructions of PRGs from OWFs discussed so far and introduce the

More information