Improved Versions of Tardos Fingerprinting Scheme


The Open University of Israel
The Department of Mathematics and Computer Science

Improved Versions of Tardos Fingerprinting Scheme

Thesis submitted as partial fulfillment of the requirements towards an M.Sc. degree in Computer Science

By Oded Blayer

Prepared under the supervision of Dr. Tamir Tassa

February 2007

Abstract

We examine Tardos' optimal probabilistic fingerprinting scheme and significantly improve its performance. In addition, we improve Tardos' proofs, which applied only to coalitions of size at least four, so that they cover coalitions of any size. The examination is done by investigating the constant parameters that were used in the construction of the scheme in Tardos' paper. We formulate the constraints that those parameters have to satisfy and analytically find the values that yield a minimal codeword length. This analysis enables us to shorten the codeword length by a factor of approximately 4. The last part of our work examines how well Tardos' scheme performs in practice. Simulations of codeword creation, pirate collusion and the accusation process are used to determine the actual codeword length that provides a required level of security. The thesis begins with an overview of traitor tracing schemes and fingerprinting codes and summarizes the works of Chor, Fiat and Naor [2], Boneh and Shaw [1], Cox, Kilian, Leighton and Shamoon [3], and Gabor Tardos [6, 7].

Contents

1 Introduction
2 Traitor Tracing Framework
  2.1 The framework
  2.2 Generation assumptions
    2.2.1 Logarithm tables
    2.2.2 Pay-TV decoders
    2.2.3 Fingerprinting digital data
3 A Simple Traitor Tracing Scheme Due to Chor, Fiat and Naor
4 Boneh and Shaw Framework
  4.1 Collusion secure codes
  4.2 A composition scheme
5 Cox, Kilian, Leighton and Shamoon Image Watermarking Method
  5.1 The watermarking procedure
  5.2 Evaluating the similarities of watermarks
6 An Optimal Binary Watermarking: The Tardos Fingerprinting Code
  6.1 Phase 1: Creating the codebook
  6.2 Phase 2: Accusation
  6.3 Properties of the code
  6.4 Our contribution
7 Analysis of Tardos Scheme
  7.1 Quadratic upper bounds on the exponential function
  7.2 Why the innocent is not accused
  7.3 Why some guilty is accused
  7.4 Optimizing the constants
    7.4.1 Minimizing d_m
    7.4.2 Minimizing d_t
    7.4.3 Minimizing d_z
    7.4.4 Optimizing d_α
  7.5 Conclusion
8 Simulation
  8.1 The setup
  8.2 Pirate codeword generation strategies
  8.3 Results
9 Appendix A: The optimal value finder
10 Appendix B: The scheme tester
  The code book generator
  The pirate tool
  Helper classes

1 Introduction

Digital data is growing in popularity. Data representing anything, from motion pictures to the design of spaceships, is easily created using a wide variety of tools. This data can be duplicated in perfect quality, easily stored on optical and magnetic media, and seamlessly distributed over the Web. However, this situation does have drawbacks. Some digital data is private, secret or copyrighted. The fact that the data can be perfectly cloned and distributed makes the job of those who protect this data very hard. As a result of the popularity of digital data, there is an increasing need to protect it from being forged. Digital watermarking was introduced in order to protect such data. Watermarking is a method that was developed as early as the 13th century by paper makers to identify their product. The original idea of watermarking is to stamp the material (mostly paper) in a way that makes the source of the paper explicitly identifiable. One of the most common forms of digital data is movies, including all kinds of digital video such as motion pictures, TV broadcasts, etc. Currently, movies are forged by the millions, in disregard of copyright laws. Many legal struggles take place all over the world between the copyright owners and the people responsible for the illegal distribution of copyrighted material. This work concentrates on the methods available for watermarking digital representations of all kinds of movies. Digital watermarks are generally divided into two kinds: visible and invisible. A visible watermark should be easily noticeable by the viewer of the movie, similar to the broadcasting company icon in the corner of the TV screen. An invisible watermark, on the other hand, should not change the movie to a perceptually noticeable extent. Digital watermarking is done for one of two goals: 1. Identifying the origin of the movie and ensuring originality. 2. Tracking down the source of a forged movie.
Once a forged movie is acquired, the watermark can be extracted using various analysis tools, depending on the watermark kind, and the source of the forged movie can be found. The watermarks used for the first goal are meant for everybody to see, so the watermark should be visible and easily identifiable. The second goal is better achieved using invisible watermarks that do not change the perception of the movie by the viewer. In this study we concentrate on watermarking for the second goal, namely watermarks that are designed for the protection of movies from forgery. Movie forgery is done by a user or a group of users that we call the pirate. The motivation of this pirate is usually to steal the movie and distribute it illegally. Because a digital movie is easily duplicated, the pirate has little work to do. Watermarking is used in this context in order to create personalized versions of the movie, so that once an illegal copy of the movie is captured, the pirate may be traced and incriminated. To ensure that the pirate is not able to evade tracing, the watermark must be very robust; it should be impossible to remove a watermark from the marked movie. Moreover, the watermark should sustain all kinds of digital video manipulations, such as scaling and compression. Achieving those properties for digital watermarking is not an easy task. The creation of digital watermarks is a well studied and exercised area. Many methods exist for watermarking movies and other

types of digital data. This work will present a method for watermarking images, developed by Cox et al. [3]. A digital representation of a movie can be watermarked by breaking the movie into fragments and then watermarking each of the fragments separately, using the techniques of Cox et al. This way, we can create distinct personalized copies of a given movie. Having the digital data watermarked does not prevent the pirate from duplicating and distributing the movie; however, the watermarks help us track down the origin of the duplicated digital movie. Suppose that a pirate wants to distribute a movie that was watermarked in fragments as described above. Duplicating the movie and distributing it as-is would make it easy to find the original copy, since it would carry the same watermark as the duplication source. So the pirate is faced with two options. The first option is to deal with the watermark of each fragment and attempt to remove it or blur it without harming the quality of the movie. The second option is to get several original copies, each one watermarked with a different and unique watermark, and create a new copy composed of fragments from several original copies. This pirate-created copy will have a new watermark that is composed of fragments of the original watermarks. The term collusion is used to describe this operation. When a pirate creates a movie by collusion, there are many combinations available, and each combination will yield a different watermark. A good watermarking method will trace the pirate regardless of the combination selected. This is the combinatorial problem that we will focus on in this work, namely, how to trace the pirate when we know only the original watermarks and the watermark produced by the pirate. Several aspects of digital watermarking have been studied profoundly in recent years.
One of the studied areas is watermarking data that is dynamically distributed, like Pay-TV or on-line presentations and conferences. This setting enables techniques that are more efficient than those that can be implemented for static watermarking, in which all of the data is watermarked only once, prior to distribution. Although there is a lot in common between dynamic and static watermarking, this work concentrates on the static aspect of digital watermarking. Even though the main motivation of this work is movies and videos, the techniques apply to all sorts of digital data that can be fragmented, with the fragments watermarked independently.

Structure of this work. This work begins with an introduction to the field of traitor tracing schemes and fingerprinting codes. In Section 2 we introduce the main definitions and notations of the traitor tracing framework. Next, two such schemes are described. A traitor tracing scheme due to Chor, Fiat and Naor [2] is described in Section 3. Then, in Section 4, we discuss the fingerprinting code of Boneh and Shaw [1]. Section 5 reviews a robust image watermarking method, due to Cox, Kilian, Leighton and Shamoon [3], that can be used to watermark movies. Then, in Section 6 we describe the optimal probabilistic fingerprinting code of Gabor Tardos [6]. We analyze the scheme and the parameters that it utilizes in Section 7 in order to determine the parameter values for which the codeword length is minimal. Finally, in Section 8 we describe a method of simulating Tardos' code. We use that method in order to assess the performance of Tardos' framework in practice, and show that one may use even shorter codewords in order to achieve a predetermined level of security.

2 Traitor Tracing Framework

This section gives a more formal definition of this work's main problem. The definitions are taken mostly from Tassa [8]. The framework addresses a coding-theoretic problem: tracing an origin of a word that was generated by collusion. It leaves aside the methods of embedding those codes in the digital data and extracting them from it.

2.1 The framework

Let U = {u_1, ..., u_n} denote a set of users and let T = {t_1, ..., t_c} ⊆ U be the subset of traitorous users. T is called the pirate, while its members are referred to as traitors. An algorithm that aims at locating the traitors is called a traitor tracing scheme. Such a scheme consists of the following ingredients:

- A marking alphabet Σ. The alphabet size, r = |Σ|, depends on the kind of digital data and on the marking method.
- A codebook Γ containing n distinct words of length l over Σ, Γ = {w_1, ..., w_n} ⊆ Σ^l. Each word in the codebook is unique and designated to one user.
- A personalization scheme P : U → Γ that determines how to mark the data that is provided to a particular user with a codeword from the codebook.
- A generation assumption, which defines what code modifications are possible. The traitors own copies carrying the codewords P(t_1), ..., P(t_c); we let P(T) denote the set of codewords that could be generated by the pirate and be placed in the pirate copy. There are different generation assumptions, some stricter than others. Section 2.2 will describe a few of them.
- A tracing algorithm σ that, given a pirate copy, aims at tracing back (at least) one traitor that collaborated in producing that copy. This algorithm may therefore be viewed as a function σ : P(T) → 2^U. Obviously, a desired property is that whenever y ∈ P(T), then σ(y) ≠ ∅ (namely, the tracing algorithm has found some suspect users) and σ(y) ⊆ T (i.e., all traced users are indeed traitors).
However, this is not always the case, as indicated by the next ingredient.

- An upper bound ε ≥ 0 on the probability of a failure. Namely, if y ∈ P(T), then Pr[σ(y) = ∅] ≤ ε and Pr[σ(y) ∩ (U \ T) ≠ ∅] ≤ ε.

Traitor tracing schemes in which ε = 0 are called deterministic, while those which may fail with some small probability are called probabilistic. This work will focus on probabilistic schemes. The following definition extends the notion of ε-security, as it appears in [1, 6].

Definition 2.1 A fingerprinting scheme is called (ε, ε̂)-secure against coalitions of size c if for any T ⊆ U of size |T| ≤ c, and for every y ∈ P(T), the following two properties hold:

Pr(σ(y) ∩ (U \ T) ≠ ∅) ≤ ε, (1)

and

Pr(σ(y) ∩ T = ∅) ≤ ε̂. (2)

In case ε̂ = ε, the scheme is called ε-secure against coalitions of size c. The next section presents several generation assumptions.

2.2 Generation assumptions

The generation assumption defines the set of codewords that can be generated by the pirate. We discuss here three different settings and the generation assumption in each one of them.

2.2.1 Logarithm tables

In earlier days, logarithm tables were used by scientists, especially astronomers, to perform complex calculations. The logarithm tables contained the logarithm values of natural numbers. To make sure that the tables were not forged, tiny errors were introduced in the least significant digits of a few logarithm values. Figure 1 displays an example. The example uses a four-digit watermark to mark the logarithm table of the natural numbers smaller than 10. Notice that the third digit of the two watermarks is identical (7), so having both logarithm tables, the pirate can detect only three of the four watermarked spots. If a pirate purchased two logarithm tables, he would get two different codewords. By comparing them he can easily find out which spots are used for watermarking (the spots in which the numbers are different). Notice that he might not be able to find all watermarking spots, but he will surely find at least one. After the pirate finds the spots, he can assign arbitrary digits to these spots, enabling him to create new watermarks. Let R = {j : P(t_1)_j = ... = P(t_c)_j} denote the set of watermarking spots that are undetectable by the pirate T. Hence, in this setting P(T) = {w ∈ Σ^l s.t. w|_R = P(t_1)|_R}.
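In Python, this generation assumption can be sketched as follows (a minimal illustration; the function names and the two example watermarks below are ours, not taken from Figure 1):

```python
def undetectable_spots(traitor_words):
    """R: the spots where all traitor codewords agree, so the pirate
    cannot even detect that a mark is present there."""
    length = len(traitor_words[0])
    return {j for j in range(length) if len({w[j] for w in traitor_words}) == 1}

def in_feasible_set(candidate, traitor_words):
    """Membership test for P(T): the pirate copy must agree with the
    traitors' common symbols on R; elsewhere it is unconstrained."""
    R = undetectable_spots(traitor_words)
    return all(candidate[j] == traitor_words[0][j] for j in R)
```

For two hypothetical four-digit watermarks that agree only in their third digit, any pirate word must preserve that digit, and is free everywhere else.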
In the example from Figure 1, the generation assumption forces a pirate holding the two watermarked log tables to put 7 as the third digit of the watermark, since in both watermarks the third digit is 7.

2.2.2 Pay-TV decoders

Chor, Fiat and Naor [2] considered a Pay-TV setting, where Conditional Access techniques are implemented in order to deny non-paying users access to content. In order to achieve that goal, the stream of content that is broadcast from the center is encrypted, and each paying user is given a decoder with embedded personal keys that are used to decode session keys, which in turn are used to decrypt the actual transmission. As such decoders are merely tamper-resistant,

[Figure 1: Logarithm table watermarking example]

and not tamper-proof, highly motivated and capable users may read the secret keys from their own decoder in order to manufacture illegal pirate decoders and start their own business. Hence, in order for the center to be able to trace the source of such piracy, the personal keys are marked in some manner. The following is a basic technique for marking keys. The center determines two parameters, r (the size of the marking alphabet) and l (the length of codewords), in a manner that will be explained later. It then generates r·l random and independent encryption keys, k(i, j), 1 ≤ i ≤ r, 1 ≤ j ≤ l. Next, each personal decoder is provided with a unique selection of l of those keys, one from each column. Namely, denoting Σ = {1, ..., r}, the center employs a personalization function P : U → Γ, and then each user u ∈ U gets the following personal key: k(u) = {k_{i(j),j} : 1 ≤ j ≤ l, i(j) = P(u)_j}. Figure 2 illustrates the construction of three personal keys of length l using an alphabet of ten marks. In each location one of the ten marks is selected. In the illustration, the marks of k(u_1), k(u_2) and k(u_3) are denoted by green, red and blue, respectively. Let s be the secret that needs to be communicated to all paying users (e.g., s is the session key to be used throughout the next time period in order to encrypt the transmitted material). First, we break s in a random manner into l parts, s = s_1 ⊕ ... ⊕ s_l, where ⊕ denotes XOR. Then, each share s_j is encrypted r times, once with each of the keys k_{i,j}, 1 ≤ i ≤ r. The r·l encrypted messages {E(s_j, k_{i,j}) : 1 ≤ i ≤ r, 1 ≤ j ≤ l} are broadcast. Finally, since each user knows exactly one key k_{i,j} for each j, he can decrypt one of the r messages that contain s_j, for all j, and therefore recover the secret s. Assume that T = {t_1, ..., t_c} ⊆ U is a coalition of traitors. Then the arsenal of keys that they

[Figure 2: Personal key construction]

have at their disposal is K = ∪_{1≤j≤l} K_j, where K_j = {k_{i,j} : i ∈ P_j := {P(t_1)_j, ..., P(t_c)_j}}. They could manufacture a fully functional decoder if they install in it one key from K_j, for all j. Namely, the set of all selections of l keys that the pirate may make corresponds to the subset of codewords

P(T) = P_1 × ... × P_l. (3)

An example of such a selection of keys for the pirate decoder is shown in Figure 3. A coalition of the three users can generate the codeword marked by the dotted pattern. Notice that in some of the spots the coalition has only one digit possible for selection. This assumption is indeed the correct generation assumption in this context, since if in the jth column the traitors together have, say, only two keys, k_{i1,j} and k_{i2,j}, they have no choice other than selecting one of those two keys and placing it in the pirate decoder. They could not remove those keys altogether, for then the decoder would not be able to recover s_j; they could not deduce from those two keys the value of any other key from the jth column; and they could not combine those keys in any manner that would yield a different key that would be of any help in trying to decrypt the messages transmitting s_j.

2.2.3 Fingerprinting digital data

Consider a data provider that sells viewing rights for the latest digital features. Since digital material may be cloned many times without experiencing quality degradation, immoral users that paid for their copies might redistribute such copies for a bargain price on the black market. In order to deter such users, and to be able to trace them in case they do exercise such piracy, Boneh and Shaw [1] suggested a fingerprinting technique. As in any static watermarking framework, the original data is personalized prior to distribution. The movie V is broken up into l short segments, V = V_1 ∥ ... ∥ V_l, where ∥ denotes concatenation, and for each segment V_j, r

[Figure 3: Pirate key construction under the CFN generation assumption]

almost-identical variants are generated, V_j → {V_j^1, ..., V_j^r}, 1 ≤ j ≤ l. As in the previous example, we let Σ = {1, ..., r} and the personalization function is of the same sort, P : U → Γ. If user u ∈ U was assigned the codeword P(u), then he will get the following version of the movie: V(u) = V_1^{i(1)} ∥ ... ∥ V_l^{i(l)}, where i(j) = P(u)_j, 1 ≤ j ≤ l. Boneh and Shaw used a stronger generation assumption than Chor et al. They assumed that in segments where the embedded mark is detectable by the pirate, namely, a segment j where the pirate owns at least two different variants, the pirate could reproduce any of the r different variants of that segment, or render that mark unreadable. Letting R = {j : P(t_1)_j = ... = P(t_c)_j} denote the set of watermarking spots that are undetectable for the pirate T, this assumption implies that

P(T) = {w ∈ (Σ ∪ {?})^l s.t. w|_R = P(t_1)|_R}, (4)

where ? denotes an unreadable mark. Boneh and Shaw referred to P(T) as the feasible set of T. Figure 4 illustrates this generation assumption. The given three-user coalition under Boneh and Shaw's generation assumption can generate the codeword 29?9...?636, as marked by dots. Notice that in the undetectable spot (position 4) the codeword must have the value 9. Boneh and Shaw concentrated on the case r = 2 and designed watermarking schemes that are capable of tracing at least one of the colluding traitors that participated in producing the pirate version, with an error probability as small as desired. Going back to the generation assumption (4), we note that in the binary case r = 2 it becomes quite similar to the previous assumption (3), the only difference being that (4) allows the complete removal of detectable marks. However, in the binary case a detectable mark occurs in segments where the pirate has all r = 2 variants.
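A sampler for the feasible set (4) can be sketched as follows (an illustrative Python sketch of ours; under the Boneh-Shaw assumption a detectable spot may receive any of the r variants or be made unreadable, which we denote by '?'):

```python
import random

def bs_pirate_word(traitor_words, alphabet, rng):
    """Sample one word from the Boneh-Shaw feasible set of the coalition:
    undetectable spots are forced to the traitors' common symbol; in every
    detectable spot the pirate may emit any alphabet symbol or '?'."""
    word = []
    for j in range(len(traitor_words[0])):
        variants = {w[j] for w in traitor_words}
        if len(variants) == 1:
            word.append(traitor_words[0][j])         # undetectable spot: forced
        else:
            word.append(rng.choice(list(alphabet) + ["?"]))
    return word
```

Restricting the last branch to `variants` only (no '?') yields the weaker assumption (3) instead.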
In that case there is not much point in the pirate removing the mark (an operation that usually damages the quality of the produced copy); instead, he could

[Figure 4: Pirate key construction under Boneh and Shaw's generation assumption]

pick any of the r = 2 variants that he has. So when r = 2 we can change the generation assumption to

P(T) = {w ∈ {0, 1}^l s.t. w|_R = P(t_1)|_R}. (5)

Indeed, in accord with this observation, the Boneh-Shaw scheme sets unreadable marks arbitrarily to zero. That entirely closes the gap between their generation assumption (4) and the previous one (3). We would like to note that when r > 2, assumption (4) is quite far-reaching in the context of digital video. For example, Cox et al. [3] have introduced methods to generate secure video watermarks; see Section 5. Those watermarks could not be easily removed unless the pirate edits out the entire segment. In addition, if the pirate got at some segment b distinct variants, 1 < b < r, he could not reproduce from them any of the remaining r − b variants for that segment. Hence, when such secure watermarks are used, the strong generation assumption (4) may be relaxed to (3). Another relaxation is possible, by allowing the pirate to cut off at most 5% of the video segments. This way we make sure that most of the codeword can still be parsed from the pirate copy. As a concluding remark on the generation assumption (4), we note that it is the same assumption as that of the logarithm tables described above, except for the option to render certain marks unreadable (marked as ? in (4)).

3 A Simple Traitor Tracing Scheme Due to Chor, Fiat and Naor

Consider the Pay-TV decoder scheme described in Section 2.2.2. Chor, Fiat and Naor [2] described a traitor tracing scheme that is ε-secure against coalitions of size c, for given parameters ε and c. The scheme is as follows: The alphabet is Σ = {1, ..., r}. The center creates n codewords, one for each user. The generation of codewords is done by independently choosing l random hash functions h_j : {1, ..., n} → {1, ..., r}, 1 ≤ j ≤ l.
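This codeword generation, together with the majority tracing described next and the parameter choice of Theorem 3.2 below, can be sketched in Python (an illustrative sketch of ours, with 0-based symbols; the pirate strategy shown, copying each mark from a uniformly chosen traitor, is just one admissible strategy under the generation assumption):

```python
import math
import random

def cfn_codebook(n, l, r, rng):
    """Column j realizes a random hash h_j: {0..n-1} -> {0..r-1};
    user i's codeword is h_1(i) ... h_l(i)."""
    return [[rng.randrange(r) for _ in range(l)] for _ in range(n)]

def cfn_trace(codebook, p):
    """Majority tracing: accuse a user whose codeword agrees with p
    in the maximal number of positions (ties broken by lowest index)."""
    scores = [sum(a == b for a, b in zip(w, p)) for w in codebook]
    return scores.index(max(scores))

def cfn_parameters(n, c, eps):
    """Parameter choice of Theorem 3.2: r = 2c and l = 2r*log2(n/eps)."""
    r = 2 * c
    return r, math.ceil(2 * r * math.log2(n / eps))

def collude(codebook, traitors, rng):
    """One admissible pirate word: each mark copied from a random traitor."""
    return [codebook[rng.choice(traitors)][j] for j in range(len(codebook[0]))]
```

By Lemma 3.1 below, some traitor always matches the pirate word in at least l/c positions, which is exactly what the majority tracing exploits.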

Then Γ = {w_i = h_1(i) ··· h_l(i) : 1 ≤ i ≤ n}. The personalization scheme is P(u_i) = w_i for each u_i ∈ U = {u_1, ..., u_n}. The generation assumption is the one described in Section 2.2.2, P(T) = P_1 × ... × P_l, where P_j := {P(t_1)_j, ..., P(t_c)_j}. The tracing in this case is done by majority. Let p be the pirate-generated codeword. We know that p ∈ P(T). The algorithm traces the user with the maximal number of matches with p. More formally, we define a matching score S : U → {0, 1, ..., l} as follows:

S(u_i) = Σ_{j=1}^{l} δ(P(u_i)_j, p_j), where δ(x, y) = 1 if x = y and 0 otherwise.

The user with the maximal matching score (ties broken arbitrarily) is marked as a traitor. Chor et al. introduced two kinds of schemes: the open scheme, where the codewords of the users are publicly known; and the closed scheme, where the codewords are kept secret. Here we will concentrate on closed schemes, where the center keeps the list of codewords secret. This makes it harder for the pirate, since he does not know how to incriminate other users that are not in the traitor group.

Lemma 3.1 Let T = {t_1, ..., t_c} ⊆ U be the pirate and let p ∈ P(T) be a pirate codeword generated according to the generation assumption. Then there exists a user u ∈ T for whom S(u) ≥ l/c.

Proof. By the generation assumption we know that each mark of the codeword p originates from one of the codewords in P(T). This means that for every 1 ≤ j ≤ l there must be a user u_i ∈ T such that P(u_i)_j = p_j. Hence, the average number of matches between the members of T and p is at least l/c, so there must be a user u ∈ T for which S(u) ≥ l/c.

Theorem 3.2 Let σ : P(T) → U be the majority algorithm described above and let p ∈ P(T) be the pirate-generated codeword. Then the probability (over all choices of hash functions) of false incriminations can be made arbitrarily small by selecting r and l. More specifically, let c be an upper bound on the size of the coalition, and assume that r = 2c.
Then by letting l = 2r·log_2(n/ε) we get Pr[σ(p) ∈ U \ T] ≤ ε.

Proof. Consider an innocent user, u_i ∈ U \ T. Since the hash functions are selected independently at random, the value h_j(i) is uniformly distributed in Σ, meaning that Pr[h_j(i) = p_j] = 1/r for all 1 ≤ j ≤ l. Define random variables x_j = δ(h_j(i), p_j). Then the matching score of u_i is given by S(u_i) = Σ_{j=1}^{l} x_j. It follows that

µ = E[S(u_i)] = E[Σ_{j=1}^{l} x_j] = Σ_{j=1}^{l} E[x_j] = l/r.

Next, we invoke the Chernoff bound to bound the probability that a user will be falsely incriminated.

Chernoff Bound [5] Let X_1, X_2, ..., X_n be independent Poisson trials such that, for 1 ≤ i ≤ n, Pr[X_i = 1] = p_i, where 0 < p_i < 1. Then, for X = Σ_{i=1}^{n} X_i, µ = E[X] = Σ_{i=1}^{n} p_i, and any δ > 0,

Pr[X > (1 + δ)µ] < [e^δ / (1 + δ)^{1+δ}]^µ.

In view of the above, the probability that u_i will be falsely incriminated is bounded by

Pr[S(u_i) > l/c] = Pr[S(u_i) > µ·(r/c)].

As, by our assumption, r = 2c, we infer that

Pr[S(u_i) > µ·(r/c)] = Pr[S(u_i) > 2µ].

Using the Chernoff bound with δ = 1, we get that

Pr[S(u_i) > 2µ] < (e/4)^µ = (e/4)^{l/r} < 2^{−l/(2r)} = ε/n.

Hence, by the union bound over the n users, the probability that any of the innocent users will be falsely incriminated is bounded by ε. Notice that this framework does have drawbacks. The first and most important drawback is the fact that we are limiting the size of the pirate coalition c to be less than r/2. Another drawback is that the generation assumption is stricter than Boneh and Shaw's, in the sense that it does not allow the pirate to set unreadable watermarks. The bottom line is that this framework gives us a tracing framework with a simple tracing algorithm which produces codewords of length l = O(c·log(n/ε)). Chor et al. introduced a similar framework for the open distribution scheme, where all user codewords are publicly known. That framework yields codewords of the same length with the stricter limitation of r = 2c².

4 Boneh and Shaw Framework

4.1 Collusion secure codes

Boneh and Shaw [1] introduced a collusion secure code over a binary alphabet. Such binary codes are bound to be probabilistic, as implied by the following theorem due to Fiat and Tassa [4, Theorem 1].

Theorem 4.1 If a pirate controls c traitors then: (a) There exists a deterministic watermarking scheme with |Σ| = c + 1. (b) No watermarking scheme that uses an alphabet of size |Σ| ≤ c can be deterministic.
Hence, when using a binary scheme, there are no deterministic schemes that are secure against coalitions of size c ≥ 2. All such schemes will fail with some probability ε > 0. First, we describe a scheme that was proposed by Boneh and Shaw that is ε-secure against coalitions of any size. Namely, regardless of the size of the coalition, it traces at least one member of the coalition with probability at least 1 − ε.

The codebook is created in the following manner. First, we define the following words: w_i = 0^{(i−1)d} 1^{(n−i)d}, 1 ≤ i ≤ n, where the parameter d will be specified later on. All codewords are of length l = (n − 1)d, where the first (i − 1)d bits are 0s and the rest are 1s. For example, if n = 5 and d = 3 we get

w_1 : 111111111111
w_2 : 000111111111
w_3 : 000000111111
w_4 : 000000000111
w_5 : 000000000000

Note that there are exactly d bits that distinguish between two consecutive codewords. Next, we permute the bits of the codewords using a random and secret permutation π ∈ S_l and give user u_i the permuted codeword P(u_i) = π(w_i), whose kth bit is the π(k)th bit of w_i. Namely, the codebook is Γ = {π(w_i) : 1 ≤ i ≤ n}. For example, assume that the following permutation is selected: π = (11, 5, 4, 7, 3, 12, 10, 8, 6, 1, 2, 9). Under this permutation, the first bit of each permuted codeword is the eleventh bit of the original codeword, the second bit is the fifth bit, and so forth. The resulting codewords are:

π(w_1) : 111111111111
π(w_2) : 111101111001
π(w_3) : 100101110001
π(w_4) : 100001100000
π(w_5) : 000000000000

Therefore, in this example we get the codebook Γ = {π(w_i) : 1 ≤ i ≤ 5}. The generation assumption used by this framework was described in Section 2.2.3. Namely, P(T) = {w ∈ {0, 1}^l s.t. w|_R = P(t_1)|_R}, where R = {j : P(t_1)_j = ... = P(t_c)_j}. Before defining the tracing algorithm we introduce some notations:

1. B_i = {j : (i − 1)d < π(j) ≤ i·d} for 1 ≤ i ≤ n − 1 is the set of all bit positions in which the first i codewords have 1 and the other n − i codewords have 0. These are the bits separating π(w_i) from π(w_{i+1}). In our example, B_3 = {4, 8, 12}, because these are the positions of the bits that differ between π(w_3) and π(w_4).

2. R_s = B_{s−1} ∪ B_s for 2 ≤ s ≤ n − 1. In our example, R_3 = {2, 3, 4, 8, 9, 12}.

3. For a binary string x, weight(x) is the number of 1s in x, and weight(x|B_i) is the number of 1s in the bit positions B_i of x.

The intuition of the algorithm is as follows. Let u_i be an innocent user in U \ T. Each member of the coalition t_j ∈ T has exactly the same bit pattern in all bits of R_i (the pattern will be all 1s if j < i and all 0s if j > i). On the other hand, π(w_i) has 1s in all bits of B_{i−1} and 0s in all bits of B_i. In our example, R_2 = B_1 ∪ B_2 = {10, 11, 5} ∪ {3, 2, 9} = {2, 3, 5, 9, 10, 11}. Then only u_2 can distinguish between the different bit positions in R_2 (half of the bits of π(w_2)|R_2 will be 1s and the other half will be 0s). On the other hand, all bits of π(w_1)|R_2 are 1, while all bits of π(w_3)|R_2, π(w_4)|R_2 and π(w_5)|R_2 are 0. Even if the coalition can detect all bit positions of R_i, and can therefore mark each of those bit positions with either 0 or 1, the pirate cannot be sure which of the bit positions of R_i are part of B_i and which are part of B_{i−1}. Hence, whatever they choose to place in the bit positions of R_i, the 1-bits are expected to be distributed evenly between B_i and B_{i−1}. Consequently, the pirate-generated watermark, p, should have roughly the same number of 1s in bit positions B_i and in bit positions B_{i−1}, i.e., weight(p|B_i) ≈ weight(p|B_{i−1}). This gives rise to the idea of running a statistical test comparing weight(p|B_i) and weight(p|B_{i−1}) in order to test the hypothesis that u_i is a member of the coalition. Having a large enough d, if the difference in weights is non-negligible, we can conclude that with high probability u_i is a traitor. Letting p ∈ {0, 1}^l be the pirate-generated codeword, the tracing algorithm works as follows:

1. If weight(p|B_1) > 0 then output "u_1 is a traitor".
2. If weight(p|B_{n−1}) < d then output "u_n is a traitor".
3. For all s = 2 to n − 1 do: let g = weight(p|R_s).
If weight(p|B_{s−1}) < g/2 − √((g/2)·log(2n/ε)), then output "u_s is a traitor".

We proceed to prove the ε-security of this code against coalitions of size c. This is done in Lemma 4.2 and Theorem 4.3.

Lemma 4.2 Choosing d = 2n²·log(2n/ε) guarantees that Pr[σ(p) ∉ T] < ε.

Proof. Suppose that the algorithm outputs u_1 as a traitor. Then weight(p|B_1) > 0. The only codeword π(w_i) for which weight(π(w_i)|B_1) > 0 is π(w_1), which is the codeword personalized to u_1. Hence, the only way one of the bit locations of B_1 can be marked as 1 is if u_1 ∈ T. Similarly, if σ(p) = u_n then u_n must be a part of T, because for all other users u_i, 1 ≤ i ≤ n − 1, weight(π(w_i)|B_{n−1}) = d. Hence, in those cases where the algorithm outputs u_1 or u_n as traitors, the algorithm is deterministic. Next, we proceed to show that for 2 ≤ s ≤ n − 1 the probability of the algorithm to incriminate u_s by mistake is less than ε/n for each s, so by the union bound the probability that any user is falsely incriminated is less than ε.

Let u_s ∈ U \ T be a user that is not in the coalition. As discussed above, the coalition cannot distinguish between the bit positions of R_s (namely, it cannot tell which of them are in B_s and which are in B_{s−1}). Since the secret permutation π was chosen uniformly at random from the set of all permutations, the 1s in the bit locations of R_s can be regarded as being randomly distributed. Let g = weight(p|R_s) and define Y to be a random variable which counts the number of 1s in p|B_{s−1}. Also, define X to be a binomial random variable over g experiments with success probability 1/2. For any 0 ≤ r ≤ 2d we can show that Pr[Y = r] ≤ 2·Pr[X = r], and by the Chernoff bound presented above we can see that

Pr[Y < g/2 − a] ≤ 2·Pr[X < g/2 − a] < 2e^{−2a²/g}.

Assigning a = √((g/2)·log(2n/ε)) we get

Pr[Y < g/2 − √((g/2)·log(2n/ε))] ≤ 2e^{−log(2n/ε)} = ε/n.

Thus, if π(w_s) ∉ P(T), then the probability of the algorithm falsely identifying u_s as a traitor is at most ε/n. Therefore, the probability of any false incrimination by the algorithm is less than ε.

Theorem 4.3 Every time the algorithm runs, it outputs at least one user as a suspected traitor.

The proof is based on the following lemma:

Lemma 4.4 If σ(p) does not output a user, then for any 1 ≤ s ≤ n − 1, weight(p|B_s) ≤ 2s²·log(2n/ε).

Proof. The proof is done by induction on s. As we know that u_1 is not returned by the algorithm, we infer that weight(p|B_1) = 0 < 2·log(2n/ε). Hence, our statement holds for s = 1. Assume next that for some s < n − 1, weight(p|B_s) ≤ 2s²·log(2n/ε). We proceed to show that weight(p|B_{s+1}) ≤ 2(s + 1)²·log(2n/ε). Define x = weight(p|B_s); x' = weight(p|B_{s+1}); g = weight(p|R_{s+1}). The following hold, by definition and by the induction assumption:

g = x + x' (6)

x ≤ 2s²·log(2n/ε) (7)

Since no user is returned by the algorithm, u_{s+1} is not returned, which implies that

x ≥ g/2 − √((g/2)·log(2n/ε)). (8)

Joining (6) with inequalities (7) and (8) we get

x' = g − x ≤ g/2 + √((g/2)·log(2n/ε)) = (x + x')/2 + √(((x + x')/2)·log(2n/ε))

Equivalently, x′ ≤ x + √(2(x + x′)·log(2n/ε)), and bounding x by the induction assumption (7) this gives

x′ ≤ 2s²·log(2n/ε) + √((4s²·log(2n/ε) + 2x′)·log(2n/ε)).

Let a be such that x′ = 2a²·log(2n/ε). Substituting and dividing by 2·log(2n/ε) we get

a² ≤ s² + √((4s²·log²(2n/ε) + 4a²·log²(2n/ε)) / (4·log²(2n/ε))) = s² + √(s² + a²).

Lemma 4.5 When s ≥ 1, the inequality a² ≤ s² + √(s² + a²) holds only when a ≤ s + 1.

Proof. Denote A = a² and S = s². Then

A ≤ S + √(S + A) ⟹ (A − S)² ≤ S + A ⟹ A² − (2S + 1)·A + S(S − 1) ≤ 0.

Finding the roots of this quadratic yields:

A_{1,2} = (2S + 1 ± √((2S + 1)² − 4S(S − 1)))/2 = (2S + 1 ± √(8S + 1))/2.

Since the quadratic coefficient is 1, the quadratic takes negative values between A_1 and A_2, so

A ≤ (2S + 1 + √(8S + 1))/2.

Since √(8s² + 1) ≤ 4s + 1 for all s ≥ 0, this gives

a² ≤ (2s² + 1 + √(8s² + 1))/2 ≤ (2s² + 4s + 2)/2 = (s + 1)².

Hence, when s ≥ 1 we get a ≤ s + 1.

Lemma 4.5 proves that a ≤ s + 1. Hence, x′ = weight(p|B_{s+1}) ≤ 2(s + 1)²·log(2n/ε), which completes the induction.

Using this lemma we can now prove Theorem 4.3:

Proof. Since no user is returned by the algorithm, we know that weight(p|B_{n−1}) = d = 2n²·log(2n/ε). But Lemma 4.4 shows that for s = n−1, weight(p|B_{n−1}) ≤ 2(n−1)²·log(2n/ε) < d. This contradiction implies that the algorithm σ always outputs at least one user.

This scheme works with a binary alphabet and supports tracing for every size of coalition. Its weak spot is the codeword length, which is ℓ = d·n = 2n³·log(2n/ε) bits, much worse than the scheme presented in Section 3. We proceed to describe a composition scheme that combines the above scheme with that of Chor et al., described in the previous section. That scheme, as opposed to the previous one, is ε-secure against coalitions of size at most c only, but it offers shorter codewords.

4.2 A composition scheme

The Chor et al. (CFN) traitor tracing scheme produces codewords whose length is linear in the maximum size of the coalition. However, it depends upon a rich alphabet, whose size is twice the bound on the size of the coalition. This constraint limits the implementations of that scheme to settings in which it is technically possible to supply such large alphabets. Boneh and Shaw showed how to avoid that limit by proposing a composition scheme that combines the CFN scheme with their binary fingerprinting scheme. The idea is to start with codewords over the rich alphabet, as dictated by the CFN scheme, and then use the Boneh and Shaw scheme to encode each letter of the rich alphabet by a binary sequence of length O(c³·log(2c/ε)). Figure 5 illustrates this process: using CFN's 4-secure scheme, a codebook Γ was created over an 8-symbol alphabet Σ. The first 10 symbols of 3 words of the codebook are illustrated, w_1, w_2, w_3 ∈ Γ. The composition scheme replaces each symbol of Σ with a binary string composed by Boneh and Shaw's 8-secure scheme. A different 8-secure scheme is used for each marking spot.
The formal definition of the framework follows. As shown in Section 3, the CFN c-secure framework requires an alphabet of size r = 2c. Given the parameter c, we create an alphabet of size r for each marking spot. Denote by L the length of a codeword of the CFN scheme. We create L alphabet sets Σ_j = {σ_{j,1}, …, σ_{j,r}}, 1 ≤ j ≤ L. Each alphabet set Σ_j is then translated into binary sequences using the Boneh and Shaw code, as described in Section 4.1. All alphabet sets use d = 2r²·log(4nL/ε), whence each letter from Σ_j is replaced by a sequence of r·d = 2r³·log(4nL/ε) bits. It is important to note that each alphabet set is created using a unique secret permutation. The codebook Γ = {w_1, …, w_n} is composed as described in Section 3. Using the alphabet sets Σ_1, …, Σ_L, we generate the codewords by independently choosing L random hash functions h_j : {1, …, n} → {1, …, r}, 1 ≤ j ≤ L. Then

Γ = {w_i = σ_{1,h_1(i)} ⋯ σ_{L,h_L(i)} : 1 ≤ i ≤ n}.

We set L = 2r·log(2n/ε) = 4c·log(2n/ε). The total length of a codeword will therefore be

L·r·d = 4r⁴·log(2n/ε)·log(4nL/ε) = O(c⁴·log²(n/ε)) bits.

The personalization scheme is as before, namely, P(u_i) = w_i for 1 ≤ i ≤ n.
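The parameter bookkeeping of this section can be sketched numerically. The helper below uses the formulas above with illustrative values of n, c and ε of my own choosing (natural logarithms assumed):

```python
import math

# Codeword-length bookkeeping for the composition scheme, using the formulas
# above: r = 2c, L = 4c log(2n/eps), d = 2r^2 log(4nL/eps).

def composition_length(n: int, c: int, eps: float) -> int:
    r = 2 * c                                            # CFN alphabet size
    L = math.ceil(2 * r * math.log(2 * n / eps))         # CFN codeword length
    d = math.ceil(2 * r**2 * math.log(4 * n * L / eps))  # inner code block size
    return L * r * d                                     # total bits per user

def plain_length(n: int, eps: float) -> int:
    # length of the plain n-secure code of Section 4.1: 2 n^3 log(2n/eps)
    return math.ceil(2 * n**3 * math.log(2 * n / eps))

n, c, eps = 10**4, 8, 0.01
print(composition_length(n, c, eps))    # O(c^4 log^2(n/eps)) bits
print(plain_length(n, eps))             # far larger when n >> c
```

For any realistic setting with n much larger than c, the composition length is orders of magnitude below the plain binary scheme's 2n³·log(2n/ε) bits.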

Figure 5: The composition scheme

The generation assumption is the same as in the Chor et al. scheme. The tracing algorithm σ of this composition scheme, given p ∈ {0,1}^{L·r·d}, is as follows:

1. Apply the algorithm of the Boneh and Shaw scheme to each of the L marking spots. When applied to marking spot j, 1 ≤ j ≤ L, the algorithm outputs a symbol y_j ∈ Σ_j as the symbol in the jth spot. Concatenating all output symbols yields the composed codeword y_1 ⋯ y_L ∈ Σ_1 × ⋯ × Σ_L.

2. Find the codeword w_i ∈ Γ that has the maximal number of matches with y_1 ⋯ y_L, and output: u_i is a traitor. This is simply the majority algorithm from the CFN scheme.

Theorem 4.6 The maximum false incrimination probability of the above described composition scheme is bounded as follows: Pr[σ(p) ∈ U\T] < ε.

Proof. First, we compute the probability that the first step of the algorithm identifies a wrong symbol in one of the spots 1 ≤ j ≤ L, that is, Pr[y_j ∈ Σ_j\P_j], where P_j := {(t_1)_j, …, (t_c)_j} is the set of all symbols that the pirates hold in marking spot j. This probability is the same for all spots, and since we are using the Boneh and Shaw scheme with d = 2r²·log(4nL/ε), then, in view of the discussion in Section 4.1,

Pr[y_j ∈ Σ_j\P_j] ≤ ε/(2L).

Using the union bound we get

Pr[y_1 ⋯ y_L ∈ (Σ_1 × ⋯ × Σ_L) \ (P_1 × ⋯ × P_L)] ≤ L·(ε/(2L)) = ε/2.

We denote that event by E_1. We move on to the second step of the tracing algorithm, in which each codeword w_i ∈ Γ is compared to the codeword y_1 ⋯ y_L. The probability that the second step of the algorithm fails is Pr[u_i ∈ U\T], where u_i is the user whose codeword best matches y_1 ⋯ y_L. This probability was bounded in Section 3, and it is less than ε/2 since we picked L = 4c·log(2n/ε). Denote the event u_i ∈ U\T by E_2. The probability of a false incrimination is therefore

Pr[E_1 ∪ E_2] ≤ Pr[E_1] + Pr[E_2] ≤ ε/2 + ε/2 = ε.

5 Cox, Kilian, Leighton and Shamoon Image Watermarking Method

Digital images are ubiquitous in the digital world, so images and movies are data types that call for protection by watermarking. As discussed in the introduction, a good watermark should be invisible and robust. A robust watermark should be immune to various image processing methods, including:

Common signal processing. The watermark should still be retrievable even if common signal processing operations are applied to the data. These include digital-to-analog and analog-to-digital conversion, resampling, requantization (including dithering and recompression), and common signal enhancements, for example to image contrast and color.

Common geometric distortions. Watermarks in image and video data should also be immune to geometric image operations such as rotation, translation, cropping and scaling.

Collusion and forgery. In addition, the watermark should be robust to collusion by multiple individuals who possess different watermarked copies of the data. As discussed earlier, collusion resistance should guarantee that combining data from several watermarked copies will not allow the pirates to destroy the watermarks or to generate a watermark that frames an innocent user.

Cox et al. [3] suggested an effective watermarking method which applies to many types of digital data, including images.
The method is based upon the insertion of random noise into the image. The noise is inserted into the low-frequency range of the frequency domain, which holds the key elements of the image.

5.1 The watermarking procedure

The watermark is a sequence of real numbers X = x_1, …, x_n, where each value x_i is selected independently according to N(0, 1) (the standard normal distribution). Denote by D the image that we want to watermark. The following process is applied in order to mark it (see Figure 6).

Figure 6: The Cox et al. marking process

1. First we convert D from the RGB color scheme to the YCbCr scheme. The YCbCr color scheme is the one used in TV broadcasts so as to serve both color and black-and-white TV sets. The Y component is the luminance component, holding most of the image data, while Cb and Cr hold the chromaticity data. More bits are used to store the Y component than the chromaticity components. The conversion is done simply by multiplying every RGB pixel by a conversion matrix; for an 8-bit color representation the (approximate) conversion is:

Y = 0.3R + 0.6G + 0.1B
Cb = −0.2R − 0.3G + 0.5B
Cr = 0.5R − 0.4G − 0.1B

2. Transform the Y component of the image into the frequency domain using a frequency-domain transform (DCT, DFT, etc.).

3. A sequence of n locations in the frequency-domain representation is selected as marking spots. These locations should be mostly in the low-frequency range, which holds the key elements of the image. Denote the values at the selected locations by V = v_1, …, v_n. Watermarking of image D with watermark X is done by combining V and X, changing the values of V to V′ in one of three manners:

v′_i = v_i + αx_i (9)
v′_i = v_i(1 + αx_i) (10)
v′_i = v_i·e^{αx_i} (11)

Equation (9) is always invertible, in the sense that the mark x_i may be extracted from the unmarked value v_i and the marked one v′_i. Equations (10) and (11), on the other hand, are invertible only if v_i ≠ 0. Each of the insertion methods is useful for a different type of image: method (9) is good when the values of V do not vary widely, while methods (10) and (11) give better results when they do. This is because the change made by equations (10) and (11) is relative to the value of v_i, whereas that of equation (9) is the same for all v_i values. For example, if v_1 = 15 while v_2 is several orders of magnitude larger, it will be hard to find a value of α for which equation (9) gives a good result, yet it is possible to find such a value for equations (10) and (11).
This step yields a new sequence of values V′ = v′_1, …, v′_n.

4. Transform the Y component of the image back to the spatial domain using the inverse transform.

5. Unite the Y component with the original chromaticity components and color-transform the image back to the original color scheme. This yields an image D′ that should be perceptually almost identical to D.

All of the steps are completely lossless. The process of extracting a mark from an image consists of the same steps, except that in the 3rd step X is recovered from V′ by inverting the insertion method.
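The three insertion rules and their inversion during extraction can be sketched directly on a vector of selected coefficients (the color and frequency transforms are omitted; α, the sample coefficients and the numpy dependency are my own illustrative choices):

```python
import numpy as np

# Sketch of the insertion rules (9)-(11) and their inversion, applied directly
# to a vector V of selected frequency coefficients.

rng = np.random.default_rng(0)
alpha = 0.1
V = np.array([310.0, -42.5, 128.0, 77.3])   # selected coefficients, nonzero
X = rng.standard_normal(V.size)             # watermark values x_i ~ N(0, 1)

V_add = V + alpha * X            # rule (9):  v'_i = v_i + alpha * x_i
V_mul = V * (1 + alpha * X)      # rule (10): v'_i = v_i (1 + alpha * x_i)
V_exp = V * np.exp(alpha * X)    # rule (11): v'_i = v_i e^{alpha x_i}

# Extraction: recover x_i from the marked and unmarked coefficients.
X_add = (V_add - V) / alpha
X_mul = (V_mul / V - 1) / alpha          # requires v_i != 0
X_exp = np.log(V_exp / V) / alpha        # requires v_i != 0

assert np.allclose(X_add, X) and np.allclose(X_mul, X) and np.allclose(X_exp, X)
```

As the discussion above notes, rules (10) and (11) scale the perturbation with |v_i|, which is why they tolerate coefficient vectors of widely varying magnitude.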

5.2 Evaluating the similarities of watermarks

When acquiring a pirate copy of the image D, we can extract its watermark X* as discussed in Section 5.1. Extracting a mark X* that is completely identical to the original watermark X of the image is highly unlikely, since the pirate image was probably processed. We therefore have to measure the similarity between the watermarks X and X*. This is done using the following score:

sim(X, X*) = (X·X*) / √(X*·X*). (12)

Claim 5.1 For any given X*, sim(X, X*) ~ N(0, 1).

Proof. By definition, x_i ~ N(0, 1). Hence, c_i·x_i ~ N(0, c_i²) for any constant c_i. Using this we see that

Σ_{i=1}^n c_i·x_i ~ N(0, Σ_{i=1}^n c_i²).

So, given any value of X*,

X·X* ~ N(0, X*·X*).

This implies that

sim(X, X*) ~ N(0, 1).

Using Claim 5.1 we conclude that Pr[sim(X, X*) > t] decays exponentially fast with t; for example, Pr[sim(X, X*) > 6] ≈ 10⁻⁹. Therefore, if sim(X, X*) exceeds some threshold, we may conclude with high probability that the pirate copy was originally marked with X.

Cox et al. tested the watermark's resilience to most kinds of image processing techniques described above. The results show that even after severely distorting the watermarked image and harming its quality, the original watermark is still readable from the distorted image. Cox et al. also tested the resilience of their watermarks to coalitions: they averaged five separately watermarked images, and the watermark extracted from the resulting image still showed traces of all five original watermarks. (They did not report experiments with larger coalitions.)

6 An Optimal Binary Watermarking

In this section we introduce an algorithm due to Tardos [6]. This is an optimal probabilistic watermarking algorithm. We begin by describing the algorithms for codeword creation and for accusation, and then we prove the correctness of the algorithm.

6.1 The Tardos fingerprinting code

The code of Tardos is binary, and it uses randomization to a greater extent than the previous codes that we discussed.
We proceed to describe the two phases of the algorithm. The first phase is the construction of the codebook, namely, the codewords that are embedded in the copies distributed to the users. Once a pirate copy of the copyrighted material is captured, the codeword embedded in that copy is extracted and compared to all codewords in the codebook. An accusation algorithm is then executed in order to identify a user as a likely traitor,

that took part in the creation of that illegal copy. This is the second phase of the algorithm. It is probabilistic, since the accusation is always accompanied by a small probability of error that is bounded by some parameter ε. Hereinafter, we set k = log(1/ε).

Phase 1: Creating the codebook

Let n be the number of users, U = {u_1, …, u_n} the set of users, T ⊆ U the coalition of traitors, and c an upper bound on the size of the coalition, c ≥ |T|. User u_i receives a codeword P(u_i) ∈ {0,1}^m. Those codewords are generated as follows:

1. Set the length of the codewords to m = d_m·c²·k, for some integral constant d_m.

2. Select probabilities p_j for all 1 ≤ j ≤ m, in a manner that is explained below.

3. For all 1 ≤ i ≤ n (a loop over all users) and for all 1 ≤ j ≤ m (a loop over the bits of the codeword of that user), set P(u_i)_j = 1 with probability p_j and P(u_i)_j = 0 otherwise.

The resulting codebook, {P(u_1), …, P(u_n)} ⊆ {0,1}^m, is denoted F_{n,c,ε}.

Next, we describe the critical part of selecting the probabilities. Let d_t ≥ 1 be some constant, set t = 1/(d_t·c), and let t′ be such that sin² t′ = t. Since we assume hereinafter that c > 2, we have t ∈ (0, 1/2) and, consequently, t′ ∈ (0, π/4). With this choice of parameters, we select r_j uniformly at random from the interval [t′, π/2 − t′], and then set p_j = sin² r_j, 1 ≤ j ≤ m.

Phase 2: Accusation

Define the array

V_{i,j} = √((1 − p_j)/p_j) if P(u_i)_j = 1, and V_{i,j} = −√(p_j/(1 − p_j)) if P(u_i)_j = 0, for 1 ≤ i ≤ n, 1 ≤ j ≤ m. (13)

Assume that the codeword extracted from the pirate copy is y ∈ {0,1}^m. Then user u_i, 1 ≤ i ≤ n, will be accused if

Σ_{j=1}^m y_j·V_{i,j} > z, (14)

where z = d_z·c·k and d_z is some constant. The set of all users that are accused by this algorithm when the pirate codeword is y is denoted σ(y).

Properties of the code

The code that we described above uses several undetermined constants: d_m and d_t from Phase 1, and d_z from Phase 2. The values that were set by Tardos for those constants were:

d_m = 100, d_t = 300, d_z = 20.
(15)

With this choice of constants, and under generation assumption (5), Tardos proved the following claims.

Theorem 6.1 Let u_i ∈ U be an arbitrary user, and let T ⊆ U\{u_i} be a coalition of arbitrary size. Then for any codeword y ∈ P(T),

Pr[u_i ∈ σ(y)] < ε.

Theorem 6.2 Let T ⊆ U be a coalition of size |T| ≤ c. Then for any codeword y ∈ P(T),

Pr[T ∩ σ(y) = ∅] < ε^{c/4}.

Theorem 6.1 bounds the probability of falsely accusing a single innocent user when using the codebook F_{n,c,ε}. Hence, in order to achieve ε-security, one needs to use the codebook F_{n,c,ε/n}. In view of Theorem 6.2, that code is in fact (ε, ε̂)-secure (in the sense of Definition 2.1) with ε̂ = (ε/n)^{c/4}. Note that ε̂ ≪ ε, because n is typically a very large number and c, being an assumed upper bound on the size of the coalition of pirates (and not the actual size of the active coalition), is usually set to values for which c/4 ≥ 1.

6.2 Our contribution

The choice of constants made by Tardos, (15), seems, at first, arbitrary. As this code has a minimal length to within a constant factor, a natural question arises: does there exist a different choice of constants with a smaller value of d_m, the constant that determines the codeword length? In this study we retrace Tardos' analysis and extract from it all constants that were arbitrarily selected. We identify a set of seven constants that appear in Tardos' analysis and could be tuned differently in order to decrease the codeword length. Those constants include the above-mentioned d_m, d_t and d_z, and four other constants that pop up in the proofs of Theorems 6.1 and 6.2. We replace those constants with parameters and derive a set of inequalities that those parameters must satisfy in order for Theorems 6.1 and 6.2 to hold. Then, we look for a solution of those inequalities in which d_m is minimal.

Another way to reduce the codeword length is through the error probabilities in Theorems 6.1 and 6.2. As noted at the end of Section 6.1, the Tardos fingerprinting scheme is (ε, ε̂)-secure with ε̂ ≪ ε.
However, a better setting of the error probabilities is one in which ε̂ ≫ ε, because accusing an innocent user of forgery is much worse than allowing a pirate to act undetected. Since pirates tend to repeat their actions, the probability that they act undetected for a long period of time is slim. For example, while ε would typically be set to a small value such as ε = 10⁻³ or ε = 10⁻⁴, ε̂ could be set to ε̂ = 1/2 or even larger, because the expected number of piracy actions until one pirate is finally traced would not exceed 1/(1 − ε̂). For this reason, we decouple the two error probabilities in Theorems 6.1 and 6.2. This decoupling allows us to further decrease the value of d_m. We keep denoting the error probability in Theorem 6.1 by ε, while the error probability in Theorem 6.2 will be denoted ε̂. Finally, we set η = log_ε(ε̂), so that ε̂ = ε^η.

Example. Assume that n = 10⁶ and it is desired to achieve (10⁻³, 3/4)-security (namely, the probability of accusing any innocent user is bounded by 10⁻³, while the expected number of piracy acts until the scheme traces a true pirate is 4). Then in Theorem 6.1 we set ε = 10⁻³/n = 10⁻⁹, while in Theorem 6.2 we set ε̂ = 3/4. In this case, η = log_{10⁻⁹}(3/4) = (1/9)·log₁₀(4/3) ≈ 0.0139.

In Section 7 we carry out the above-described analysis of the proofs of Theorems 6.1 and 6.2. Given n, c, ε and η, we determine the domain Ω = {(d_m, d_t, d_z)} ⊆ R³ of all triplets (d_m, d_t, d_z)


More information

8 Security against Chosen Plaintext

8 Security against Chosen Plaintext 8 Security against Chosen Plaintext Attacks We ve already seen a definition that captures security of encryption when an adversary is allowed to see just one ciphertext encrypted under the key. Clearly

More information

Lecture 5, CPA Secure Encryption from PRFs

Lecture 5, CPA Secure Encryption from PRFs CS 4501-6501 Topics in Cryptography 16 Feb 2018 Lecture 5, CPA Secure Encryption from PRFs Lecturer: Mohammad Mahmoody Scribe: J. Fu, D. Anderson, W. Chao, and Y. Yu 1 Review Ralling: CPA Security and

More information

1 More finite deterministic automata

1 More finite deterministic automata CS 125 Section #6 Finite automata October 18, 2016 1 More finite deterministic automata Exercise. Consider the following game with two players: Repeatedly flip a coin. On heads, player 1 gets a point.

More information

2.6 Complexity Theory for Map-Reduce. Star Joins 2.6. COMPLEXITY THEORY FOR MAP-REDUCE 51

2.6 Complexity Theory for Map-Reduce. Star Joins 2.6. COMPLEXITY THEORY FOR MAP-REDUCE 51 2.6. COMPLEXITY THEORY FOR MAP-REDUCE 51 Star Joins A common structure for data mining of commercial data is the star join. For example, a chain store like Walmart keeps a fact table whose tuples each

More information

Application: Bucket Sort

Application: Bucket Sort 5.2.2. Application: Bucket Sort Bucket sort breaks the log) lower bound for standard comparison-based sorting, under certain assumptions on the input We want to sort a set of =2 integers chosen I+U@R from

More information

Cryptanalysis of a Message Authentication Code due to Cary and Venkatesan

Cryptanalysis of a Message Authentication Code due to Cary and Venkatesan Cryptanalysis of a Message Authentication Code due to Cary and Venkatesan Simon R. Blackburn and Kenneth G. Paterson Department of Mathematics Royal Holloway, University of London Egham, Surrey, TW20 0EX,

More information

New Attacks on the Concatenation and XOR Hash Combiners

New Attacks on the Concatenation and XOR Hash Combiners New Attacks on the Concatenation and XOR Hash Combiners Itai Dinur Department of Computer Science, Ben-Gurion University, Israel Abstract. We study the security of the concatenation combiner H 1(M) H 2(M)

More information

Codes for Partially Stuck-at Memory Cells

Codes for Partially Stuck-at Memory Cells 1 Codes for Partially Stuck-at Memory Cells Antonia Wachter-Zeh and Eitan Yaakobi Department of Computer Science Technion Israel Institute of Technology, Haifa, Israel Email: {antonia, yaakobi@cs.technion.ac.il

More information

CPSC 467b: Cryptography and Computer Security

CPSC 467b: Cryptography and Computer Security CPSC 467b: Cryptography and Computer Security Michael J. Fischer Lecture 9 February 6, 2012 CPSC 467b, Lecture 9 1/53 Euler s Theorem Generating RSA Modulus Finding primes by guess and check Density of

More information

Shannon s Noisy-Channel Coding Theorem

Shannon s Noisy-Channel Coding Theorem Shannon s Noisy-Channel Coding Theorem Lucas Slot Sebastian Zur February 2015 Abstract In information theory, Shannon s Noisy-Channel Coding Theorem states that it is possible to communicate over a noisy

More information

Lecture 3: Error Correcting Codes

Lecture 3: Error Correcting Codes CS 880: Pseudorandomness and Derandomization 1/30/2013 Lecture 3: Error Correcting Codes Instructors: Holger Dell and Dieter van Melkebeek Scribe: Xi Wu In this lecture we review some background on error

More information

2.3 Some Properties of Continuous Functions

2.3 Some Properties of Continuous Functions 2.3 Some Properties of Continuous Functions In this section we look at some properties, some quite deep, shared by all continuous functions. They are known as the following: 1. Preservation of sign property

More information

1 What are Physical Attacks. 2 Physical Attacks on RSA. Today:

1 What are Physical Attacks. 2 Physical Attacks on RSA. Today: Today: Introduction to the class. Examples of concrete physical attacks on RSA A computational approach to cryptography Pseudorandomness 1 What are Physical Attacks Tampering/Leakage attacks Issue of how

More information

1 Cryptographic hash functions

1 Cryptographic hash functions CSCI 5440: Cryptography Lecture 6 The Chinese University of Hong Kong 24 October 2012 1 Cryptographic hash functions Last time we saw a construction of message authentication codes (MACs) for fixed-length

More information

Cryptography and Network Security Prof. D. Mukhopadhyay Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur

Cryptography and Network Security Prof. D. Mukhopadhyay Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Cryptography and Network Security Prof. D. Mukhopadhyay Department of Computer Science and Engineering Indian Institute of Technology, Kharagpur Module No. # 01 Lecture No. # 08 Shannon s Theory (Contd.)

More information

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding Tim Roughgarden October 29, 2014 1 Preamble This lecture covers our final subtopic within the exact and approximate recovery part of the course.

More information

Entropy and Ergodic Theory Lecture 3: The meaning of entropy in information theory

Entropy and Ergodic Theory Lecture 3: The meaning of entropy in information theory Entropy and Ergodic Theory Lecture 3: The meaning of entropy in information theory 1 The intuitive meaning of entropy Modern information theory was born in Shannon s 1948 paper A Mathematical Theory of

More information

17.1 Binary Codes Normal numbers we use are in base 10, which are called decimal numbers. Each digit can be 10 possible numbers: 0, 1, 2, 9.

17.1 Binary Codes Normal numbers we use are in base 10, which are called decimal numbers. Each digit can be 10 possible numbers: 0, 1, 2, 9. ( c ) E p s t e i n, C a r t e r, B o l l i n g e r, A u r i s p a C h a p t e r 17: I n f o r m a t i o n S c i e n c e P a g e 1 CHAPTER 17: Information Science 17.1 Binary Codes Normal numbers we use

More information

Winter 2008 Introduction to Modern Cryptography Benny Chor and Rani Hod. Assignment #2

Winter 2008 Introduction to Modern Cryptography Benny Chor and Rani Hod. Assignment #2 0368.3049.01 Winter 2008 Introduction to Modern Cryptography Benny Chor and Rani Hod Assignment #2 Published Sunday, February 17, 2008 and very slightly revised Feb. 18. Due Tues., March 4, in Rani Hod

More information

Improved High-Order Conversion From Boolean to Arithmetic Masking

Improved High-Order Conversion From Boolean to Arithmetic Masking Improved High-Order Conversion From Boolean to Arithmetic Masking Luk Bettale 1, Jean-Sébastien Coron 2, and Rina Zeitoun 1 1 IDEMIA, France luk.bettale@idemia.com, rina.zeitoun@idemia.com 2 University

More information

6.842 Randomness and Computation Lecture 5

6.842 Randomness and Computation Lecture 5 6.842 Randomness and Computation 2012-02-22 Lecture 5 Lecturer: Ronitt Rubinfeld Scribe: Michael Forbes 1 Overview Today we will define the notion of a pairwise independent hash function, and discuss its

More information

Solution of Exercise Sheet 7

Solution of Exercise Sheet 7 saarland Foundations of Cybersecurity (Winter 16/17) Prof. Dr. Michael Backes CISPA / Saarland University university computer science Solution of Exercise Sheet 7 1 Variants of Modes of Operation Let (K,

More information

Lecture 4: Proof of Shannon s theorem and an explicit code

Lecture 4: Proof of Shannon s theorem and an explicit code CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated

More information

1 Indistinguishability for multiple encryptions

1 Indistinguishability for multiple encryptions CSCI 5440: Cryptography Lecture 3 The Chinese University of Hong Kong 26 September 2012 1 Indistinguishability for multiple encryptions We now have a reasonable encryption scheme, which we proved is message

More information

Error Correcting Codes Prof. Dr. P. Vijay Kumar Department of Electrical Communication Engineering Indian Institute of Science, Bangalore

Error Correcting Codes Prof. Dr. P. Vijay Kumar Department of Electrical Communication Engineering Indian Institute of Science, Bangalore (Refer Slide Time: 00:15) Error Correcting Codes Prof. Dr. P. Vijay Kumar Department of Electrical Communication Engineering Indian Institute of Science, Bangalore Lecture No. # 03 Mathematical Preliminaries:

More information

(Classical) Information Theory III: Noisy channel coding

(Classical) Information Theory III: Noisy channel coding (Classical) Information Theory III: Noisy channel coding Sibasish Ghosh The Institute of Mathematical Sciences CIT Campus, Taramani, Chennai 600 113, India. p. 1 Abstract What is the best possible way

More information

Error-correcting codes and applications

Error-correcting codes and applications Error-correcting codes and applications November 20, 2017 Summary and notation Consider F q : a finite field (if q = 2, then F q are the binary numbers), V = V(F q,n): a vector space over F q of dimension

More information

Entanglement and information

Entanglement and information Ph95a lecture notes for 0/29/0 Entanglement and information Lately we ve spent a lot of time examining properties of entangled states such as ab è 2 0 a b è Ý a 0 b è. We have learned that they exhibit

More information

Lattice Cryptography

Lattice Cryptography CSE 06A: Lattice Algorithms and Applications Winter 01 Instructor: Daniele Micciancio Lattice Cryptography UCSD CSE Many problems on point lattices are computationally hard. One of the most important hard

More information

Cryptographic Hash Functions

Cryptographic Hash Functions Cryptographic Hash Functions Çetin Kaya Koç koc@ece.orst.edu Electrical & Computer Engineering Oregon State University Corvallis, Oregon 97331 Technical Report December 9, 2002 Version 1.5 1 1 Introduction

More information

Lecture Introduction. 2 Formal Definition. CS CTT Current Topics in Theoretical CS Oct 30, 2012

Lecture Introduction. 2 Formal Definition. CS CTT Current Topics in Theoretical CS Oct 30, 2012 CS 59000 CTT Current Topics in Theoretical CS Oct 30, 0 Lecturer: Elena Grigorescu Lecture 9 Scribe: Vivek Patel Introduction In this lecture we study locally decodable codes. Locally decodable codes are

More information

Simple Math: Cryptography

Simple Math: Cryptography 1 Introduction Simple Math: Cryptography This section develops some mathematics before getting to the application. The mathematics that I use involves simple facts from number theory. Number theory is

More information

Lecture 9 - Symmetric Encryption

Lecture 9 - Symmetric Encryption 0368.4162: Introduction to Cryptography Ran Canetti Lecture 9 - Symmetric Encryption 29 December 2008 Fall 2008 Scribes: R. Levi, M. Rosen 1 Introduction Encryption, or guaranteeing secrecy of information,

More information

CONSTRUCTION OF THE REAL NUMBERS.

CONSTRUCTION OF THE REAL NUMBERS. CONSTRUCTION OF THE REAL NUMBERS. IAN KIMING 1. Motivation. It will not come as a big surprise to anyone when I say that we need the real numbers in mathematics. More to the point, we need to be able to

More information

Computational Tasks and Models

Computational Tasks and Models 1 Computational Tasks and Models Overview: We assume that the reader is familiar with computing devices but may associate the notion of computation with specific incarnations of it. Our first goal is to

More information

Cryptographic Protocols Notes 2

Cryptographic Protocols Notes 2 ETH Zurich, Department of Computer Science SS 2018 Prof. Ueli Maurer Dr. Martin Hirt Chen-Da Liu Zhang Cryptographic Protocols Notes 2 Scribe: Sandro Coretti (modified by Chen-Da Liu Zhang) About the notes:

More information

arxiv: v1 [cs.ds] 3 Feb 2018

arxiv: v1 [cs.ds] 3 Feb 2018 A Model for Learned Bloom Filters and Related Structures Michael Mitzenmacher 1 arxiv:1802.00884v1 [cs.ds] 3 Feb 2018 Abstract Recent work has suggested enhancing Bloom filters by using a pre-filter, based

More information

Lecture Notes, Week 6

Lecture Notes, Week 6 YALE UNIVERSITY DEPARTMENT OF COMPUTER SCIENCE CPSC 467b: Cryptography and Computer Security Week 6 (rev. 3) Professor M. J. Fischer February 15 & 17, 2005 1 RSA Security Lecture Notes, Week 6 Several

More information

A Polynomial-Time Algorithm for Pliable Index Coding

A Polynomial-Time Algorithm for Pliable Index Coding 1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n

More information

channel of communication noise Each codeword has length 2, and all digits are either 0 or 1. Such codes are called Binary Codes.

channel of communication noise Each codeword has length 2, and all digits are either 0 or 1. Such codes are called Binary Codes. 5 Binary Codes You have already seen how check digits for bar codes (in Unit 3) and ISBN numbers (Unit 4) are used to detect errors. Here you will look at codes relevant for data transmission, for example,

More information

Colored Bin Packing: Online Algorithms and Lower Bounds

Colored Bin Packing: Online Algorithms and Lower Bounds Noname manuscript No. (will be inserted by the editor) Colored Bin Packing: Online Algorithms and Lower Bounds Martin Böhm György Dósa Leah Epstein Jiří Sgall Pavel Veselý Received: date / Accepted: date

More information

Notes on Alekhnovich s cryptosystems

Notes on Alekhnovich s cryptosystems Notes on Alekhnovich s cryptosystems Gilles Zémor November 2016 Decisional Decoding Hypothesis with parameter t. Let 0 < R 1 < R 2 < 1. There is no polynomial-time decoding algorithm A such that: Given

More information

Multimedia Communications. Mathematical Preliminaries for Lossless Compression

Multimedia Communications. Mathematical Preliminaries for Lossless Compression Multimedia Communications Mathematical Preliminaries for Lossless Compression What we will see in this chapter Definition of information and entropy Modeling a data source Definition of coding and when

More information

1 Recommended Reading 1. 2 Public Key/Private Key Cryptography Overview RSA Algorithm... 2

1 Recommended Reading 1. 2 Public Key/Private Key Cryptography Overview RSA Algorithm... 2 Contents 1 Recommended Reading 1 2 Public Key/Private Key Cryptography 1 2.1 Overview............................................. 1 2.2 RSA Algorithm.......................................... 2 3 A Number

More information

Introduction to Modern Cryptography Lecture 11

Introduction to Modern Cryptography Lecture 11 Introduction to Modern Cryptography Lecture 11 January 10, 2017 Instructor: Benny Chor Teaching Assistant: Orit Moskovich School of Computer Science Tel-Aviv University Fall Semester, 2016 17 Tuesday 12:00

More information

EE376A: Homework #2 Solutions Due by 11:59pm Thursday, February 1st, 2018

EE376A: Homework #2 Solutions Due by 11:59pm Thursday, February 1st, 2018 Please submit the solutions on Gradescope. Some definitions that may be useful: EE376A: Homework #2 Solutions Due by 11:59pm Thursday, February 1st, 2018 Definition 1: A sequence of random variables X

More information

Entropy as a measure of surprise

Entropy as a measure of surprise Entropy as a measure of surprise Lecture 5: Sam Roweis September 26, 25 What does information do? It removes uncertainty. Information Conveyed = Uncertainty Removed = Surprise Yielded. How should we quantify

More information

On the Impossibility of Black-Box Truthfulness Without Priors

On the Impossibility of Black-Box Truthfulness Without Priors On the Impossibility of Black-Box Truthfulness Without Priors Nicole Immorlica Brendan Lucier Abstract We consider the problem of converting an arbitrary approximation algorithm for a singleparameter social

More information

Lecture 1: Perfect Secrecy and Statistical Authentication. 2 Introduction - Historical vs Modern Cryptography

Lecture 1: Perfect Secrecy and Statistical Authentication. 2 Introduction - Historical vs Modern Cryptography CS 7880 Graduate Cryptography September 10, 2015 Lecture 1: Perfect Secrecy and Statistical Authentication Lecturer: Daniel Wichs Scribe: Matthew Dippel 1 Topic Covered Definition of perfect secrecy One-time

More information

Characterizing Ideal Weighted Threshold Secret Sharing

Characterizing Ideal Weighted Threshold Secret Sharing Characterizing Ideal Weighted Threshold Secret Sharing Amos Beimel Tamir Tassa Enav Weinreb August 12, 2004 Abstract Weighted threshold secret sharing was introduced by Shamir in his seminal work on secret

More information

Number theory (Chapter 4)

Number theory (Chapter 4) EECS 203 Spring 2016 Lecture 12 Page 1 of 8 Number theory (Chapter 4) Review Compute 6 11 mod 13 in an efficient way What is the prime factorization of 100? 138? What is gcd(100, 138)? What is lcm(100,138)?

More information

Learning outcomes. Palettes and GIF. The colour palette. Using the colour palette The GIF file. CSM25 Secure Information Hiding

Learning outcomes. Palettes and GIF. The colour palette. Using the colour palette The GIF file. CSM25 Secure Information Hiding Learning outcomes Palettes and GIF CSM25 Secure Information Hiding Dr Hans Georg Schaathun University of Surrey Learn how images are represented using a palette Get an overview of hiding techniques in

More information