LDPC Codes in the McEliece Cryptosystem


Marco Baldi and Franco Chiaraluce, Member, IEEE

Abstract

The original McEliece cryptosystem, based on Goppa codes, has two important drawbacks: long keys and a low transmission rate. LDPC codes seem natural candidates to overcome such drawbacks, because of their sparse parity-check matrices, which could form the public keys, and their flexibility in transmission rate. Moreover, quasi-cyclic (QC) LDPC codes could permit a further reduction of the key length. Such expected advantages, however, must be combined with the need to ensure security. In this paper, we first consider a previous LDPC-based instance of the McEliece cryptosystem and develop a thorough cryptanalysis to quantify the risk level of various attacks. We verify that some families of QC-LDPC codes, like the one based on circulant permutation matrices, cannot be used, since they are vulnerable to total break attacks. Instead, we propose a variant of the difference families approach that permits the design of codes not exposed to the same threat. However, a killer attack exists, based on the dual of the secret code, that is able to produce a total break of the cryptosystem. As a countermeasure, we propose a new instance of the system that is immune to all the considered attacks and can achieve the stated objectives with limited complexity.

Index Terms: Quasi-Cyclic Low-Density Parity-Check Codes, Cryptography, McEliece cryptosystem.

The material in this paper was presented in part at the IEEE International Symposium on Information Theory, Nice, France, June 2007, and at the IEEE International Conference on Communications, Glasgow, Scotland, June 2007. The authors are with the Department of Electronics, Artificial Intelligence and Telecommunications, Università Politecnica delle Marche, Ancona, Italy (e-mail: m.baldi@univpm.it; f.chiaraluce@univpm.it).

I. INTRODUCTION

Error correcting codes have long had an important place in cryptography. In 1978, just a couple of years after the publication of the pioneering work of Diffie and Hellman on the use of a double key [1], McEliece proposed a public-key cryptosystem based on algebraic coding theory [2] that turned out to have a very high security level. The rationale of the McEliece algorithm, which adopts a generator matrix as the private key and a transformation of it as the public key, lies in the difficulty of decoding a large linear code with no visible structure, which is in fact known to be an NP-complete problem [3].

The original McEliece cryptosystem is still unbroken, in the sense that no algorithm able to realize a total break in an acceptable time has been presented up to now. On the other hand, a vast body of literature exists on local deduction attacks, i.e., attacks aimed at finding the plaintext of intercepted ciphertexts without knowing the secret key. Despite the advances in the field, however, the complexity of this kind of violation remains very high, and quite intractable in practice. Moreover, the system is two or three orders of magnitude faster than competing solutions, such as RSA, which is probably the most popular public-key algorithm currently in use.

In spite of these merits, however, the cryptographic community has devoted less attention to the McEliece cryptosystem than to other solutions, for two main reasons: i) the large size of its public key and ii) its low transmission rate (about 0.5). Conversely, attention has been focused on a small number of cryptosystems, such as the Diffie-Hellman procedure, the aforementioned RSA and, more recently, elliptic curve cryptography. All these schemes are based on either an integer factoring problem (IFP) or a discrete logarithm problem (DLP). There is an increasing conviction that practical algorithms able to break both IFP and DLP in polynomial time will be discovered soon. Actually, Shor [4] has already found a (probabilistic) polynomial-time algorithm potentially able to break both IFP and DLP. At present, however, this algorithm would require a quantum computer, so it cannot be implemented. Anyway, the discovery highlights a potential weakness of these classic solutions and suggests revisiting alternative solutions, like the McEliece cryptosystem, that rely on neither IFP nor DLP [5].

In his original formulation, McEliece used Goppa codes of length n = 1024, dimension k = 524 and minimum distance d_min = 101, able to correct t = 50 errors. Several attempts have been made since then to overcome the drawbacks of the original system and/or to further reduce its complexity. Among them, it is worth mentioning those based on the use of Generalized Reed-Solomon (GRS) codes and some types of concatenated codes. Neither of these classes of codes, however, passed the cryptanalysis verification [6], [7].

GRS codes, in particular, were initially considered for an important variant of the McEliece cryptosystem, proposed by Niederreiter [8]. The Niederreiter cryptosystem can be seen as a dual version of the McEliece cryptosystem, since it uses a parity-check matrix as (the most important part of) the public key, instead of a generator matrix; its security is equivalent to that of the McEliece system, for the same code [9], but it shows some advantages, such as a key size more than halved (when considering the values reported above), a slightly increased transmission rate, and the possibility of using a public key in systematic form. So, though renouncing the use of GRS codes, the Niederreiter system is considered a good alternative to the McEliece system. Moreover, the Niederreiter cryptosystem can serve as the basis for digital signature schemes [10].

Trying to counter the attack on GRS-based systems, Gabidulin et al. proposed a modified structure of the Niederreiter public key [11]; as verified by Gibson [12], however, this variant requires a longer code in order to achieve equivalent security; for this reason, although remaining conceptually interesting, the Gabidulin system has been rapidly abandoned. Rao and Nam introduced the so-called private-key algebraic-coded cryptosystems [13] in order to provide higher security with simpler error-correcting codes. Possible flaws of this scheme were found by Struik and van Tilburg [14], yielding a revision of the former proposal [15]. Another modification of the McEliece cryptosystem, using the so-called maximum-rank-distance (MRD) codes, was also proposed in [11]. This variant is also known as the Gabidulin-Paramonov-Tretjakov (GPT) cryptosystem, but was also criticized in [12], where a successful attack was found against the system with its original parameters. New values of the parameters were subsequently given in order to avoid the attack.

A distinctive feature of the McEliece solution, and of its proposed variants, implied by the adoption of an error correcting code integrated within the cryptosystem, is the ability to combine encryption with error correction. A practical way to realize this integration was proposed by Riek [16], while the trade-off between error control and security was investigated by Alabbadi and Wicker [17]. All these solutions adopt classic block codes with hard-decision decoding.

On the other hand, the current scenario of error correcting codes is dominated by new schemes using Soft-Input Soft-Output (SISO) decoding, essentially all derived from the discovery of the turbo principle. Among them, an important role is played by Low-Density Parity-Check (LDPC) codes, which allow approaching the theoretical Shannon limit [18] while ensuring reduced complexity. For this reason, LDPC codes are considered in the most recent telecommunication standards [19]-[21].

Therefore, the idea behind this paper is to verify the applicability of LDPC codes in the framework of the McEliece system, with regard to both the possibility of overcoming the drawbacks of key length and transmission rate and the capability to withstand classic and new attacks on the system security. Actually, a first proposal for the adoption of LDPC codes in the McEliece system was presented by Monico et al. in [22]. In that paper, however, the analysis was limited to qualitatively showing that random-based LDPC codes do not permit a reduction of the key size, while cryptanalysis considered only one attack, which will be recalled in the following. In this paper, we show that the cryptosystem proposed in [22] is not secure, since an attack exists, based on the dual of the secret code, that is able to totally break it with limited complexity, unless particular choices for its parameters are made. In these cases, however, the key size and the system complexity grow, while the code rate is reduced, to the point that interest in such a cryptosystem is no longer justified. We propose, instead, a new version of the McEliece cryptosystem based on LDPC codes that is immune to such an attack, even for practical choices of the system parameters.

Contrary to [22], our study is focused on the class of Quasi-Cyclic (QC) LDPC codes. This class has recently gained increasing interest, as it combines excellent error rate performance with simple encoding implementation. Since, in order to store the parity-check matrix or the generator matrix of a QC-LDPC code, it is sufficient to give only a few rows (one in some cases), from which the whole matrix can be derived, these codes appear as potential candidates to solve the problem of the large key, permitting, in principle, key lengths proportional to the code length or even to the information sequence length (whilst the dependence is quadratic for the original McEliece cryptosystem). At the same time, the great flexibility in designing LDPC codes with variable rate raises expectations of an increased transmission rate.

The proposal to employ quasi-cyclic (but not LDPC) codes for shortening the public key of the McEliece cryptosystem has also been presented, more recently and independently, by Gaborit [23]. His scheme uses a large set of subcodes of a given BCH code in place of the Goppa code of the original system; so, though the idea of using subcodes is very interesting and perhaps reusable in different contexts, it still relies on classic structures of the parity-check (or generator) matrix, while the sparse character of the parity-check matrix of LDPC codes has a number of implications that must be carefully considered from the cryptanalysis point of view.

The organization of the paper is as follows. In Section II we introduce the McEliece cryptosystem, first recalling the original structure and then discussing an implementation based on LDPC codes, very similar to that proposed in [22]. In Section III we present the family of QC-LDPC codes considered in the study, with emphasis on the codes designed through a modified version of the Difference Families approach. Section IV is devoted to the cryptanalysis, and contains the description of both already known and new attacks able to threaten the security of the proposed system. The results of the cryptanalysis permit verification of the inapplicability of some families of codes, and the design of suitable parameters for the acceptable families. Once the most dangerous attack has been identified, the new scheme is proposed in Section V; it ensures satisfactory system robustness against all the examined attacks, with reduced key size and increased transmission rate. Finally, some concluding remarks are presented in Section VI.

II. THE MCELIECE CRYPTOSYSTEM

The McEliece cryptosystem is an asymmetric cryptosystem based on linear block codes. Not all codes are suitable for this kind of application. In order to be used in the McEliece cryptosystem, a linear block code must satisfy the following conditions [24]:

1) for given code length, dimension and error correction capability, the family Γ of codes, among which one is chosen to serve as the secret code, is large enough to avoid any enumeration;
2) an efficient algorithm is known for decoding;
3) a generator (or parity-check) matrix of a permutation equivalent code (which serves as the public key) gives no information about the structure of the secret code.

Property 3), in particular, implies that the fast decoding algorithm requires the knowledge of some characteristics or parameters of the secret code that remain hidden in the public code.

The fact that Goppa codes satisfy all these conditions will be recalled in Subsection II-A. Discussion of the similar properties of LDPC codes is instead postponed to Section III, where we will introduce the specific class of QC-LDPC codes considered in this work.

A. The Original McEliece Cryptosystem

Key generation in the original McEliece cryptosystem is based on Goppa codes [25]. For each irreducible polynomial of degree t over GF(2^m), there is a binary Goppa code with length n = 2^m and dimension k ≥ n − t·m, able to correct at most t errors. In addition, there is a fast decoding algorithm for these codes [the Patterson algorithm, which runs in time O(n·t)]. Therefore, a user who wants to receive encrypted messages, say Bob, can choose at random a polynomial of degree t and verify whether it is irreducible or not. The probability that a randomly chosen polynomial is irreducible is about 1/t, and, since there are fast algorithms for testing irreducibility, this selection is easy to do. When a valid polynomial has been selected, Bob can produce a k × n generator matrix G for the code, which can be in reduced echelon (i.e., systematic) form. Matrix G will be part of Bob's private key. The remaining part of the private key is formed by two other matrices: a dense k × k non-singular scrambling matrix S and an n × n permutation matrix P. Both S and P are randomly chosen by Bob, who can hence produce his public key as follows:

G' = S · G · P    (1)

The public key G' is the generator matrix of a linear code with the same rate and minimum distance as that generated by G. It is made available in a public directory and is used to run the encryption algorithm on messages intended for Bob.

Now suppose that Alice wants to send an encrypted message to Bob. She obtains Bob's public key from the public directory and derives the encrypted version of her message through the following encryption algorithm. First of all, the message is divided into k-bit blocks. If u is one of these blocks, its encrypted version is obtained as follows:

x = u · G' + e = c + e    (2)

where e is a locally generated random vector of length n and weight t. Vector e can be seen as an intentional error vector; therefore its weight cannot exceed the code correction capability, in order to allow the reconstruction of the original message when needed. This is true if we suppose that the encrypted message is transmitted over an error-free channel, that is, without unintentional errors [for example, those induced by thermal noise in an additive white Gaussian noise (AWGN) channel]. Otherwise, the channel effect on the encrypted message must be considered, and the weight of e must be reduced, at least in principle, according to an estimate of the maximum number of errors induced by transmission, so that the sum of intentional and unintentional errors does not exceed the code correction capability.

When Bob receives Alice's encrypted message x, he is able to recover the cleartext message through the following decryption algorithm. He first computes x' = x · P^{-1}, where P^{-1} is the inverse of the permutation matrix P (which coincides with its transpose, due to the orthogonality of permutation matrices). Considering Eqs. (1) and (2), the following relationship holds:

x' = x · P^{-1} = (u · S · G · P + e) · P^{-1} = u · S · G + e · P^{-1}    (3)

Therefore, x' is a codeword of the Goppa code chosen by Bob (the one corresponding to the information vector u' = u · S), affected by the error vector e · P^{-1} of weight t. Using Patterson's algorithm, Bob can hence correct the errors and recover u', from which u is easily derived through post-multiplication by S^{-1}.

Goppa codes satisfy conditions 1)-3) listed above and, for this reason, they were considered for inclusion in the cryptosystem in the first place. In particular, the cardinality of Γ for the Goppa codes with n = 1024, k = 524 and t = 50 is close to 2^500, which is an impressively high number. Moreover, as mentioned, a fast algorithm exists for their decoding, which however requires the knowledge of two polynomials (the generator polynomial and the Goppa polynomial) that cannot be easily extracted from the generator matrix. Finally, no efficient algorithm is available that permits extracting the structure of a Goppa code from one of its permuted generator matrices. The latter considerations implicitly certify the security of the cryptosystem based on Goppa codes, which, in fact, is still unbroken today. On the contrary, other families of classic codes, like GRS codes and some types of concatenated codes, have been proved unsuitable for the purpose. More will be said on security issues in Section IV, devoted to the system cryptanalysis.
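To make the key and message flow of Eqs. (1)-(3) concrete, here is a minimal Python sketch in which a Hamming(7,4) code (t = 1) stands in for the Goppa code and plain syndrome decoding stands in for Patterson's algorithm; the matrices, the seed and all variable names are illustrative assumptions, not part of the original system description, and nothing here is meant as a secure implementation.

import numpy as np

def gf2_inv(A):
    """Inverse of a binary matrix over GF(2) via Gauss-Jordan elimination."""
    m = A.shape[0]
    M = np.concatenate([A % 2, np.eye(m, dtype=int)], axis=1)
    for c in range(m):
        piv = next((r for r in range(c, m) if M[r, c] == 1), None)
        if piv is None:
            raise ValueError("matrix is singular over GF(2)")
        M[[c, piv]] = M[[piv, c]]
        for r in range(m):
            if r != c and M[r, c] == 1:
                M[r] = (M[r] + M[c]) % 2
    return M[:, m:]

# Systematic generator and parity-check matrices of a Hamming(7,4) code (toy stand-in)
G = np.array([[1,0,0,0, 1,1,0],
              [0,1,0,0, 1,0,1],
              [0,0,1,0, 0,1,1],
              [0,0,0,1, 1,1,1]])
H = np.array([[1,1,0,1, 1,0,0],
              [1,0,1,1, 0,1,0],
              [0,1,1,1, 0,0,1]])

rng = np.random.default_rng(0)
k, n, t = 4, 7, 1

# Private key: G, a dense non-singular scrambling matrix S and a permutation matrix P
while True:
    S = rng.integers(0, 2, (k, k))
    try:
        S_inv = gf2_inv(S)
        break
    except ValueError:
        pass
P = np.eye(n, dtype=int)[rng.permutation(n)]

G_pub = (S @ G @ P) % 2                 # Eq. (1): G' = S * G * P (the public key)

u = np.array([1, 0, 1, 1])              # a k-bit plaintext block
e = np.zeros(n, dtype=int)
e[rng.integers(n)] = 1                  # intentional error vector of weight t
x = (u @ G_pub + e) % 2                 # Eq. (2): x = u * G' + e

# Decryption, Eq. (3): x' = x * P^{-1} = u * S * G + e * P^{-1}
x1 = (x @ P.T) % 2
s = (H @ x1) % 2                        # syndrome decoding replaces Patterson's algorithm
if s.any():
    err = next(i for i in range(n) if np.array_equal(H[:, i], s))
    x1[err] ^= 1                        # correct the single error
u_scrambled = x1[:k]                    # G is systematic: the first k bits give u' = u * S
u_hat = (u_scrambled @ S_inv) % 2       # post-multiplication by S^{-1}
assert np.array_equal(u_hat, u)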

B. McEliece Cryptosystem with LDPC Codes: First Implementation

The scheme proposed in this subsection is basically derived from [22], with some modifications. The first change is that, in [22], random-based LDPC codes were used, while our study is focused on the class of QC-LDPC codes (see Section III), which are more appealing for shortening the public key length. The second change is that the system in [22] relies on the sparsity of a public parity-check matrix, H', in order to reduce the key length. The present scheme, instead, uses a generator matrix (G') as the public key, and relies only on its quasi-cyclic character to reduce the key length. This also prevents the system from being subject to a particular kind of attack (described in Subsection IV-E).

According to this scheme, Bob, who must receive a message from Alice, chooses a parity-check matrix H from a family Γ with high cardinality. The objective of high cardinality can certainly be achieved through a random generation of (sparse) parity-check matrices (this is the option adopted in [22]). However, we will show in Section III that the same result can also be achieved using QC-LDPC codes designed by means of a randomized Difference Families approach. Bob also chooses an r × r non-singular circulant transformation matrix T, and obtains the new matrix H' = T · H, which obviously has the same null space as H. We will show in Subsection IV-E that, in order to prevent some specific attacks on the system security, matrix T is required to be dense. This in turn implies a dense matrix H'. This is the reason why adopting the parity-check matrix as the public key (as proposed in [22]) turns out to be rather useless; that choice would instead be advantageous if H' maintained the sparse character of H. On the other hand, using generator matrices instead of dense parity-check matrices as public keys is advantageous in terms of key size. In fact, when generator matrices in systematic form are adopted, only part of them (namely k × r bits) must be stored, and this results in a reduction of the key size (in order to store a dense H' matrix, n × r bits are required). In addition, putting a generator matrix directly in the public directory reduces the computational effort required of Alice; at the same time, encoding by means of the generator matrix is particularly simple in the case of QC-LDPC codes, as will be shown in Section III.

So, Bob derives a generator matrix G' corresponding to H' (i.e., satisfying the condition H' · G'^T = 0, with superscript T denoting transposition), in row reduced echelon form, and makes it available in the public directory: G' is Bob's public key. On the contrary, H and T form the private (or secret) key, which is owned by Bob only. The system also requires a k × k non-singular scrambling matrix S, which is suitably chosen and publicly available (it can even be embedded in the algorithm implementation). Matrix S is similar to the homonymous matrix in the original McEliece cryptosystem and has the fundamental role of causing propagation of residual errors at the eavesdropper's receiver, leaving the opponent in the most uncertain condition (equivalent to guessing the plaintext at random). The scrambling action of matrix S reflects directly on the residual errors at the receiver's side, and can be easily predicted through combinatorial arguments. It follows that, in order to ensure a sufficient scrambling effect, matrix S must be dense in its turn [26].

When Alice wants to send an encrypted message to Bob, she fetches G' from the public directory and calculates

G'' = S^{-1} · G'    (4)

Then she divides her message into k-bit blocks and encrypts each block u as in Eq. (2), with G'' in place of G', where, as before, e represents a vector of intentional errors with weight t. It should be noted, however, that, contrary to the case of Goppa codes, it is not easy to explicitly determine the decoding radius of LDPC codes under belief propagation decoding. On the other hand, knowledge of the number of correctable errors is required in order to design the system and to evaluate the complexity of some attacks. So, an estimate of this quantity is mandatory, and it can be obtained through simulations. Examples in this sense will be provided in Subsection II-C.

At the receiver side, Bob uses his private key for decoding. In the ideal case of a channel that does not introduce additional errors, and with a suitable choice of t, all the errors can be corrected with high probability (in the extremely rare case of an LDPC decoder failure, a message resend is requested, as discussed in Section IV). Belief propagation decoding, however, works only on sparse Tanner graphs, free of short-length cycles; therefore, in order to exploit the actual correction capability, the knowledge of the sparse parity-check matrix H is essential. By applying the decoding algorithm to the ciphertext x, Bob can derive the codeword u · G'' = u · (S^{-1} · G'). On the other hand, as G' is in row reduced echelon form, the first k coordinates of this codeword directly reveal u · S^{-1}, and right multiplication by S permits extracting the plaintext u as desired.

An eavesdropper, named Eve, who wishes to intercept the message from Alice to Bob and is not able to derive matrix H from the knowledge of the public key, is, as expected, in a much less favorable position. This also holds for the original version [22], where Eve knows H', which, however, is made unsuitable for LDPC decoding through the action of matrix T.

C. Choice of t for LDPC Codes

Once the code parameters have been fixed, a suitable value for t can be estimated through simulation. An example is shown in Fig. 1 for a code with n = 8000, k = 6000 (rate R = 3/4) and parity-check matrix column weight d_v = 13; actually, this is a QC-LDPC code of the type described in Subsection III-B. The simulation was run by adding t errors (randomly distributed) to each codeword of a sequence long enough to ensure a satisfactory confidence level for the results, and by determining the BER and FER values when decoding with either matrix H or matrix H' = T · H. In the latter case, which represents the opponent's viewpoint in the system proposed in [22], a sparse T (with the same column weight as H) has been adopted; by employing denser T matrices (as may be required for security reasons) the performance of H' would be even worse.

The presence of intentional errors can be modeled in terms of a Binary Symmetric Channel (BSC) with an error probability p = t/n. Actually, this is a special BSC, in the sense that each codeword includes exactly t errors, although randomly distributed. Over such a channel, we have applied the LLR-SPA (Log-Likelihood Ratios Sum-Product Algorithm), which is an optimal technique for LDPC decoding and uses reliability values on a logarithmic scale. The steps of the LLR-SPA decoding procedure (Initialization, Right/Left Semi-Iterations, Decision and Parity-Check) are well known, and are not repeated here for the sake of brevity. The main peculiarity of the BSC lies in the form of the log-likelihood ratio of the a priori probabilities associated with the codeword bit at position i, i.e.,

LLR(x_i) = ln [ P(x_i = 0 | y_i = y) / P(x_i = 1 | y_i = y) ]    (5)

where P(x_i = x | y_i = y), x ∈ {0, 1}, is the probability that the codeword bit x_i at position i is equal to x, given a received signal y_i = y at the channel output. In the case of the BSC, we can write:

P(y_i = 0 | x_i = 1) = p,    P(y_i = 1 | x_i = 1) = 1 − p,
P(y_i = 0 | x_i = 0) = 1 − p,    P(y_i = 1 | x_i = 0) = p    (6)

Through simple algebra, by applying Bayes' theorem, we find:

LLR(x_i | y_i = 0) = ln [(1 − p)/p] = ln [(n − t)/t],
LLR(x_i | y_i = 1) = ln [p/(1 − p)] = ln [t/(n − t)]    (7)

From Fig. 1 we see that, when decoding with H, the number of residual bit and frame errors vanishes rapidly to the left of the knee at t ≈ 120. Because of the waterfall behavior of the curves, we can reasonably state that, for the considered choice of the system parameters and for t = 40 (which is, obviously, impossible to simulate with acceptable statistical confidence), both the BER and the FER assume negligible values, i.e., decoding is practically error free. So, it seems realistic to assume t = 40 as an estimate of the considered LDPC code correction capability. This does not ensure that all possible combinations of t = 40 (or fewer) errors are corrected when decoding with H, but it gives sufficient guarantees that errors are extremely rare. For this code (and for all its equivalent versions) Alice can choose a vector e, in Eq. (2), with weight t = 40.

From the figure we see that, instead, 40 intentional errors cannot be efficiently corrected by using H': an eavesdropper possessing such information, as extracted from the public key G', achieves the same BER performance as an uncoded transmission (under the same assumption of a channel error probability equal to t/n). In other words, her decoder is useless. It should be noted, however, that, from a cryptographic point of view, a BER like the one achievable for t = 40 when using matrix H' is still unacceptably low. So, it is necessary to reinforce bad decoding by means of a scrambling action able to propagate the residual errors left after decoding. This is the role of matrix S: the vector of residual errors is multiplied by S, and this determines an increase in their number that raises the BER up to the desired value of about 0.5.
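As a small numerical illustration of Eqs. (5)-(7), the following Python sketch computes the two fixed a priori LLR values produced by the intentional-error channel, modeled as a BSC with crossover probability p = t/n; the parameters are those of the first example code, and the received word below is just a placeholder.

import numpy as np

n, t = 8000, 40                      # first example code: n = 8000, weight of e equal to t = 40
p = t / n                            # BSC crossover probability seen by the decoder

llr_y0 = np.log((1 - p) / p)         # LLR(x_i | y_i = 0) = ln((n - t) / t), Eq. (7)
llr_y1 = np.log(p / (1 - p))         # LLR(x_i | y_i = 1) = ln(t / (n - t))

y = np.random.randint(0, 2, n)       # placeholder received word
llr = np.where(y == 0, llr_y0, llr_y1)
print(llr_y0, llr_y1)                # only these two values ever enter the decoder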

A second example is shown in Fig. 2, which reports the performance obtained by a QC-LDPC code with length n = 16128, dimension k = 12096 (rate R = 3/4) and parity-check matrix column weight d_v = 13. An important aspect in the practical implementation of LDPC decoders, which influences complexity, is the quantization strategy adopted for the quantities exchanged among nodes. In this kind of application, a common solution is to adopt a midtread quantization scheme for both a priori and a posteriori LLRs. In order to verify the effectiveness of such a strategy, the simulation in Fig. 2 has been carried out considering q = 6 bit midtread quantization and a clipping threshold set to 16. From the figure it is also possible to derive an estimate of the error correction capability of this other code. Moreover, by comparing Figs. 1 and 2, we notice that, as expected, longer codes ensure better correction capability, even when the effects of quantization are considered: the new code exhibits a waterfall knee at t ≈ 240.

III. QUASI-CYCLIC LDPC CODES

Quasi-cyclic codes, and QC-LDPC codes in particular, are good candidates for use in the McEliece cryptosystem. Although the public key G' in the system of Subsection II-B is dense, its cyclic structure allows G' to be completely described by means of a small number of bits which, furthermore, increases only linearly with the code length. This is potentially a great advantage with respect to the original McEliece cryptosystem, where the key length is k × n bits; obviously, the cryptanalysis shall verify whether the value of n must be increased or not when using LDPC codes, but the advantage of passing from a quadratic dependence O(n^2) to a linear dependence O(n) on the code length will remain.

Quasi-cyclic codes have been studied for many years [27], but they did not meet with great success in the past because of their inherent decoding complexity in classic implementations. Nowadays, however, the encoding facilities of quasi-cyclic codes can be combined with new efficient LDPC decoding techniques, to the point that QC-LDPC codes have recently been included in important telecommunication standards [20], [21].

The dimension k and the length n of a QC code are both multiples of a positive integer p, i.e., k = p·k_0 and n = p·n_0; the information vector u = [u_0, u_1, ..., u_{k−1}] and the codeword vector c = [c_0, c_1, ..., c_{n−1}] can be divided into p sub-vectors of size k_0 and n_0, respectively, so that u = [u_0, u_1, ..., u_{p−1}] and c = [c_0, c_1, ..., c_{p−1}]. The distinctive characteristic of QC codes is that every cyclic shift of a codeword by n_0 positions yields another codeword; since every cyclic shift of a codeword by n_0 positions corresponds to a cyclic shift of the information word by k_0 positions, it can easily be shown that quasi-cyclic codes are characterized by the following form of the generator matrix G, where each block G_i has size k_0 × n_0:

\[
\mathbf{G} = \begin{bmatrix}
G_0 & G_1 & \cdots & G_{p-1} \\
G_{p-1} & G_0 & \cdots & G_{p-2} \\
\vdots & \vdots & \ddots & \vdots \\
G_1 & G_2 & \cdots & G_0
\end{bmatrix} \tag{8}
\]

This allows an efficient encoder implementation, consisting of a barrel shift register of size p, followed by a combinatorial network and an adder, as shown in Fig. 3. It can easily be proved that the parity-check matrix H is also characterized by the same "circulant of blocks" form, in which each block H_i has size (n_0 − k_0) × n_0 = r_0 × n_0. A row and column rearrangement can be applied that yields the alternative "block of circulants" form, here shown for the parity-check matrix H:

\[
\mathbf{H} = \begin{bmatrix}
H^c_{0,0} & H^c_{0,1} & \cdots & H^c_{0,n_0-1} \\
H^c_{1,0} & H^c_{1,1} & \cdots & H^c_{1,n_0-1} \\
\vdots & \vdots & \ddots & \vdots \\
H^c_{r_0-1,0} & H^c_{r_0-1,1} & \cdots & H^c_{r_0-1,n_0-1}
\end{bmatrix} \tag{9}
\]

In this expression, each block H^c_{i,j} is a p × p circulant matrix:

\[
H^c_{i,j} = \begin{bmatrix}
a^{i,j}_0 & a^{i,j}_1 & a^{i,j}_2 & \cdots & a^{i,j}_{p-1} \\
a^{i,j}_{p-1} & a^{i,j}_0 & a^{i,j}_1 & \cdots & a^{i,j}_{p-2} \\
a^{i,j}_{p-2} & a^{i,j}_{p-1} & a^{i,j}_0 & \cdots & a^{i,j}_{p-3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a^{i,j}_1 & a^{i,j}_2 & a^{i,j}_3 & \cdots & a^{i,j}_0
\end{bmatrix} \tag{10}
\]

and can therefore be associated with a polynomial a_{i,j}(x) ∈ GF(2)[x] mod (x^p + 1), with maximum degree p − 1 and coefficients taken from the first row of H^c_{i,j}:

a_{i,j}(x) = a^{i,j}_0 + a^{i,j}_1 x + a^{i,j}_2 x^2 + a^{i,j}_3 x^3 + ... + a^{i,j}_{p−1} x^{p−1}    (11)

Among the several options available to design QC-LDPC codes, we have focused our attention on two: the first one is based on circulant permutation matrices, which is the solution recently proposed, for example, in the IEEE 802.16e standard for mobile WiMAX [20]. The second one adopts the Difference Families approach to construct parity-check matrices with a single row of circulants. The main features of both methods are described next.

A. QC-LDPC codes based on Circulant Permutation Matrices

A widespread family of QC-LDPC codes has parity-check matrices in which each block H^c_{i,j} = P_{i,j} is a circulant permutation matrix or the null matrix of size p; circulant permutation matrices can be represented through the value p_{i,j} of their first-row shift. Many authors have proposed code construction techniques based on this approach (see [28] and [29], for example) and LDPC codes based on circulant permutation matrices have even been included in the IEEE 802.16e standard [20], thus confirming their recognized error correction performance. Another reason for the good fortune of these codes is their ease of implementation [30], [31]. The parity-check matrix of these codes can be represented through a model matrix H_m, of size r_0 × n_0, containing the shift values p_{i,j} (p_{i,j} = 0 represents the identity matrix while, conventionally, p_{i,j} = −1 represents the null matrix). The code rate of such codes is R = k_0/n_0 and it can be varied arbitrarily through a suitable choice of r_0 and n_0. On the other hand, the local girth length for these codes cannot exceed 12, and the imposition of a lower bound on the local girth length reflects in a lower bound on the code length [29].

The rows of a permutation matrix sum to the all-one vector; so, when there are no null matrices among the H^c_{i,j} blocks, these parity-check matrices cannot have full rank. In this case, every parity-check matrix contains at least r_0 − 1 rows that are linearly dependent on the others, and the maximum rank is r_0(p − 1) + 1. A common solution to ensure full rank consists in inserting null matrices in such a way as to impose a lower triangular (or quasi-lower triangular) form on the matrices, similarly to what is done in [20].
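As a sketch of the block structures of Eqs. (9)-(11) and of the model-matrix representation described above, the following Python snippet expands a model matrix of shift values into a full parity-check matrix; the block size and the shift values below are made-up illustrative choices, not taken from any standard.

import numpy as np

def circulant(first_row):
    """p x p circulant matrix built as in Eq. (10): each row is the previous one shifted right."""
    p = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(p)])

def perm_circulant(p, shift):
    """Circulant permutation matrix with first-row shift 'shift'; -1 denotes the null block."""
    if shift == -1:
        return np.zeros((p, p), dtype=int)
    row = np.zeros(p, dtype=int)
    row[shift % p] = 1
    return circulant(row)

p = 7                                  # illustrative block size
H_m = [[0, 2, -1, 5],                  # hypothetical model matrix (r0 = 2, n0 = 4)
       [3, -1, 1, 0]]
H = np.block([[perm_circulant(p, s) for s in row] for row in H_m])
print(H.shape)                         # (r0 * p, n0 * p) = (14, 28)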

B. QC-LDPC codes based on Difference Families

When designing a QC-LDPC code, the parity-check matrix H should have some properties that optimize the behavior of the belief propagation-based decoder. First of all, H must be sparse, and this reflects on the maximum number of non-zero coefficients of each polynomial a_{i,j}(x). Then, short cycles must be avoided in the Tanner graph associated with the code. The latter requirement can be ensured, as demonstrable through algebraic considerations, when H is a single row of circulants (i.e., r_0 = 1):

H = [ H_0  H_1  ...  H_{n_0−1} ]    (12)

and the corresponding code rate is R = (n_0 − 1)/n_0 (these fractional values are very common in practical applications and ensure sufficient rate granularity, especially for values near unity). In this case, in order to have a non-singular parity-check matrix, at least one of the blocks H_i, i ∈ [0; n_0 − 1], must be non-singular, which means that its associated polynomial must not be a zero divisor in the ring of polynomials mod (x^p + 1) over GF(2). Without loss of generality, we can suppose the last block, H_{n_0−1}, to be non-singular (if necessary, this can be obtained through suitable permutations). In such a case, a systematic form of the code generator matrix G is as follows:

\[
\mathbf{G} = \left[\; \mathbf{I} \;\middle|\;
\begin{matrix}
\left(H_{n_0-1}^{-1} H_0\right)^T \\
\left(H_{n_0-1}^{-1} H_1\right)^T \\
\vdots \\
\left(H_{n_0-1}^{-1} H_{n_0-2}\right)^T
\end{matrix}
\;\right] \tag{13}
\]

If (G, +) is a finite group and D a subset of G, ΔD = {x − y : x, y ∈ D, x ≠ y} is the collection of all differences of distinct elements of D. Given a positive integer s and a multi-set M, sM = ⋃_{i=1}^{s} M is defined as s copies of M.

Let v be the order of the group G, and µ and λ positive integers, with v > µ ≥ 2. A (v, µ, λ)-difference family (DF) is a collection {D_0, ..., D_{n_0−1}} of µ-subsets of G, called base-blocks, such that:

⋃_{i=0}^{n_0−1} ΔD_i = λ (G \ {0})    (14)

In other terms, every non-zero element of G appears exactly λ times as a difference of two elements from a base-block. As already shown in the literature [32], difference families can be used to construct QC-LDPC codes. In particular, considering the difference family {D_0, ..., D_{n_0−1}}, a code based on it has the following form for the polynomials of the circulant matrices {H_0, ..., H_{n_0−1}}:

a_i(x) = Σ_{j=1}^{d_v} x^{d_{ij}},    i ∈ [0; n_0 − 1]    (15)

where d_{ij} is the j-th element of D_i, whose cardinality equals the column weight, d_v, of the parity-check matrix (the row weight is d_c = n_0 · d_v). With this choice, the designed matrix H is regular and all the elements in the difference family are used. It can be shown [32] that, by using difference families with λ = 1 in construction (15), the resulting code has a Tanner graph free of 4-length cycles.

Some theorems ensure the existence of difference families with λ = 1, but they apply only to particular values of the group order, thus putting heavy constraints on the code length. We have recently proposed to overcome such constraints by relaxing part of the hypotheses of these theorems and then refining the outputs through simple computer searches, in a Pseudo Difference Families approach [33]; other authors have proposed an alternative technique based on Extended Difference Families, which ensures great flexibility in the code length [34]. Moreover, a multi-set with the properties of a difference family can be constructed by a constrained random choice of its elements; we call this approach a Random Difference Family or RDF. The constraints serve to ensure the difference family character of each set, namely that the designed codes do not have 4-length cycles. Through simple combinatorial arguments (details are given in the Appendix), we can obtain the following lower bound on the cardinality of a set of codes based on RDFs, with fixed values of n_0, d_v and p:

\[
N(n_0, d_v, p) \geq \frac{1}{p}\binom{p}{d_v}^{n_0} \prod_{l=0}^{n_0-1} \prod_{j=1}^{d_v-1}
\frac{p - j\left[2\,(p \bmod 2) + \left(j^2-1\right)/2 + l\,d_v\left(d_v-1\right)\right]}{p - j} \tag{16}
\]

This number can be impressively high: this is the case, for example, for n_0 = 4, d_v = 11 and p = 4032. For other values of the parameters, however, the simplifying assumptions that permit the derivation of (16) are too strong and, correspondingly, the lower bound becomes too loose. This occurs, for example, with the following choice of the parameters: n_0 = 4, d_v = 13 and p = 4032, which will be of interest in the following. In these cases, we are equally able to prove the high cardinality, but using different arguments. This other procedure, which, in essence, consists in an explicit enumeration of subsets, is also described in the Appendix; its application to the above values of the parameters has provided N(n_0 = 4, d_v = 13, p = 4032) ≥ 2^95, which, though smaller than in the previous example (with d_v = 11), remains very high.

As discussed in Section II, the availability of very large families of codes with equivalent correction capability is a prerequisite for their application in the McEliece cryptosystem. Actually, for fixed n_0, d_v and p, the error correction performance of codes based on RDFs is equivalent, since they share the characteristics that most influence belief propagation decoding, namely: i) code length and rate, ii) parity-check matrix density, iii) node degree distributions and iv) girth length distribution in the Tanner graph associated with the code. As the considered codes have length n and rate R = (n_0 − 1)/n_0, property i) is simply ensured. An odd integer d_v > 2 is chosen as the column weight of matrix H, so that property ii) is verified too. Since circulant matrices are regular, the resulting parity-check matrix is regular in both row and column weights; so all codes in a family have the same node degree distributions and property iii) is verified. Finally, when a circulant matrix has row (column) weight greater than 2 and it does not induce 4-length cycles, it can be shown [26] that it corresponds to a Tanner graph with constant local girth length equal to 6, so property iv) is satisfied as well.

A second condition concerns the ability to realize decoding through an efficient decoding algorithm; this is true for LDPC codes, thanks to the theoretical and practical effectiveness of the belief propagation algorithm. Finally, as regards the third condition, that is, the guarantee that the public code does not permit obtaining enough information on the structure of the secret code, we have already shown in Section II that even decoding with H' is highly inefficient. But this aspect, that is, the property of the public key of keeping the structure of the secret key hidden, must be discussed in much more detail in terms of cryptanalysis. This is the object of Section IV, where an overview of the most dangerous attacks that can be mounted against the cryptosystem is presented.

C. Complexity

The encryption and decryption complexity of the McEliece cryptosystem is dominated by the number of operations required for encoding and decoding, respectively. Actually, finding explicit expressions for these numbers is usually not simple, as they depend, obviously, on the algorithm adopted, but also on several implementation issues whose generalization is difficult.

In the case of LDPC codes, encoding can be performed with common techniques for linear block codes, for example the standard map consisting in multiplying the information vector by the code generator matrix. This procedure applies to any linear block code of length n and dimension k, and, in this case, the number of binary operations required to encrypt a frame can be evaluated as follows [24]:

C_enc = n·k/2 + n    (17)

where n binary operations are considered for the addition of vector e. This formula, however, holds for a generic linear block code of length n and dimension k. In the case of QC-LDPC codes, instead, efficient algorithms for polynomial multiplication over GF(2)[x] mod (x^p + 1) can be adopted, like the Toom-Cook algorithm [35]. If we denote by C_pm(p) the number of binary operations required to multiply two polynomials over GF(2)[x] mod (x^p + 1), the number of operations needed to encode a frame results in:

C_enc = n_0 [k_0 · C_pm(p) + (k_0 − 1) · p] + n    (18)

For the case, considered in the following, of p = 4032, we can assume for C_pm(p) the estimate of the number of binary operations required for one recursion of Toom-4, two recursions of Toom-3 and two recursions of Toom-2, which is an optimum combination to minimize the complexity [36].

LDPC decoding complexity can be estimated as the number of binary operations needed to decode a received frame by means of the LLR-SPA decoder. Its average value can therefore be expressed as:

C_SPA = I_ave (C_lsi + C_rsi + C_pc)    (19)

where I_ave is the average number of decoding iterations and C_lsi, C_rsi, C_pc represent the number of binary operations required by the Left Semi-Iteration, Right Semi-Iteration, and Decision and Parity-Check steps of the LLR-SPA. The initialization step is not taken into account since, for the considered channel, as shown in Subsection II-C, the initial LLRs can assume only two fixed values. With reference to the serial implementation proposed in [37], we can consider that each check node requires 3(d_c − 2) core operations. Each of them can be implemented as a two-way comparison and an addition with a constant, for a total of 2 operations between q-bit values.

On the other hand, each symbol node update requires 2·d_v + 1 additions of q-bit values. Finally, each parity check requires d_c binary additions; therefore, we have:

C_lsi = 3(d_c − 2) · 2q · r = 6(d_c − 2) · q · r
C_rsi = (2·d_v + 1) · q · n
C_pc = d_c · r    (20)

In order to evaluate the decryption complexity, we must also add the number of binary operations required to perform the post-multiplication by S, denoted by C_S. This way, we obtain that the number of binary operations required to decrypt a received frame can be expressed as follows:

C_dec = I_ave · n · [q(8·d_v + 12·R − 11) + d_v] + C_S    (21)

where Eqs. (19) and (20) have been considered and the relationships r = n(1 − R) and d_c = d_v/(1 − R) have been used. If matrix S is assumed regular, with row/column weight equal to s, then C_S can be estimated as k·s, which corresponds to employing classic multiplication. However, when S is not sparse, as in the present case, it is convenient to adopt again optimized algorithms for polynomial multiplication. In this case, we separate S into its circulant blocks and obtain C_S = k_0 [k_0 · C_pm(p) + (k_0 − 1) · p].

Eqs. (18) and (21) are rather general, and in fact apply without specific restrictions to a vast class of QC-LDPC codes. We will use them for the first example of a McEliece cryptosystem based on QC-LDPC codes we have designed (see Table I in Subsection IV-G). However, Eq. (21) will be slightly modified in the last part of the paper, where a new scheme will be introduced in order to resist attacks on the system security.

IV. CRYPTANALYSIS

The original McEliece cryptosystem and its variants have been subjected to many different attacks aiming to verify their security against the risk of unauthorized intrusions. The long list of attacks includes (but is not limited to):

- Brute force attacks
- Attacks based on Information Set Decoding
- Message-Resend and Related-Message attacks
- Attacks based on Equivalence Classes
- Attacks based on Minimum Weight Codewords

As mentioned, no evidence of a total break of the original cryptosystem has been given up to now. Indeed, several efforts have been devoted to improving the efficiency of local deductions, i.e., attacks aimed at finding the plaintext of intercepted ciphertexts without knowing the secret key. Actually, this problem is identical to that of finding codewords with minimum weight (or a given weight) in a general linear code. Such a problem, traditionally considered very important in coding theory, has recently benefited from a number of theoretical and practical advances (see [24], [38] for example) which, however, have not modified the scene in a radical way; as a matter of fact, the potential vulnerability to attacks based on minimum weight codewords has been overcome by changing the code parameters, as in [39], where the adoption of a Goppa code with length n = 2048, k = 1608 and t = 40 was proposed in place of the original choice. Similarly, a proven flaw of the system [40] in protecting messages that are sent to a recipient more than once using different random error vectors, or even in protecting messages with a known linear relationship among them, has been solved by introducing a simple hash function of the error vector e [41].

The complexity of an attack is typically measured by the work factor, defined as the average number of binary operations required for a success. It seems realistic to say that work factors of 2^80 or larger are great enough to ensure that the system is secure at the current state of technology.

Obviously, for the sake of brevity, it is not possible to describe in detail all the attacks against the McEliece cryptosystem; we refer the interested reader to the list of quoted references, where others are included. In the remainder of this section, we limit ourselves to presenting the attacks we have found to be the most dangerous for the variant of the McEliece cryptosystem employing LDPC codes, and QC-LDPC codes in particular. Based on one of these attacks, we will prove that codes based on circulant permutation matrices (the option described in Subsection III-A) certainly cannot be applied in the cryptosystem, since for them it is very easy to realize a total break. However, for all kinds of LDPC codes, we will find a killer attack, which will drive us to propose, in Section V, a new implementation.
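Before examining the individual attacks, it may be useful to turn the complexity expressions (18) and (21) of Subsection III-C into a small Python calculator; this is only a sketch, and the values assigned below to C_pm(p) and I_ave are placeholders, since the text leaves them to implementation-dependent estimates (Toom-Cook recursion depth, average number of decoding iterations).

def enc_complexity(n0, k0, p, C_pm):
    """Binary operations to encrypt one frame, Eq. (18)."""
    n = n0 * p
    return n0 * (k0 * C_pm + (k0 - 1) * p) + n

def dec_complexity(n0, k0, p, dv, q, I_ave, C_pm):
    """Binary operations to decrypt one frame, Eq. (21), with S made of k0 x k0 circulant blocks."""
    n = n0 * p
    R = k0 / n0
    C_S = k0 * (k0 * C_pm + (k0 - 1) * p)
    return I_ave * n * (q * (8 * dv + 12 * R - 11) + dv) + C_S

# QC-LDPC parameters used in the paper; C_pm and I_ave below are placeholder values only
print(enc_complexity(n0=4, k0=3, p=4032, C_pm=60_000))
print(dec_complexity(n0=4, k0=3, p=4032, dv=13, q=6, I_ave=10, C_pm=60_000))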

A. Brute force attacks

The first kind of known attacks are brute force attacks. As already shown, enumeration of the code set is too demanding; therefore a brute force attack on H can be excluded. Even the number of randomly chosen (dense) T matrices is extremely high. Other kinds of brute force attacks (e.g., trying to decode the ciphertext, even using coset leaders) are unfeasible as well, as in the original McEliece cryptosystem.

However, the "row of circulants" form of the parity-check matrices, proposed in Subsection III-B, exposes them to another kind of brute force attack. In fact, if we consider a single circulant block of H, say H_i, and its corresponding block in H', say H'_i, we have H'_i = T · H_i. This, for H_i non-singular, implies T = H'_i · H_i^{-1}. Therefore, an eavesdropper could try to guess a single block H_i, calculate its inverse H_i^{-1} and multiply the corresponding block H'_i by H_i^{-1}, thus obtaining T and, consequently, H. The work factor of such an attack is > N(1, d_v, p) (the number of possible alternatives for each block H_i). For the case of p = 4032 and d_v = 13, here of interest, we can estimate, by means of Eq. (16), N(1, 13, 4032) ≈ 2^110, which is a value high enough to discourage such an approach.

B. Attacks based on Information Set Decoding

A low complexity attack, in the form of a local deduction, already mentioned in [2], has been further investigated and improved in the subsequent literature; it is based on a flaw in the random choice of vector e. In fact, the eavesdropper could select, at random, a part of the ciphertext not affected by any intentional error. The information vector u is a k-bit vector; therefore, only k elements of x and e, chosen at fixed (for example, the first k) positions, can be considered, together with their corresponding k columns of G'. Using the subscript k to denote the vectors and the matrix reduced in size, Eq. (2) can be rewritten as:

x_k = u · G'_k + e_k    (22)

If, by random choice, a vector e having all zero symbols in the chosen k positions is selected, then e_k = 0 and x_k = u · G'_k. Therefore, the eavesdropper, who knows G'_k, can easily obtain the cleartext message by evaluating u = x_k · (G'_k)^{-1} (we assume G'_k is non-singular, even if this is always true only for maximum distance separable codes).

Lee and Brickell, in [42], proposed a generalization of this attack, characterized by reduced complexity. Their main idea is to exploit Eq. (22) even when e_k ≠ 0. In this case, u = x_k · (G'_k)^{-1} + e_k · (G'_k)^{-1}, e_k being a random k-bit error pattern with weight ≤ t. In principle, all possible u should be considered and tested through a systematic procedure [42]; however, Lee and Brickell showed that considering only a subset of all possible e_k vectors, namely those with weight less than or equal to a given integer j, can be convenient for the eavesdropper. The work factor of Lee and Brickell's algorithm can be evaluated as follows:

W_j = T_j (α·k^3 + N_j·β·k)    (23)

where

\[
T_j = \left[\sum_{i=0}^{j} \binom{t}{i}\binom{n-t}{k-i} \Big/ \binom{n}{k}\right]^{-1}, \qquad
N_j = \sum_{i=0}^{j} \binom{k}{i},
\]

and α and β are integers ≥ 1. Assuming α = β = 1 (the most favorable condition for an opponent), n = 16128 and k = 12096, we have calculated W_j for different values of j and t. Results are shown in Fig. 4, where we notice that, for the considered case, the choice j = 2 yields the minimum work factor for the attack. For j = 2, t > 25 implies W_2 > 2^80, which is high enough to discourage the attack.

C. Message-Resend and Related-Message Attack

Berson [40] proved that attacks based on Information Set Decoding become very easy for the McEliece cryptosystem when the same message is encrypted more than once (with different error vectors), or in the case of messages with a known linear relation among them. The same does not hold for the Niederreiter cryptosystem, which is immune to such attacks [43].

When adopting LDPC codes, for a fixed value of t, Bob's decoder may occasionally be unable to correct all the errors. In this case, which however occurs with extremely low probability, the parity check fails and the uncorrected errors are detected. So, message resending could be necessary. The McEliece cryptosystem, however, can be slightly modified in order to resist Berson-like attacks. In [41] it is proposed to change the encryption map as follows: x = [u + h(e)] · G' + e, where h is a one-way hash function with e as input and a k-bit output. Its contribution must obviously be removed by Bob after successful decoding: u = [u + h(e)] + h(e).
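Returning to the Lee and Brickell work factor of Eq. (23) in Subsection IV-B, the following Python sketch reproduces the kind of evaluation behind Fig. 4, with α = β = 1 and the code parameters used in the text; the computation is done in the log domain only to avoid floating-point overflow with such large binomials.

from math import comb, log2

def lee_brickell_log2_wf(n, k, t, j, alpha=1, beta=1):
    """log2 of the Lee-Brickell work factor W_j of Eq. (23)."""
    hits = sum(comb(t, i) * comb(n - t, k - i) for i in range(j + 1))   # denominator of T_j (times C(n,k))
    N_j = sum(comb(k, i) for i in range(j + 1))
    return log2(comb(n, k)) - log2(hits) + log2(alpha * k**3 + N_j * beta * k)

n, k = 16128, 12096
for t in (25, 26, 30, 40):
    print(t, round(lee_brickell_log2_wf(n, k, t, j=2), 1))   # for t > 25 the exponent exceeds 80, as stated in the text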

D. Attacks based on Minimum Weight Codewords

Let us consider the generator matrix obtained by extending Bob's public generator matrix G' with an intercepted ciphertext x as an additional row; it generates an (n, k + 1) linear block code. It can be shown that this code contains only one minimum weight codeword, and such a codeword exactly coincides with e [26]; therefore, if known, it would permit obtaining the plaintext u corresponding to x.

A well known probabilistic algorithm for finding low weight codewords is due to Stern [44], and exploits an iterative procedure. If we suppose that the algorithm is performed on a linear block code with length n and dimension k, the probability of finding, in one iteration, a codeword with weight w, supposing it is unique, is given by:

\[
P_w = \frac{\binom{w}{g}\binom{n-w}{k/2-g}}{\binom{n}{k/2}} \cdot
\frac{\binom{w-g}{g}\binom{n-k/2-w+g}{k/2-g}}{\binom{n-k/2}{k/2}} \cdot
\frac{\binom{n-k-w+2g}{l}}{\binom{n-k}{l}} \tag{24}
\]

where g and l are two parameters whose optimal values can be derived by considering their influence on the total number of binary operations. Therefore, the average number of iterations needed in order to find such a codeword is c = P_w^{-1}. It can be considered that each iteration of the algorithm requires

\[
B = \frac{r^3}{2} + k r^2 + 2 g l \binom{k/2}{g} + 2 g r \frac{\binom{k/2}{g}^2}{2^l} \tag{25}
\]

binary operations; so, the total work factor is W = c·B. In the case of the original McEliece cryptosystem, we have n = 1024, k = 524 and w = t = 50; for this choice of the system parameters, the minimum work factor is obtained with (g, l) = (3, 28). A similar estimate, for the Goppa codes of the original cryptosystem, can also be found by means of the approach proposed in [24], thus confirming that such an attack could pose a threat to the security of the original cryptosystem.

However, the problem of finding minimum weight codewords is mostly unsolved for the case of LDPC codes. Actually, some algorithms have been studied [45], but they are not effective on long codes, like those adopted here. Moreover, these algorithms exploit iterative decoding, while parity-check matrices suitable for such decoding are not available for the public code. An adaptation of the probabilistic algorithm originally proposed by Stern, which does not rely on iterative decoding, has been applied to LDPC codes by Hirotomo et al. [46]. Even if this approach seems to outperform the one based on iterative decoding, it is hard to find minimum weight codewords using Stern's algorithm for code lengths n ≥ 2048, as the authors themselves acknowledge. For n = 4096 and k = 2048 (which are smaller than the values proposed here), the work factor is already prohibitive; even considering the quasi-cyclic property, such values of the work factor discourage the attack. On the contrary, the search for low-weight codewords becomes very dangerous for the system security when applied to the dual of the secret LDPC code, as will be described in Subsection IV-H.

E. Density Reduction Attacks

These attacks, already conjectured in [22], are effective against systems that directly expose an LDPC matrix for the public code. This could occur for the cryptosystem proposed in [22], where H is the private LDPC matrix and H' = T · H is the public parity-check matrix, visible to an eavesdropper. T is the transformation matrix used to hide the structure of H, and we suppose it to be regular with row (or column) weight z. When adopting QC-LDPC codes, multiplication by T should preserve the quasi-cyclic character of H'; therefore T should have the form of a p × p circulant matrix. If T is sparse, H' is sparse as well.

Let h_i be the i-th row of matrix H and h'_j the j-th row of matrix H', and let (GF(2)^n, +, ·) be the vector space of all possible binary n-tuples with the operations of addition (i.e., the logical XOR) and multiplication (i.e., the logical AND). Let us define orthogonality in the following sense: two binary vectors u and v are orthogonal, i.e., u ⊥ v, iff u · v = 0. From the cryptosystem description, it follows that h'_j = h_{i_1} + h_{i_2} + ... + h_{i_z}, where i_δ, with δ ∈ [1; z], is the position of the δ-th non-zero entry in the j-th row of T. We can suppose that many h_i are mutually orthogonal, due to the sparsity of matrix H.

Let h'_{j_a} = h_{i_1^a} + h_{i_2^a} + ... + h_{i_z^a} and h'_{j_b} = h_{i_1^b} + h_{i_2^b} + ... + h_{i_z^b} be two distinct rows of H', with h_{i_1^a} = h_{i_1^b} = h_{i_1} [which occurs when T has two non-zero entries in the same column (i_1), at rows j_a and j_b]. In this case, for small values of z (that is, for T sparse), it may happen with non-negligible probability that h'_{j_a} · h'_{j_b} = h_{i_1} (this occurs, for example, when h_{i_1^a} ⊥ h_{i_2^b}, ..., h_{i_1^a} ⊥ h_{i_z^b}, ..., h_{i_z^a} ⊥ h_{i_1^b}, ..., h_{i_z^a} ⊥ h_{i_z^b}). Therefore, a row of H could be derived as the product of two rows of H'. At this point, if the code is quasi-cyclic with the considered form, its whole parity-check matrix can be derived as well, due to the fact that the other rows of H are simply block-wise circularly shifted versions of the one obtained through the attack.

Even when the analysis of all the possible couples of rows of H' does not reveal a row of H, it may produce a new matrix, H'', sparser than H', able to allow efficient LDPC decoding. Alternatively, the attack can be iterated on H'' and it can succeed after a number of iterations > 1; in general, the attack requires ρ − 1 iterations when not less than ρ rows of H' have a single row of H in common. This attack procedure can even be applied to a single circulant block of H', say H'_i, to derive its corresponding block H_i of H, from which T = H'_i · H_i^{-1} can be obtained.

We have verified [26] that the attack can be avoided through a proper selection of matrix T, but this approach forces constraints on the code parameters. In essence, only matrices T able to ensure that each sub-block of a row of H' does not contain any complete sub-block of a row of H are acceptable. This means that, if matrix T is generated at random, a test is necessary to verify whether this condition is satisfied or not. The complexity of this (additional) search depends on the number of acceptable matrices with respect to the total number of matrices that can be randomly generated; in practice, this ratio represents the probability P_T of finding an acceptable T through a random choice. P_T can be estimated through combinatorial arguments [26] and, for fixed z, it decreases for increasing rate.

A much simpler approach to combat the density reduction attack consists in using dense T matrices, which is the solution also proposed in [22]. The rationale of this approach, for the classes of LDPC codes considered in this paper, is as follows. Let us suppose that the attack is carried out on the single block H'_i (it can easily be generalized to the whole H'). The first iteration of the attack, for the considered case of a circulant H'_i, is equivalent to calculating the periodic autocorrelation of the first row of H'_i. When H'_i is sparse (i.e., T is sparse) the autocorrelation is everywhere null (or very small), except for a limited number of peaks that reveal the couples of rows of H'_i able to give information on the structure of H_i. On the contrary, when H'_i is dense (say, with one half of its symbols equal to 1), the autocorrelation is always high, and no information is available to the opponent. In this case, Eve is in the same condition as guessing at random.

This attack is not effective against the modified version, described in Subsection II-B, of the system proposed in [22], since a generator matrix G' is used as the public key instead of the parity-check matrix H', and the latter cannot be derived from the former. In this case, having a dense

F. Attack to codes based on Circulant Permutation Matrices

In the previous subsection we have verified that the assumption of a sufficiently dense matrix T can guarantee the inapplicability of an attack aimed at finding single rows of H. In this subsection, however, we demonstrate that, independently of the density of T, QC-LDPC codes having the form described in Subsection III-A cannot be used in the considered cryptosystem, since a total break attack is possible, able to recover the private key with high probability and low complexity.

Let the private key H be formed by r_0 × n_0 circulant permutation (or null) matrices P_{i,j} of size p, with its rightmost part in block lower triangular form, to ensure full rank. For the sake of clarity, we consider a simple example with r_0 = 3 and n_0 = 6:

      [ P_{0,0}  P_{0,1}  P_{0,2}  P_{0,3}    0        0     ]
H =   [ P_{1,0}  P_{1,1}  P_{1,2}  P_{1,3}  P_{1,4}    0     ]     (26)
      [ P_{2,0}  P_{2,1}  P_{2,2}  P_{2,3}  P_{2,4}  P_{2,5} ]

We assume that all the P_{i,j}'s are non-null, but this assumption does not affect the subsequent conclusions. Let T consist of r_0 × r_0 square circulant dense blocks T_{i,j}, each of size p:

      [ T_{0,0}  T_{0,1}  T_{0,2} ]
T =   [ T_{1,0}  T_{1,1}  T_{1,2} ]     (27)
      [ T_{2,0}  T_{2,1}  T_{2,2} ]

Although exact knowledge of the private key would ensure correct decoding, for the eavesdropper, who knows H' = T·H, it suffices to find a couple of matrices (T_d, H_d), with the same sizes as (T, H), such that H' = T_d·H_d and H_d is sparse enough to allow efficient belief propagation decoding. This can be accomplished by considering that, given an invertible square matrix Z of size r = p·r_0, the following relationship holds:

H' = T·H = T·Z·Z^{-1}·H = T_d·H_d     (28)

where T_d = T·Z and H_d = Z^{-1}·H. If we separate H into its left (r × k) and right (r × r) parts, H = [H_l | H_r], a particular choice of Z coincides with the block lower triangular part of H, i.e., for the considered example:

            [ P_{0,3}    0        0     ]
Z = H_r =   [ P_{1,3}  P_{1,4}    0     ]     (29)
            [ P_{2,3}  P_{2,4}  P_{2,5} ]

With this choice of Z, H_d assumes a particular form, namely:

H_d = [H_dl | H_dr] = H_r^{-1}·[H_l | H_r] = [H_r^{-1}·H_l | I]     (30)

where the right part of H_d is an identity matrix of size r = p·r_0. From this, it follows that:

H' = T_d·H_d = [T_d·H_dl | T_d]     (31)

Eq. (31) is the starting point for the eavesdropper: looking at H', Eve immediately knows T_d and, hence, H_d, both corresponding to the choice of Z expressed by Eq. (29). Moreover, it can be proved that H_d, so found, can be sparse enough to allow efficient belief propagation decoding. Because of the definition of H_d, its sparsity depends on that of Z^{-1}. Actually, for the considered example, we have:

                  [ V_{0,0}    0        0     ]
Z^{-1} = H_r^{-1} = [ V_{1,0}  V_{1,1}    0     ]     (32)
                  [ V_{2,0}  V_{2,1}  V_{2,2} ]

where

V_{0,0} = P_{0,3}^T,  V_{1,1} = P_{1,4}^T,  V_{2,2} = P_{2,5}^T,
V_{1,0} = P_{1,4}^T·P_{1,3}·P_{0,3}^T,
V_{2,1} = P_{2,5}^T·P_{2,4}·P_{1,4}^T,                                   (33)
V_{2,0} = P_{2,5}^T·P_{2,3}·P_{0,3}^T + P_{2,5}^T·P_{2,4}·P_{1,4}^T·P_{1,3}·P_{0,3}^T

and the property of permutation matrices, whose inverse coincides with their transpose, has been used. It follows that the main diagonal of Z^{-1} contains permutation matrices; the diagonal below it contains products of permutation matrices, i.e., permutation matrices again; the next one contains sums of permutation matrices, that have, at most, column (row) weight equal to 2.
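The chain of relations (28)-(31) can be verified numerically on a toy instance. The sketch below is illustrative Python (it is not the Octave/Matlab program of [26]); block size and all random choices are arbitrary. It builds a small H of the form (26) and a dense block-circulant T, and checks that the rightmost r columns of H' = T·H coincide with T_d, so that H_d = T_d^{-1}·H' is immediately available to Eve and is noticeably sparser than H'.

# Toy check of Eqs. (28)-(31); all sizes and random choices are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
p, r0, n0 = 16, 3, 6
r, n = p * r0, p * n0

def circ(first_row):
    """Binary circulant matrix built from its first row."""
    return np.array([np.roll(first_row, i) for i in range(len(first_row))])

def perm_block():
    """Random p x p circulant permutation matrix."""
    e = np.zeros(p, dtype=int)
    e[rng.integers(p)] = 1
    return circ(e)

def gf2_inv(A):
    """Inverse over GF(2) by Gaussian elimination; raises if A is singular."""
    A = A.copy() % 2
    m = A.shape[0]
    M = np.concatenate([A, np.eye(m, dtype=int)], axis=1)
    for col in range(m):
        pivots = np.nonzero(M[col:, col])[0]
        if pivots.size == 0:
            raise ValueError("singular matrix")
        piv = col + pivots[0]
        M[[col, piv]] = M[[piv, col]]
        for row in range(m):
            if row != col and M[row, col]:
                M[row] ^= M[col]
    return M[:, m:]

# Secret H as in Eq. (26): circulant permutation blocks, block lower triangular right part.
H = np.block([[perm_block() if j - (n0 - r0) <= i else np.zeros((p, p), dtype=int)
               for j in range(n0)] for i in range(r0)])

# Dense (invertible) block-circulant transformation T, as in Eq. (27).
while True:
    T = np.block([[circ(rng.integers(0, 2, p)) for _ in range(r0)] for _ in range(r0)])
    try:
        gf2_inv(T)
        break
    except ValueError:
        continue

Hp = (T @ H) % 2               # public matrix H' = T H

T_d = Hp[:, n - r:]            # by Eq. (31), the rightmost r columns of H' equal T_d
H_d = (gf2_inv(T_d) @ Hp) % 2  # H_d = T_d^{-1} H' = [H_r^{-1} H_l | I], Eq. (30)

assert np.array_equal(H_d[:, n - r:], np.eye(r, dtype=int))
print("density of H:  ", round(H.mean(), 3))
print("density of H': ", round(Hp.mean(), 3))
print("density of H_d:", round(H_d.mean(), 3))   # noticeably sparser than H'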

The analysis can be extended to an arbitrary diagonal: the one containing the element V_{i,0}, i ∈ [1; r_0 − 1], is formed by blocks with column (row) weight at most 2^{i−1}. It follows that, when r_0 is small, Z^{-1} is sparse and, consequently, H_d is sparse as well; so it is confirmed that H_d can be used by Eve for efficient decoding.

Furthermore, the attack can continue and aim at obtaining another matrix, H_b, that has the same density as H and, therefore, can produce a total break of the cryptosystem. This new matrix corresponds to another choice of Z, say Z_b, that, for the considered example, has the following form:

        [ P_{0,3}    0        0     ]
Z_b =   [   0     P_{1,4}     0     ]     (34)
        [   0        0     P_{2,5}  ]

For this choice of Z, H_b has the same density as H and each of its rows is a permuted version of the corresponding row of H. H_b is in row reduced echelon form. An attack aiming at finding H_b can be conceived by analyzing the structure of H_d in the form (30), with Z^{-1} expressed by Eqs. (32) and (33). It can be noticed that the first row of H_d equals that of H multiplied by P_{0,3}^T, so it corresponds to the first row of H_b. The second row of H_d equals that of H multiplied by P_{1,4}^T, plus a shifted version of the first row of H_b. The third row of H_d equals that of H multiplied by P_{2,5}^T, plus two shifted versions of the first row of H_b and a shifted version of the second row of H_b. Therefore, the attack can exploit a recursive procedure. When sparse matrices are added, it is highly probable that their symbols 1 do not overlap. For this reason, in a sum of rows of H_b, the contributions of shifted versions of known rows can be isolated through correlation operations and, therefore, eliminated. This makes it possible to deduce each row of H_b from the previous ones and, in this way, to derive the entire H_b. Obviously, the hypothesis of non-overlapping elements is verified with high probability when the blocks of H are sparse and r_0 is not too high. For example, we have found that matrices with r_0 = 6 and p = 40 (i.e., block density 0.025) are highly exposed to total break. This is not a trivial case; for example, the codes included in the IEEE 802.16e standard [20] have p ∈ [24; 96] and r_0 = 6 when the code rate is 3/4.

The attack described in this subsection has been simulated by means of a simple Octave/Matlab program [26]. Through this program, we have verified that the attack can be successful on matrices with the aforementioned choice of the parameters. Fig. 5 shows the scatter plots (dots denote the positions of the symbols 1) of the original matrix H and of the matrix H_b found through the attack, in the case of r_0 = 6, n_0 = 24 (i.e., rate R = 3/4) and p = 40. It is evident at a glance, and can be numerically verified, that H and H_b have exactly the same density. Even when the algorithm does not achieve a global deduction, it may produce an alternative form of the private key H that is sparse enough to allow the eavesdropper to perform efficient (though suboptimal) LDPC decoding. An example of this case is shown in Fig. 6, where the matrix H and the matrix H_b we have found are depicted, in the case of r_0 = 8, n_0 = 24 (i.e., rate R = 2/3) and p = 40. Matrix H_b is now denser than H, but this could be sufficient to achieve rather good LDPC decoding. The risk of this attack is further emphasized by the fact that most useful codes have H matrices containing a number (often high) of null blocks, in order to allow efficient belief propagation decoding. In that case, the density of Z^{-1} and, hence, of H_d is reduced, and the attack is simpler.

G. Applicability of codes based on Difference Families

The attack described in the previous subsection is addressed to LDPC codes based on circulant permutation matrices, and confirms that this kind of code cannot be used in the McEliece cryptosystem. On the other hand, the same attack is not applicable to LDPC codes based on difference families. The possibility of using QC-LDPC codes having the structure discussed in Subsection III-B is conditioned on their ability to counter the classic and ad hoc attacks conceivable for this kind of code. Actually, we have designed some LDPC codes that are able to counter all the attacks discussed up to now, and even others, not reported here for the sake of brevity. An example is shown in Table I. The code has length n = 16128 and has been designed by means of the RDF approach, assuming n_0 = 4, p = 4032 and d_v = 13.

The table reports the features of the original McEliece cryptosystem (with n = 1024, k = 524 and t = 50), of its Niederreiter version (with the same choice of the parameters) and of the RSA system (with a 1024-bit modulus and public exponent equal to 17) [24]. Finally, the last column shows the parameters of the system based on the QC-LDPC code. In this case, the public key is formed by a quasi-cyclic generator matrix in the systematic form (13); therefore, it is completely described by its (k + 1)-th column, that is, by k bits. The values of encryption and decryption complexity have been estimated by means of Eqs. (18) and (21). In the latter, q = 6 and I_ave = 5 (as resulting from numerical simulations) have been considered.

According to the table, the proposed solution overcomes the main limits of the original McEliece cryptosystem: the long key size and the low transmission rate. It requires public keys that are more than 44 times shorter than those of the original McEliece cryptosystem, and more than 21 times shorter than those of its Niederreiter version. Furthermore, both the McEliece and the Niederreiter cryptosystems have a rate that does not reach 0.6. The proposed system, instead, has rate 0.75, thus allowing higher spectral efficiency. Its encryption complexity is larger than that of both the McEliece and the Niederreiter cryptosystems, but it is smaller than that of RSA. On the other hand, its decryption complexity is comparable with that of the McEliece cryptosystem and smaller than that of the other solutions (significantly smaller than that of the RSA system). So, everything considered, the system in Table I seems to be a good solution.

Unfortunately, however, there is another kind of attack that, for the class of codes considered, and for LDPC codes in general, cannot be ignored: the attack to the dual code, which will be described in the next subsection. When this further attack is taken into account, the QC-LDPC code needs to be redesigned; we will show that the features of the QC-LDPC code in Table I do not permit this further attack to be countered efficiently, to the point that a revision of the cryptosystem becomes desirable in order to recover (in part) the benefits of Table I. This new version of the cryptosystem will be presented in Section V.

H. Attack to the Dual Code

On the basis of our evaluations, the most dangerous vulnerability of every instance of the McEliece cryptosystem based on LDPC codes arises from the fact that the dual of the secret code contains very low weight codewords, and an opponent can directly search for them, thus recovering H, which is suitable for belief propagation decoding.

Also cryptosystems not employing LDPC codes can be subject to attacks based on low weight codewords in the dual of the public code. In fact, when a sufficiently large set of redundant check sums of small enough weight can be found, an attacker can perform bit flipping decoding based on such parity-check equations [47].

The dual of the secret code has length n and dimension r, and can be generated by H; so, it has a number A_{d_c} ≥ r of codewords of weight d_c = d_v/(1 − R). From a cryptographic point of view, A_{d_c} should be known in order to evaluate the work factor of the attack precisely, but this is not, in general, a simple task. However, it can be observed that, in our case, d_c ≪ n and that sparse vectors most likely sum into vectors of higher weight. Therefore, we will assume A_{d_c} = r in the following.

Stern's algorithm can be used to search for minimum weight codewords in the dual of the secret code. In this case, the probability of finding, in one iteration, a codeword with weight w is A_w·P_w, where A_w represents the number of weight-w codewords and P_w is expressed by Eq. (24), with the roles of k and r interchanged (dual code). The average number of iterations needed to find a weight-w codeword is hence c = (A_w·P_w)^{-1}, and each iteration of the algorithm requires a number of binary operations, b, expressed by Eq. (25) (again with k and r interchanged); so, the total work factor is W = c·b. With the choice of the system parameters considered in Table I (i.e., n = 16128, A_w = r = 4032, w = d_c = n_0·d_v = 52), the minimum work factor, found for (g, l) = (3, 43), turns out to be so small that such a system would be highly exposed to a total break.

Furthermore, the work factor of this attack decreases when the searched weight and the code rate decrease (from the attacker's viewpoint, it is desirable for the dual code to have a rate as low as possible, i.e., for the private and the public codes to have rates as high as possible). In order to increase the work factor, parity-check matrices with greater row weight and lower code rates should be adopted for the secret code. On the other hand, however, such matrices must remain sparse enough to allow efficient belief propagation decoding. This means that it is possible to obtain high work factors only by employing very large codes. For example, by choosing n = 84000, r = 28000, n_0 = 3 and d_v = 41 (d_c = 123), the resulting work factor W (minimal for g = 3 and l = 54) is large enough to ensure satisfactory system robustness. The size of the key for this other solution is still one order of magnitude shorter than for the original McEliece cryptosystem but, in spite of this benefit, it is evident that most of the other advantages are diminished or even lost.
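For reference, the quantities entering this comparison follow directly from the definitions above; the snippet below (illustrative Python, not from the paper) collects them for the Table I code and for the enlarged n = 84000 code. The Stern work factor itself depends on Eqs. (24)-(25), which are not reproduced here.

# Quantities entering the dual-code attack, for the two parameter sets discussed in the text.
def dual_code_attack_inputs(n, n0, d_v):
    r = n // n0              # dual code dimension (H is one block-row of p x p circulants, r = p)
    R = 1 - r / n            # rate of the private (and public) code
    d_c = n0 * d_v           # weight of the known dual codewords, i.e., d_v / (1 - R)
    return {"n": n, "r": r, "R": R, "d_c": d_c, "A_dc": r}

print(dual_code_attack_inputs(16128, 4, 13))   # Table I code: d_c = 52, r = 4032
print(dual_code_attack_inputs(84000, 3, 41))   # enlarged code: d_c = 123, r = 28000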

The transmission rate, in particular, is now 2/3 instead of 3/4; the complexity is significantly augmented, while the number of information bits handled at each encryption operation becomes very high, and this could also cause latency problems. For these reasons, the attack to the dual code makes the cryptosystem implementation described in Subsection II-B much less appealing. Justified by this conclusion, in the next section we propose a variant of the McEliece cryptosystem that is robust against the attack to the dual code, while exhibiting a higher rate and a much smaller complexity than the n = 84000 solution reported above.

V. MCELIECE CRYPTOSYSTEM WITH QC-LDPC CODES: NEW IMPLEMENTATION

As shown in Subsection IV-H, every LDPC-based instance of the McEliece cryptosystem that exposes the secret code is not secure, since an attacker can successfully search for low weight codewords in its dual. It follows immediately that the cryptosystem proposed in [22] is not secure, since the user publishes the matrix H' = T·H, which is a valid parity-check matrix for the secret LDPC code. The same holds for the system described in Subsection II-B, since an eavesdropper can derive a valid parity-check matrix for the secret code from the knowledge of G. Also the original McEliece cryptosystem, in which the user publishes the generator matrix G' = S·G·P, is not suitable for adopting sparse matrices. In fact, if we set H' = H·P, we obtain:

H'·G'^T = H·P·(S·G·P)^T
        = H·P·P^T·G^T·S^T     (35)
        = H·G^T·S^T = 0

where the orthogonality of permutation matrices has been used. From Eq. (35) it follows that H' = H·P is a valid parity-check matrix for the public code. On the other hand, H' is a permuted version of the secret LDPC matrix H, so the public code still has low weight codewords in its dual, and an attacker can directly search for them. We instead propose a new version of the system that hides the structure of the secret code in a public code whose dual does not contain low weight codewords.

A. System configuration

In the new version of the McEliece cryptosystem, Bob, in order to receive encrypted messages, randomly chooses a code in a family of (n_0, d_v, p)-RDF based QC-LDPC codes, by selecting its

parity-check matrix H, and produces a generator matrix G in reduced echelon form. Throughout this section, we will assume n_0 = 4 (i.e., code rate R = 3/4), d_v = 13 and p = 4032, that is, the same choices that produced Table I, but different values for the code parameters are certainly possible. Matrix H will be part of Bob's private key. The remaining part of the private key is formed by two other matrices: a sparse k × k non-singular scrambling matrix S and a sparse n × n non-singular transformation matrix Q. S and Q are regular matrices, formed by k_0 × k_0 and n_0 × n_0 blocks of p × p circulants, respectively, with row/column weights s and m (further details on their structure will be given afterwards). Then, Bob computes the public key as follows:

G' = S^{-1}·G·Q^{-1}     (36)

It should be noted that G' preserves the quasi-cyclic structure of G, due to the block circulant form of S and Q. Differently from the original McEliece cryptosystem, we use the inverses of the transformation matrices (S and Q) to derive the public key. The reason for this choice lies in the fact that the new cryptosystem adopts sparse S and Q matrices in order to reduce the number of operations required by the decryption stage. Therefore, their inverses (which are dense, in general) are used to obtain the public generator matrix. G' is made available in a public directory.

Alice, who wants to send an encrypted message to Bob, fetches his public key from the public directory and divides her message into k-bit blocks. If u is one of these blocks, Alice obtains the encrypted version according to Eq. (2), repeated here for the sake of clarity:

x = u·G' + e = c + e

In this expression, e is a vector of intentional errors, with length n and weight t. The choice of t is discussed next. When Bob receives the encrypted message x, he first computes:

x' = x·Q = u·S^{-1}·G + e·Q     (37)

Vector x' is a codeword of the LDPC code chosen by Bob (corresponding to the information vector u' = u·S^{-1}), affected by the error vector e·Q, whose maximum weight is t' = t·m. Using belief propagation, Bob should be able to correct all the errors, thus recovering u', and then u

through a post-multiplication by S.

We have already discussed the difficulty of determining the decoding radius of LDPC codes under belief propagation, and the need to resort to simulation to obtain an estimate of the error correction capability. For the specified example, with reference to Fig. 2, we have verified that the error rate curve begins exhibiting a waterfall behavior in the neighborhood of t' = 240 (for t' = 210 the BER is already low, and it vanishes rapidly for smaller t'). So, cautiously, we can assume t' = 190 as an estimate of the error correction capability of the considered code and fix the values of t and m accordingly.

B. Rationale of the new system

The new system leaves the parity-check matrix of the private code unchanged, but modifies the structure of the public code, in such a way as to prevent its dual from containing codewords of low weight. This statement can be easily proved as follows. Let us consider the parity-check matrix of the secret code, H, as the generator matrix of the dual code; as H is sparse, the dual code surely includes low weight (d_c) codewords. Let us denote one of these codewords by c_b, and let u_b be the information sequence generating it; the following relationship holds:

u_b·H = c_b     (38)

According to the description given in Subsection V-A, the new system replaces the permutation matrix P with a sparse, regular matrix Q composed of n_0 × n_0 circulant blocks of size p. We will come back to the density of this matrix later. In this case, matrix G' is given by Eq. (36). Starting from the parity-check matrix of the secret code, H, the following relationship holds:

H·Q^T·G'^T = H·Q^T·(S^{-1}·G·Q^{-1})^T
           = H·Q^T·(Q^{-1})^T·G^T·(S^{-1})^T     (39)
           = H·G^T·(S^{-1})^T = 0

which demonstrates that H' = H·Q^T is a valid parity-check matrix for the public code. H' has increased density, so, in general, a parity-check matrix with features similar to those of the secret matrix does not exist for the public code. Moreover, differently from the cryptosystem described in Subsection II-B, matrix H' is no longer a valid parity-check matrix for the secret code, since

H'·G^T = H·Q^T·G^T ≠ 0. Therefore, an eavesdropper, even knowing H', would not be able to decode the secret code easily, not even in an inefficient way. The role of matrix S is now different: instead of propagating residual errors, it is used to hide the secret structure of G.

In order to investigate the codeword weights in the dual of the public code, let us consider again the codeword c_b and the information sequence u_b. In this case, we have:

u_b·H' = u_b·H·Q^T = c_b·Q^T

However, c_b·Q^T neither coincides with c_b nor is a permuted version of it. On the contrary, because of the sparse character of matrix Q and the low weight of c_b, the weight of c_b·Q^T will be significantly larger than the weight of c_b (with high probability, m times larger). The vector c'_b = c_b·Q^T is a codeword of the dual of the public code, as can be easily verified by observing that

G'·c'_b^T = G'·Q·c_b^T = S^{-1}·G·Q^{-1}·Q·c_b^T = S^{-1}·G·c_b^T = 0

In essence, to conclude the demonstration, we can say that any low weight codeword c_b is transformed, through the action of matrix Q, into a medium/large weight codeword c'_b; if the value of m is properly chosen, the risk for the system of being subject to the attack to the dual code disappears. As a counterpart, we have already seen in Eq. (37) that the authorized decoder must be able to correct a number of errors that may be (in the worst case, from this point of view) m times larger than the weight of the intentional error pattern. This means that the choice of m and t must be a compromise between the error correction capability of the considered QC-LDPC code and the need to counter the previous attacks (different from that to the dual code) and even new ones, described in the following subsection.

C. Other attacks to the new system and choice of m and t

Having excluded the attack to the dual code, the most dangerous threat to the proposed cryptosystem is represented by information set decoding attacks. With the values of the parameters set before, t = 27 ensures a work factor W (see Fig. 4) high enough to make this attack ineffective. Since in Subsection V-A we have said that t' = 190 is a feasible choice, we can assume m = 7 for the row/column weight of matrix Q. Therefore, we propose to adopt Q matrices in block diagonal form, in which the circulant blocks along the main diagonal have row/column weight m = 7:

      [ Q_0    0    ...     0       ]
Q =   [  0    Q_1   ...     0       ]     (40)
      [ ...   ...   ...    ...      ]
      [  0     0    ...  Q_{n_0-1}  ]
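As an illustration of the weight amplification produced by a Q of the form (40), the following Python sketch (illustrative only, with a toy block size) builds block-diagonal circulant blocks of row weight m and measures the weight of c_b·Q^T for a random sparse vector standing in for a weight-d_c dual codeword.

# Illustrative sketch of the weight amplification produced by the sparse
# block-diagonal Q of Eq. (40); the block size p used here is a toy value.
import numpy as np

rng = np.random.default_rng(3)
n0, p, m = 4, 1024, 7        # the paper uses p = 4032; p is reduced here for speed
d_c = 52                     # weight of a low-weight dual codeword of the secret code
n = n0 * p

def circulant_with_row_weight(p, w):
    """p x p binary circulant whose rows have weight w."""
    first_row = np.zeros(p, dtype=int)
    first_row[rng.choice(p, size=w, replace=False)] = 1
    return np.array([np.roll(first_row, i) for i in range(p)])

# A sparse vector standing in for a weight-d_c codeword of the dual of the secret code.
c_b = np.zeros(n, dtype=int)
c_b[rng.choice(n, size=d_c, replace=False)] = 1

# c_b * Q^T computed block by block, since Q = diag(Q_0, ..., Q_{n0-1}).
blocks_out = []
for i in range(n0):
    Q_i = circulant_with_row_weight(p, m)
    blocks_out.append((c_b[i*p:(i+1)*p] @ Q_i.T) % 2)
c_b_pub = np.concatenate(blocks_out)

print("weight of c_b:      ", int(c_b.sum()))      # 52
print("weight of c_b Q^T:  ", int(c_b_pub.sum()))  # near m*d_c = 364; a few cancellations
                                                   # occur, and fewer for the real p = 4032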

Due to its low density, matrix Q is among the weakest components of the new system. In fact, an attack could be attempted on each of the first n_0 − 1 blocks along the main diagonal, denoted by Q_i, i ∈ [0; n_0 − 2]. If Q_i were known, the attacker could multiply the blocks in the i-th block column of G' by Q_i, thus obtaining the i-th block column of S^{-1} (due to the reduced echelon form of G). However, such an approach would require at least C(p, m)·p·m binary operations, where p·m binary operations are counted for each multiplication by a candidate Q_i. For our choice of the system parameters, the required number of binary operations would be approximately 2^86, which is high enough to discourage the attack.

We suggest adopting the same row/column weight also for the circulant blocks that form matrix S, which, differently from Q, does not have block diagonal form in this proposal. Hence, s = k_0·m = 21. With m = 7, the product m·d_c gives a maximum weight w = 364 for the codewords the eavesdropper should search for. Actually, even assuming a lower minimum weight for the dual of the public code, the attack described in Subsection IV-H would remain unfeasible. For example, if we suppose the existence of r codewords of weight 200, searching for one of them would require a number of binary operations (minimal for g = 3 and l = 41) large enough to ensure satisfactory system robustness. So we can conclude that the new cryptosystem, with the specified values of the parameters, is immune to this kind of attack.

D. Features and performance of the new scheme

The public key specified in Eq. (36) is no longer systematic. This translates into a disadvantage for the length of the key: contrary to the first implementation, where one column was sufficient, n_0 columns are now required. More precisely, the proposed cryptosystem uses, as the public key, a generator matrix G' formed by k_0 × n_0 circulant blocks of size p. Therefore, it can be completely described by the set of columns at positions 0, p, 2p, ..., (n_0 − 1)·p, that is, by a total of n_0·k bits.
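Both numerical claims above are easy to re-derive. The following short computation (illustrative Python, not from the paper) evaluates log2(C(p, m)·p·m) for p = 4032, m = 7, and the public key length n_0·k bits for the proposed parameters.

# Re-deriving the figures quoted above for p = 4032, m = 7, n0 = 4, k0 = 3.
from math import comb, log2

p, m, n0 = 4032, 7, 4
k0 = n0 - 1
k = k0 * p                              # k = 12096

brute_force_Qi = comb(p, m) * p * m     # candidates for Q_i times the cost of one trial
print("log2 of the work factor for guessing Q_i:", round(log2(brute_force_Qi), 1))  # about 86

key_bits = n0 * k                       # one column per block column of G'
print("public key size:", key_bits, "bits =", key_bits // 8, "bytes")               # 48384 bits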

The encryption complexity has been estimated in Subsection III-C; on the other hand, the expression of the decryption complexity must be updated in order to take into account the presence of matrix Q and the sparse character of matrix S. For these reasons, Eq. (21) must be modified as follows:

C_dec = n·m + I_ave·n·[q·(8·d_v + 12·R − 11) + d_v] + k·s     (41)

where n·m and k·s binary operations have been considered for the multiplications by Q and S, respectively. Numerical simulations have shown that, for the considered code parameters, assuming q = 6 and t' = 190, I_ave ≈ 5, and this value is further reduced for smaller t'.

Based on the modified scheme, we have designed a new instance of the McEliece cryptosystem based on QC-LDPC codes; its features are summarized in Table II. The system in Table I seemed even better but, actually, it was unable to counter the attack to the dual code. On the contrary, the modified scheme is able to resist such an attack, with the only penalty of an increased key size (by a factor n_0), which, however, still remains more than one order of magnitude smaller than that of the original cryptosystem. The other features of the system are almost unchanged with respect to Table I. The only exception is represented by the decryption complexity, which is even reduced, thanks to the adoption of sparse scrambling matrices, made possible by the new system. Therefore, the proposed instance of the McEliece cryptosystem can actually adopt LDPC codes in order to overcome the main drawbacks of the original version, without exposure to known security threats. Its features make the new system a valuable trade-off between the original McEliece cryptosystem (and its Niederreiter version) and other widespread solutions, like RSA.

VI. CONCLUSIONS

At the end of this paper, we should try to answer the following important question: can LDPC codes, and QC-LDPC codes in particular, be applied in the McEliece cryptosystem? Though in cryptography it is very difficult to draw definitive conclusions, several elements have emerged from our study.

First of all, we have verified that the adoption of randomly generated codes does not permit the key length to be reduced, since dense T matrices are required to combat the density reduction attack, as foreseen in [22]. By using QC-LDPC codes, having a dense matrix T is no longer a problem, since the key length increases as O(n) instead of O(n²), as in the original McEliece cryptosystem or in its version based on random LDPC codes. However, QC-LDPC codes based on permutation matrices cannot be used in the system either, since they are highly exposed to the threat of a total break attack. This conclusion is also important because the design based on this kind of matrices is, at present, one of the most widely adopted solutions.

We have proposed a class of QC-LDPC codes, designed through a variant of the difference families approach, that instead gives good chances of overcoming the drawbacks of the original system, while ensuring a high level of security. We have shown that it is rather easy, for these codes, to counter several classic and new attacks by means of a suitable choice of the parameters. At present, the killer attack for these codes seems to be the attack to the dual code. The need to counter this attack penalizes the features of previous versions of the system based on LDPC codes: very long codewords are required and the transmission rate cannot be increased as much as expected. However, we have shown that such an attack can be efficiently countered by a modified version of the cryptosystem. By slightly sacrificing the key length, which remains one order of magnitude shorter than in the original cryptosystem, this new version restores rather high transmission rates (3/4 in our examples, but further increases may be possible) while preserving limited complexity. Among the residual drawbacks, we note that the required code lengths remain rather long, and this may imply non-negligible latencies, whose mitigation, however, should be possible by exploiting the high degree of parallelism allowed by LDPC coding/decoding architectures.

APPENDIX

In this appendix we prove the lower bound on the cardinality of a family of RDFs presented in the paper. Let us consider the parity-check matrix H in the form (12) and let D be a multiset of n_0 base-blocks:

D = {D_0, D_1, ..., D_{n_0−1}}     (42)

where each base-block is a subset of Z_p and is associated with a circulant block in H; D_i ↔ H_i, in the sense that D_i contains the positions of the non-null entries in the first row of H_i. In other terms, the base-block D_i contains the exponents of the variable x in the polynomial associated with H_i, as provided by Eq. (15). Given a and b taking values over Z_p, let δ^p_{ab} = (a − b) mod p denote the difference modulo p between a and b; it can be easily shown [26] that, to avoid the presence of 4-length cycles in the Tanner graph of the code, the base-blocks (42) must satisfy the following property:

δ^p_{ab} ≠ δ^p_{cd},  ∀ a, b ∈ D_i, ∀ c, d ∈ D_j, ∀ i, j ∈ [0, n_0 − 1], with a ≠ b, c ≠ d, (a, b) ≠ (c, d)     (43)

In other terms, there must not be a difference (modulo p) that coincides with another difference (modulo p), either inside the same base-block or in another base-block.

Now let RDF(n_0, d_v, p) denote a random difference family of n_0 base-blocks, each composed of d_v elements over Z_p. The set of base-blocks is constructed by adding one element at a time, and verifying that it does not introduce repeated differences modulo p. Let us focus, at first, on a single base-block D_i, and let P_{D_i}(d_v, p) denote the probability that, when D_i is formed by d_v randomly chosen distinct elements of Z_p, it satisfies condition (43) (i.e., it does not contain repeated differences modulo p). Let us suppose that a new element a ∈ Z_p is inserted in the block D_i, and let us consider its differences with the existing elements. It can be easily proved that, in order to satisfy condition (43), we must ensure that:

i)   δ^p_{ab} ≠ δ^p_{ba},  ∀ b ∈ D_i
ii)  δ^p_{ab} ≠ δ^p_{ca},  ∀ b, c ∈ D_i
iii) δ^p_{bc} ≠ δ^p_{ca},  ∀ b, c ∈ D_i     (44)
iv)  δ^p_{ab} ≠ δ^p_{cd},  ∀ b, c, d ∈ D_i

where a, b, c, d represent distinct elements of Z_p. Let us suppose that the block D_i already contains j elements, j ∈ [0, d_v − 1], and that a represents the (j + 1)-th element. Due to the hypothesis of distinct elements, a can assume p − j values. Some of these values could be forbidden, because they do not satisfy conditions (44). We evaluate the number of forbidden values as N_F(j) = N_F^i(j) + N_F^ii(j) + N_F^iii(j) + N_F^iv(j), where N_F^x(j) represents the number of forbidden values due to condition x. However, the same value of a might violate more than one of the conditions in (44), so N_F(j) is an upper bound on the actual number of forbidden values.
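As an aside, condition (43) is straightforward to test directly on a candidate set of base-blocks; the following Python function is an illustrative check (not part of the paper), and the parameters in the example calls are arbitrary toy values. It is exactly this test that the incremental construction described above performs element by element.

# Illustrative check of condition (43): the multiset of all differences (a - b) mod p,
# taken over ordered pairs of distinct elements within each base-block, must contain
# no repeats, neither inside a block nor across blocks.
from itertools import permutations

def is_rdf(base_blocks, p):
    seen = set()
    for D_i in base_blocks:
        for a, b in permutations(D_i, 2):   # ordered pairs of distinct elements
            diff = (a - b) % p
            if diff in seen:
                return False                # repeated difference -> 4-length cycle
            seen.add(diff)
    return True

# Toy examples (p = 13 is illustrative, not a parameter from the paper):
print(is_rdf([{1, 3, 9}, {2, 5, 6}], 13))   # True: all differences distinct
print(is_rdf([{0, 1, 2}], 13))              # False: the difference 1 appears twice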

Consequently, a lower bound on the probability that the introduction of a does not produce repeated differences in D_i can be calculated as follows:

P^{j+1}_{D_i}(j, p) ≥ [p − j − N_F(j)] / (p − j)     (45)

In order to evaluate P^{j+1}_{D_i}(j, p), we must enumerate the forbidden values due to each condition in (44). This can be done as follows.

Condition i): Without loss of generality, we can suppose a > b. In this case, condition i) is not verified for:

a − b = p − (a − b),  i.e.,  a = b + p/2     (46)

Eq. (46) admits a solution only for even p and, in this case, there are j possible values for b; therefore, N_F^i(j) = j·(1 − p mod 2).

Condition ii): This condition is not verified for:

a − b ≡ c − a,  i.e.,  a ≡ (b + c)/2     (47)

where ≡ denotes congruence modulo p. In order to enumerate the forbidden values for a, we can observe that b + c, over the field of integers, takes values in [1; 2p − 3], so a can take the value (b + c)/2 or (b + c ± p)/2 (only one of the two values ±p gives a positive result, depending on b and c). Obviously, (b + c)/2 can be an integer only when b + c is even, while it cannot be an integer when b + c is odd. Similarly, (b + c ± p)/2 can be an integer only for b + c ± p even, and not for b + c ± p odd. Considering that there are, at most, j·(j − 1)/2 = C(j, 2) different values of b + c, Table III reports the maximum number of integer results for both (b + c)/2 and (b + c ± p)/2. Considering that odd and even values of b + c are equally likely, we have N_F^ii(j) = j·(j − 1)/2 forbidden values for a, for both odd and even p.

Condition iii): This condition is not verified for:

b − c ≡ c − a,  i.e.,  a ≡ 2c − b     (48)

Given b and c, 2c − b, over the field of integers, takes values in [−p + 1; 2p − 2]; therefore the forbidden value for a is unique and is expressed as follows:

a = 2c − b,       if 0 ≤ 2c − b < p
a = 2c − b − p,   if p ≤ 2c − b ≤ 2p − 2     (49)
a = 2c − b + p,   if −p + 1 ≤ 2c − b < 0

2c − b can take, at most, j·(j − 1) distinct values (in Z), so N_F^iii(j) = j·(j − 1).

Condition iv): This condition is not verified for:

a − b ≡ c − d,  i.e.,  a ≡ b + c − d     (50)

Given b, c and d, b + c − d, over the field of integers, takes values in [−p + 2; 2p − 3]; therefore the forbidden value for a is unique and is expressed as follows:

a = b + c − d,       if 0 ≤ b + c − d < p
a = b + c − d − p,   if p ≤ b + c − d ≤ 2p − 3     (51)
a = b + c − d + p,   if −p + 2 ≤ b + c − d < 0

b + c − d can take, at most, j·(j − 1)·(j − 2)/2 distinct values (in Z), so N_F^iv(j) = j·(j − 1)·(j − 2)/2.

Based on these arguments, with simple algebra, P^{j+1}_{D_i}(j, p), expressed by Eq. (45), can be rewritten as follows:

P^{j+1}_{D_i}(j, p) ≥ { p − j·[2 − p mod 2 + (j² − 1)/2] } / (p − j)     (52)

With the help of Eq. (52), we can express P_{D_i}(d_v, p) in the following form:

P_{D_i}(d_v, p) = ∏_{j=1}^{d_v−1} P^{j+1}_{D_i}(j, p)     (53)

This expression must be modified in order to evaluate the probability that a set of n_0 blocks of d_v elements, chosen at random, forms an RDF(n_0, d_v, p). For this purpose, we must consider the number of differences already present in the set of blocks when the (l + 1)-th block is generated, given by:

d_l = l·d_v·(d_v − 1)     (54)

Let P^{j+1}_{D_i}(j, p, l + 1) denote the probability that, when the (j + 1)-th element is inserted in the block D_i, i = l + 1, already containing j elements, the new element does not introduce 4-length cycles. P^{j+1}_{D_i}(j, p, l + 1) differs from P^{j+1}_{D_i}(j, p) in that it accounts for the position of the current block D_i, l + 1, which is generated after l other blocks. Therefore, in order not to introduce 4-length cycles, in addition to conditions (44), the new element a must avoid creating a new difference δ^p_{ab}, b ∈ D_{l+1}, that coincides with a difference in another block. So:

P^{j+1}_{D_i}(j, p, l + 1) ≥ { p − j·[2 − p mod 2 + (j² − 1)/2 + l·d_v·(d_v − 1)] } / (p − j)     (55)

and the probability P_{D_i}(d_v, p, l + 1) that the (l + 1)-th block, formed by d_v randomly chosen distinct elements of Z_p, does not produce 4-length cycles can be expressed as follows:

P_{D_i}(d_v, p, l + 1) = ∏_{j=1}^{d_v−1} P^{j+1}_{D_i}(j, p, l + 1)     (56)

We can now write the probability that a set of n_0 base-blocks of d_v randomly chosen elements of Z_p does not introduce 4-length cycles, i.e., forms an RDF(n_0, d_v, p), as follows:

P_RDF(n_0, d_v, p) = ∏_{l=0}^{n_0−1} P_{D_i}(d_v, p, l + 1) ≥ ∏_{l=0}^{n_0−1} ∏_{j=1}^{d_v−1} { p − j·[2 − p mod 2 + (j² − 1)/2 + l·d_v·(d_v − 1)] } / (p − j)     (57)

Finally, to determine a lower bound on the average number of random difference families, N(n_0, d_v, p), for a given choice of the parameters n_0, d_v, p, it is sufficient to multiply expression (57), P_RDF(n_0, d_v, p), by the total number of possible sets of n_0 base-blocks of d_v randomly chosen elements of Z_p. However, we must take into account that a set of randomly chosen base-blocks could be a shifted version of another set of base-blocks. Since each set of base-blocks can correspond, at most, to p shifted versions of itself, we must divide the total number of different sets of base-blocks by p, thus obtaining:

N(n_0, d_v, p) ≥ (1/p) · C(p, d_v)^{n_0} · P_RDF(n_0, d_v, p)     (58)

from which expression (16) results.
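For completeness, the bound (57)-(58) can be evaluated directly in the form given above; the illustrative Python function below returns 0 when one of the factors becomes non-positive, i.e., when the bound is vacuous (see the remark on its looseness below). The first example call uses arbitrary toy parameters; the second uses the parameters of the Table I code.

# Direct evaluation of the lower bound (57)-(58); a result of 0 means the bound is vacuous.
from math import comb

def rdf_lower_bound(n0, d_v, p):
    P_RDF = 1.0
    for l in range(n0):
        for j in range(1, d_v):
            num = p - j * (2 - p % 2 + (j * j - 1) / 2 + l * d_v * (d_v - 1))
            if num <= 0:
                return 0.0                       # bound no longer informative
            P_RDF *= num / (p - j)
    return comb(p, d_v) ** n0 * P_RDF / p        # Eq. (58)

print(rdf_lower_bound(2, 5, 1024))               # toy parameters: a positive bound
print(rdf_lower_bound(4, 13, 4032))              # returns 0: the bound is too loose here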

Being a lower bound, expression (58) sometimes provides too loose results. In these cases, the simplifying assumption of non-overlapping forbidden values, adopted in the proof, should be removed, in such a way as to find a tighter estimate. However, this is not a simple task; alternatively, a minimum number of distinct codes achievable through an RDF with specified parameters can also be determined through a partly heuristic approach, described next.

Let us consider an RDF composed of n_0 blocks, with w elements each. Every set of n_0 blocks consisting of subsets of its blocks maintains the properties of the RDF, and is therefore an RDF in its turn. Let us call this new RDF a sub-RDF, and let d_v be the number of elements in each sub-block. We want to compute the number of sub-RDFs that can be generated for an arbitrary choice of n_0, d_v and w. By considering the first block of the RDF only, the number of sub-RDFs is:

S_RDF(w, d_v, n_0 = 1) = C(w, d_v)     (59)

In principle, one could be tempted to also count the p possible cyclic shifts of each subset (numerically, the triplet [x, y, z], for example, is different from the triplet [x + a, y + a, z + a], with 1 ≤ a < p), but all these shifts, although generating formally different families, produce the same circulant matrix, apart from an inessential reordering of its rows [this is the same reason why, in (58), a division by p has been introduced]. By considering a second block of the RDF, the number of choices is again given by the expression above. In addition, however, a shift of this second group of subsets, combined with the previous subsets, produces a different parity-check matrix, not attainable simply through a reordering of the rows. As an example, the RDF composed of two blocks with elements [x, y, z; a, b, c] (three for each subset) produces a parity-check matrix different from that of the RDF with elements [x, y, z; a + 1, b + 1, c + 1]. So, the number of sub-RDFs that can be generated with n_0 = 2 is:

S_RDF(w, d_v, n_0 = 2, p) = p · C(w, d_v)²     (60)

Extending the procedure to an arbitrary n_0, we find:

S_RDF(w, d_v, n_0, p) = p^{n_0 − 1} · C(w, d_v)^{n_0}     (61)

At this point, to generate a family of RDFs with d_v elements in each block, it is sufficient to generate one RDF with a larger number of elements w; this can usually be done in a very short time, even for the largest values of the parameters considered. The RDF so determined is the seed for the generation of a number of RDFs as given by Eq. (61). This number represents, as well, a lower bound on the actual number of distinct RDFs for the same choice of the parameters. We have applied this procedure with p = 4032, n_0 = 4, d_v = 13 and w = 19; the corresponding value of S_RDF(19, 13, 4, 4032) follows from Eq. (61).
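Eq. (61) is immediate to evaluate for the parameters just quoted; the following illustrative Python lines do so and also print the base-2 logarithm of the result, the scale on which such counts are usually compared.

# Evaluation of Eq. (61) for p = 4032, n0 = 4, d_v = 13, w = 19.
from math import comb, log2

def S_RDF(w, d_v, n0, p):
    return p ** (n0 - 1) * comb(w, d_v) ** n0   # Eq. (61)

value = S_RDF(19, 13, 4, 4032)
print(value)
print("about 2 to the power", round(log2(value), 1))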

REFERENCES

[1] W. Diffie and M. Hellman, "New directions in cryptography," IEEE Trans. Inform. Theory, vol. 22, Nov.
[2] R. J. McEliece, "A public-key cryptosystem based on algebraic coding theory," DSN Progress Report.
[3] E. Berlekamp, R. McEliece, and H. van Tilborg, "On the inherent intractability of certain coding problems," IEEE Trans. Inform. Theory, vol. 24, May.
[4] P. W. Shor, "Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer," SIAM J. Comput., vol. 26, no. 5.
[5] K. Kobara and H. Imai, "On the one-wayness against chosen-plaintext attacks of the Loidreau's modified McEliece PKC," IEEE Trans. Inform. Theory, vol. 49, Dec.
[6] V. Sidelnikov and S. Shestakov, "On cryptosystems based on generalized Reed-Solomon codes," Diskretnaya Math., vol. 4.
[7] N. Sendrier, "On the structure of a randomly permuted concatenated code," in Proc. EUROCODE 94, 1994.
[8] H. Niederreiter, "Knapsack-type cryptosystems and algebraic coding theory," Probl. Contr. and Inform. Theory, vol. 15.
[9] Y. X. Li, R. Deng, and X. M. Wang, "On the equivalence of McEliece's and Niederreiter's public-key cryptosystems," IEEE Trans. Inform. Theory, vol. 40, Jan.
[10] N. T. Courtois, M. Finiasz, and N. Sendrier, "How to achieve a McEliece-based digital signature scheme," in Advances in Cryptology - ASIACRYPT 2001, Lecture Notes in Computer Science, vol. 2248.
[11] E. M. Gabidulin, A. V. Paramonov, and O. V. Tretjakov, "Ideals over a non-commutative ring and their application in cryptography," in Advances in Cryptology - EUROCRYPT 91, D. W. Davies, Ed., Lecture Notes in Computer Science, vol. 547.
[12] K. Gibson, "The security of the Gabidulin public key cryptosystem," in Advances in Cryptology - EUROCRYPT 96, U. Maurer, Ed., Lecture Notes in Computer Science, vol. 1070, May.
[13] T. R. N. Rao and K.-H. Nam, "Private-key algebraic cryptosystems," in Advances in Cryptology - CRYPTO 86. New York: Springer-Verlag, 1986.
[14] R. Struik and J. van Tilburg, "The Rao-Nam scheme is insecure against a chosen-plaintext attack," in CRYPTO 87: A Conference on the Theory and Applications of Cryptographic Techniques on Advances in Cryptology. London, UK: Springer-Verlag, 1987.
[15] T. R. N. Rao, "On Struik-Tilburg cryptanalysis of Rao-Nam scheme," in CRYPTO 87: A Conference on the Theory and Applications of Cryptographic Techniques on Advances in Cryptology. London, UK: Springer-Verlag, 1988.
[16] J. Riek, "Observations on the application of error correcting codes to public key encryption," in Proc. IEEE International Carnahan Conference on Security Technology: Crime Countermeasures, Lexington, KY, USA, Oct. 1990.

[17] M. Alabbadi and S. Wicker, "Integrated security and error control for communication networks using the McEliece cryptosystem," in Proc. IEEE International Carnahan Conference on Security Technology: Crime Countermeasures, Oct. 1992.
[18] T. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, Feb.
[19] Digital Video Broadcasting (DVB); Second Generation Framing Structure, Channel Coding and Modulation Systems for Broadcasting, Interactive Services, News Gathering and Other Broadband Satellite Applications, ETSI EN Std., Jun.
[20] IEEE Standard for Local and Metropolitan Area Networks - Part 16: Air Interface for Fixed and Mobile Broadband Wireless Access Systems - Amendment for Physical and Medium Access Control Layers for Combined Fixed and Mobile Operation in Licensed Bands, IEEE Std 802.16e-2005, Dec.
[21] IEEE P802.11 Wireless LANs WWiSE Proposal: High throughput extension to the 802.11 Standard, IEEE Std 802.11n, Aug.
[22] C. Monico, J. Rosenthal, and A. Shokrollahi, "Using low density parity check codes in the McEliece cryptosystem," in Proc. IEEE ISIT 2000, Sorrento, Italy, Jun. 2000.
[23] P. Gaborit, "Shorter keys for code based cryptography," in Proc. Int. Workshop on Coding and Cryptography WCC 2005, Bergen, Norway, Mar. 2005.
[24] A. Canteaut and F. Chabaud, "A new algorithm for finding minimum-weight words in a linear code: application to McEliece's cryptosystem and to narrow-sense BCH codes of length 511," IEEE Trans. Inform. Theory, vol. 44, Jan.
[25] V. D. Goppa, "A new class of linear correcting codes," Probl. Pered. Info., vol. 6, no. 3.
[26] M. Baldi, "Quasi-cyclic low-density parity-check codes and their application to cryptography," Ph.D. dissertation, Università Politecnica delle Marche, Ancona, Italy, Nov.
[27] R. Townsend and E. J. Weldon, "Self-orthogonal quasi-cyclic codes," IEEE Trans. Inform. Theory, vol. 13, Apr.
[28] R. Tanner, D. Sridhara, and T. Fuja, "A class of group-structured LDPC codes," in Proc. Int. Symp. Commun. Theory and Appl., ISCTA 01, Ambleside, UK, Jul.
[29] M. P. C. Fossorier, "Quasi-cyclic low-density parity-check codes from circulant permutation matrices," IEEE Trans. Inform. Theory, vol. 50, Aug.
[30] D. Hocevar, "LDPC code construction with flexible hardware implementation," in Proc. IEEE ICC 2003, vol. 4, Anchorage, Alaska, May 2003.
[31] D. Hocevar, "Efficient encoding for a family of quasi-cyclic LDPC codes," in Proc. IEEE Global Telecommunications Conference (GLOBECOM 03), vol. 7, San Francisco, CA, Dec. 2003.
[32] S. Johnson and S. Weller, "A family of irregular LDPC codes with low encoding complexity," IEEE Commun. Lett., vol. 7, Feb.
[33] M. Baldi and F. Chiaraluce, "New quasi cyclic low density parity check codes based on difference families," in Proc. Int. Symp. Commun. Theory and Appl., ISCTA 05, Ambleside, UK, Jul. 2005.
[34] T. Xia and B. Xia, "Quasi-cyclic codes from extended difference families," in Proc. IEEE Wireless Commun. and Networking Conf., vol. 2, New Orleans, USA, Mar. 2005.

[35] M. Bodrato, "Towards optimal Toom-Cook multiplication for univariate and multivariate polynomials in characteristic 2 and 0," in Proc. WAIFI 07, C. Carlet and B. Sunar, Eds. Madrid, Spain: Springer, Jun. 2007.
[36] M. Bodrato, private communication, Apr.
[37] X.-Y. Hu, E. Eleftheriou, D.-M. Arnold, and A. Dholakia, "Efficient implementations of the sum-product algorithm for decoding LDPC codes," in Proc. IEEE Global Telecommunications Conference GLOBECOM 01, vol. 2, San Antonio, TX, Nov. 2001.
[38] T. Johansson and F. Jonsson, "On the complexity of some cryptographic problems based on the general decoding problem," IEEE Trans. Inform. Theory, vol. 48, Oct.
[39] A. Canteaut and N. Sendrier, "Cryptanalysis of the original McEliece cryptosystem," in ASIACRYPT, 1998.
[40] T. A. Berson, "Failure of the McEliece public-key cryptosystem under message-resend and related-message attack," in Advances in Cryptology - CRYPTO 97, Lecture Notes in Computer Science, vol. 1294, Aug.
[41] H. M. Sun, "Improving the security of the McEliece public-key cryptosystem," in ASIACRYPT, 1998.
[42] P. Lee and E. Brickell, "An observation on the security of McEliece's public-key cryptosystem," in Advances in Cryptology - EUROCRYPT 88, Springer-Verlag, 1988.
[43] H. Niederreiter, "Error-correcting codes and cryptography," Public-Key Cryptography and Computational Number Theory.
[44] J. Stern, "A method for finding codewords of small weight," in Coding Theory and Applications, G. Cohen and J. Wolfmann, Eds., no. 388 in Lecture Notes in Computer Science, Springer-Verlag, 1989.
[45] X.-Y. Hu, M. Fossorier, and E. Eleftheriou, "On the computation of the minimum distance of low-density parity-check codes," in Proc. IEEE ICC 2004, vol. 2, Paris, France, Jun. 2004.
[46] M. Hirotomo, M. Mohri, and M. Morii, "A probabilistic computation method for the weight distribution of low-density parity-check codes," in Proc. IEEE ISIT 2005, Adelaide, Australia, Sep. 2005.
[47] M. P. C. Fossorier, K. Kobara, and H. Imai, "Modeling bit flipping decoding based on nonorthogonal check sums with application to iterative decoding attack of McEliece cryptosystem," IEEE Trans. Inform. Theory, vol. 53, Jan.

LIST OF FIGURES

1 Performance of a QC-LDPC code with n = 8000, k = 6000 and d_v = 13, obtained by H and by H', in absence of quantization.
2 Performance of a QC-LDPC code with n = 16128, k = 12096 and d_v = 13 under 6-bit quantization.
3 Encoder circuit for a QC code based on its generator matrix.
4 Work factor of the attack based on information set decoding.
5 Matrix H (a) and matrix H_b (b) in the case of successful global deduction.
6 Matrix H (a) and matrix H_b (b) in the case of unsuccessful global deduction.

[Figure 1: BER/FER versus t, with curves for decoding by H, decoding by H', and uncoded transmission.]
Fig. 1. Performance of a QC-LDPC code with n = 8000, k = 6000 and d_v = 13, obtained by H and by H', in absence of quantization.

[Figure 2: BER/FER versus t, with an uncoded reference curve.]
Fig. 2. Performance of a QC-LDPC code with n = 16128, k = 12096 and d_v = 13 under 6-bit quantization.

[Figure 3: block diagram with inputs A, B, registers u_0, ..., u_{p-1} and coefficients G_0, ..., G_{p-1}.]
Fig. 3. Encoder circuit for a QC code based on its generator matrix.

[Figure 4: log2(W_j) versus t, with curves for j = 1, ..., 5.]
Fig. 4. Work factor of the attack based on information set decoding.

Fig. 5. Matrix H (a) and matrix H_b (b) in the case of successful global deduction.

Fig. 6. Matrix H (a) and matrix H_b (b) in the case of unsuccessful global deduction.


More information

Asymmetric Encryption

Asymmetric Encryption -3 s s Encryption Comp Sci 3600 Outline -3 s s 1-3 2 3 4 5 s s Outline -3 s s 1-3 2 3 4 5 s s Function Using Bitwise XOR -3 s s Key Properties for -3 s s The most important property of a hash function

More information

STUDY OF PERMUTATION MATRICES BASED LDPC CODE CONSTRUCTION

STUDY OF PERMUTATION MATRICES BASED LDPC CODE CONSTRUCTION EE229B PROJECT REPORT STUDY OF PERMUTATION MATRICES BASED LDPC CODE CONSTRUCTION Zhengya Zhang SID: 16827455 zyzhang@eecs.berkeley.edu 1 MOTIVATION Permutation matrices refer to the square matrices with

More information

Evidence that the Diffie-Hellman Problem is as Hard as Computing Discrete Logs

Evidence that the Diffie-Hellman Problem is as Hard as Computing Discrete Logs Evidence that the Diffie-Hellman Problem is as Hard as Computing Discrete Logs Jonah Brown-Cohen 1 Introduction The Diffie-Hellman protocol was one of the first methods discovered for two people, say Alice

More information

An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes

An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes An Efficient Algorithm for Finding Dominant Trapping Sets of LDPC Codes Mehdi Karimi, Student Member, IEEE and Amir H. Banihashemi, Senior Member, IEEE Abstract arxiv:1108.4478v2 [cs.it] 13 Apr 2012 This

More information

Noisy Diffie-Hellman protocols

Noisy Diffie-Hellman protocols Noisy Diffie-Hellman protocols Carlos Aguilar 1, Philippe Gaborit 1, Patrick Lacharme 1, Julien Schrek 1 and Gilles Zémor 2 1 University of Limoges, France, 2 University of Bordeaux, France. Classical

More information

arxiv: v3 [cs.cr] 15 Jun 2017

arxiv: v3 [cs.cr] 15 Jun 2017 Use of Signed Permutations in Cryptography arxiv:1612.05605v3 [cs.cr] 15 Jun 2017 Iharantsoa Vero RAHARINIRINA ihvero@yahoo.fr Department of Mathematics and computer science, Faculty of Sciences, BP 906

More information

Public Key Cryptography

Public Key Cryptography Public Key Cryptography Introduction Public Key Cryptography Unlike symmetric key, there is no need for Alice and Bob to share a common secret Alice can convey her public key to Bob in a public communication:

More information

Introduction to Modern Cryptography. Benny Chor

Introduction to Modern Cryptography. Benny Chor Introduction to Modern Cryptography Benny Chor RSA Public Key Encryption Factoring Algorithms Lecture 7 Tel-Aviv University Revised March 1st, 2008 Reminder: The Prime Number Theorem Let π(x) denote the

More information

Lemma 1.2. (1) If p is prime, then ϕ(p) = p 1. (2) If p q are two primes, then ϕ(pq) = (p 1)(q 1).

Lemma 1.2. (1) If p is prime, then ϕ(p) = p 1. (2) If p q are two primes, then ϕ(pq) = (p 1)(q 1). 1 Background 1.1 The group of units MAT 3343, APPLIED ALGEBRA, FALL 2003 Handout 3: The RSA Cryptosystem Peter Selinger Let (R, +, ) be a ring. Then R forms an abelian group under addition. R does not

More information

MATH 158 FINAL EXAM 20 DECEMBER 2016

MATH 158 FINAL EXAM 20 DECEMBER 2016 MATH 158 FINAL EXAM 20 DECEMBER 2016 Name : The exam is double-sided. Make sure to read both sides of each page. The time limit is three hours. No calculators are permitted. You are permitted one page

More information

Lecture 4 : Introduction to Low-density Parity-check Codes

Lecture 4 : Introduction to Low-density Parity-check Codes Lecture 4 : Introduction to Low-density Parity-check Codes LDPC codes are a class of linear block codes with implementable decoders, which provide near-capacity performance. History: 1. LDPC codes were

More information

Lecture 1: Perfect Secrecy and Statistical Authentication. 2 Introduction - Historical vs Modern Cryptography

Lecture 1: Perfect Secrecy and Statistical Authentication. 2 Introduction - Historical vs Modern Cryptography CS 7880 Graduate Cryptography September 10, 2015 Lecture 1: Perfect Secrecy and Statistical Authentication Lecturer: Daniel Wichs Scribe: Matthew Dippel 1 Topic Covered Definition of perfect secrecy One-time

More information

8.1 Principles of Public-Key Cryptosystems

8.1 Principles of Public-Key Cryptosystems Public-key cryptography is a radical departure from all that has gone before. Right up to modern times all cryptographic systems have been based on the elementary tools of substitution and permutation.

More information

Low-density parity-check (LDPC) codes

Low-density parity-check (LDPC) codes Low-density parity-check (LDPC) codes Performance similar to turbo codes Do not require long interleaver to achieve good performance Better block error performance Error floor occurs at lower BER Decoding

More information

} has dimension = k rank A > 0 over F. For any vector b!

} has dimension = k rank A > 0 over F. For any vector b! FINAL EXAM Math 115B, UCSB, Winter 2009 - SOLUTIONS Due in SH6518 or as an email attachment at 12:00pm, March 16, 2009. You are to work on your own, and may only consult your notes, text and the class

More information

On the minimum distance of LDPC codes based on repetition codes and permutation matrices 1

On the minimum distance of LDPC codes based on repetition codes and permutation matrices 1 Fifteenth International Workshop on Algebraic and Combinatorial Coding Theory June 18-24, 216, Albena, Bulgaria pp. 168 173 On the minimum distance of LDPC codes based on repetition codes and permutation

More information

Public Key 9/17/2018. Symmetric Cryptography Review. Symmetric Cryptography: Shortcomings (1) Symmetric Cryptography: Analogy

Public Key 9/17/2018. Symmetric Cryptography Review. Symmetric Cryptography: Shortcomings (1) Symmetric Cryptography: Analogy Symmetric Cryptography Review Alice Bob Public Key x e K (x) y d K (y) x K K Instructor: Dr. Wei (Lisa) Li Department of Computer Science, GSU Two properties of symmetric (secret-key) crypto-systems: The

More information

Logic gates. Quantum logic gates. α β 0 1 X = 1 0. Quantum NOT gate (X gate) Classical NOT gate NOT A. Matrix form representation

Logic gates. Quantum logic gates. α β 0 1 X = 1 0. Quantum NOT gate (X gate) Classical NOT gate NOT A. Matrix form representation Quantum logic gates Logic gates Classical NOT gate Quantum NOT gate (X gate) A NOT A α 0 + β 1 X α 1 + β 0 A N O T A 0 1 1 0 Matrix form representation 0 1 X = 1 0 The only non-trivial single bit gate

More information

One can use elliptic curves to factor integers, although probably not RSA moduli.

One can use elliptic curves to factor integers, although probably not RSA moduli. Elliptic Curves Elliptic curves are groups created by defining a binary operation (addition) on the points of the graph of certain polynomial equations in two variables. These groups have several properties

More information

1 Recommended Reading 1. 2 Public Key/Private Key Cryptography Overview RSA Algorithm... 2

1 Recommended Reading 1. 2 Public Key/Private Key Cryptography Overview RSA Algorithm... 2 Contents 1 Recommended Reading 1 2 Public Key/Private Key Cryptography 1 2.1 Overview............................................. 1 2.2 RSA Algorithm.......................................... 2 3 A Number

More information

Cryptographie basée sur les codes correcteurs d erreurs et arithmétique

Cryptographie basée sur les codes correcteurs d erreurs et arithmétique with Cryptographie basée sur les correcteurs d erreurs et arithmétique with with Laboratoire Hubert Curien, UMR CNRS 5516, Bâtiment F 18 rue du professeur Benoît Lauras 42000 Saint-Etienne France pierre.louis.cayrel@univ-st-etienne.fr

More information

Discrete Logarithm Problem

Discrete Logarithm Problem Discrete Logarithm Problem Finite Fields The finite field GF(q) exists iff q = p e for some prime p. Example: GF(9) GF(9) = {a + bi a, b Z 3, i 2 = i + 1} = {0, 1, 2, i, 1+i, 2+i, 2i, 1+2i, 2+2i} Addition:

More information

An Introduction. Dr Nick Papanikolaou. Seminar on The Future of Cryptography The British Computer Society 17 September 2009

An Introduction. Dr Nick Papanikolaou. Seminar on The Future of Cryptography The British Computer Society 17 September 2009 An Dr Nick Papanikolaou Research Fellow, e-security Group International Digital Laboratory University of Warwick http://go.warwick.ac.uk/nikos Seminar on The Future of Cryptography The British Computer

More information

Practical Polar Code Construction Using Generalised Generator Matrices

Practical Polar Code Construction Using Generalised Generator Matrices Practical Polar Code Construction Using Generalised Generator Matrices Berksan Serbetci and Ali E. Pusane Department of Electrical and Electronics Engineering Bogazici University Istanbul, Turkey E-mail:

More information

Introduction to Modern Cryptography. Benny Chor

Introduction to Modern Cryptography. Benny Chor Introduction to Modern Cryptography Benny Chor RSA: Review and Properties Factoring Algorithms Trapdoor One Way Functions PKC Based on Discrete Logs (Elgamal) Signature Schemes Lecture 8 Tel-Aviv University

More information

Improving the efficiency of Generalized Birthday Attacks against certain structured cryptosystems

Improving the efficiency of Generalized Birthday Attacks against certain structured cryptosystems Improving the efficiency of Generalized Birthday Attacks against certain structured cryptosystems Robert Niebuhr 1, Pierre-Louis Cayrel 2, and Johannes Buchmann 1,2 1 Technische Universität Darmstadt Fachbereich

More information

Lecture 12. Block Diagram

Lecture 12. Block Diagram Lecture 12 Goals Be able to encode using a linear block code Be able to decode a linear block code received over a binary symmetric channel or an additive white Gaussian channel XII-1 Block Diagram Data

More information

PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY

PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY BURTON ROSENBERG UNIVERSITY OF MIAMI Contents 1. Perfect Secrecy 1 1.1. A Perfectly Secret Cipher 2 1.2. Odds Ratio and Bias 3 1.3. Conditions for Perfect

More information

CHAPTER 3 LOW DENSITY PARITY CHECK CODES

CHAPTER 3 LOW DENSITY PARITY CHECK CODES 62 CHAPTER 3 LOW DENSITY PARITY CHECK CODES 3. INTRODUCTION LDPC codes were first presented by Gallager in 962 [] and in 996, MacKay and Neal re-discovered LDPC codes.they proved that these codes approach

More information

Code-based Cryptography

Code-based Cryptography a Hands-On Introduction Daniel Loebenberger Ηράκλειο, September 27, 2018 Post-Quantum Cryptography Various flavours: Lattice-based cryptography Hash-based cryptography Code-based

More information

PERFECTLY secure key agreement has been studied recently

PERFECTLY secure key agreement has been studied recently IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 2, MARCH 1999 499 Unconditionally Secure Key Agreement the Intrinsic Conditional Information Ueli M. Maurer, Senior Member, IEEE, Stefan Wolf Abstract

More information

CPSC 467: Cryptography and Computer Security

CPSC 467: Cryptography and Computer Security CPSC 467: Cryptography and Computer Security Michael J. Fischer Lecture 11 October 7, 2015 CPSC 467, Lecture 11 1/37 Digital Signature Algorithms Signatures from commutative cryptosystems Signatures from

More information

Lecture 19: Public-key Cryptography (Diffie-Hellman Key Exchange & ElGamal Encryption) Public-key Cryptography

Lecture 19: Public-key Cryptography (Diffie-Hellman Key Exchange & ElGamal Encryption) Public-key Cryptography Lecture 19: (Diffie-Hellman Key Exchange & ElGamal Encryption) Recall In private-key cryptography the secret-key sk is always established ahead of time The secrecy of the private-key cryptography relies

More information

Making Error Correcting Codes Work for Flash Memory

Making Error Correcting Codes Work for Flash Memory Making Error Correcting Codes Work for Flash Memory Part I: Primer on ECC, basics of BCH and LDPC codes Lara Dolecek Laboratory for Robust Information Systems (LORIS) Center on Development of Emerging

More information

CODING AND CRYPTOLOGY III CRYPTOLOGY EXERCISES. The questions with a * are extension questions, and will not be included in the assignment.

CODING AND CRYPTOLOGY III CRYPTOLOGY EXERCISES. The questions with a * are extension questions, and will not be included in the assignment. CODING AND CRYPTOLOGY III CRYPTOLOGY EXERCISES A selection of the following questions will be chosen by the lecturer to form the Cryptology Assignment. The Cryptology Assignment is due by 5pm Sunday 1

More information

Computers and Mathematics with Applications

Computers and Mathematics with Applications Computers and Mathematics with Applications 61 (2011) 1261 1265 Contents lists available at ScienceDirect Computers and Mathematics with Applications journal homepage: wwwelseviercom/locate/camwa Cryptanalysis

More information

Constructions of Nonbinary Quasi-Cyclic LDPC Codes: A Finite Field Approach

Constructions of Nonbinary Quasi-Cyclic LDPC Codes: A Finite Field Approach Constructions of Nonbinary Quasi-Cyclic LDPC Codes: A Finite Field Approach Shu Lin, Shumei Song, Lan Lan, Lingqi Zeng and Ying Y Tai Department of Electrical & Computer Engineering University of California,

More information

Cryptographic applications of codes in rank metric

Cryptographic applications of codes in rank metric Cryptographic applications of codes in rank metric Pierre Loidreau CELAr and Université de Rennes Pierre.Loidreau@m4x.org June 16th, 2009 Introduction Rank metric and cryptography Gabidulin codes and linearized

More information

McEliece and Niederreiter Cryptosystems That Resist Quantum Fourier Sampling Attacks

McEliece and Niederreiter Cryptosystems That Resist Quantum Fourier Sampling Attacks McEliece and Niederreiter Cryptosystems That Resist Quantum Fourier Sampling Attacks Hang Dinh Indiana Uniersity South Bend joint work with Cristopher Moore Uniersity of New Mexico Alexander Russell Uniersity

More information

Analysis of Hidden Field Equations Cryptosystem over Odd-Characteristic Fields

Analysis of Hidden Field Equations Cryptosystem over Odd-Characteristic Fields Nonlinear Phenomena in Complex Systems, vol. 17, no. 3 (2014), pp. 278-283 Analysis of Hidden Field Equations Cryptosystem over Odd-Characteristic Fields N. G. Kuzmina and E. B. Makhovenko Saint-Petersburg

More information

Notes for Lecture 17

Notes for Lecture 17 U.C. Berkeley CS276: Cryptography Handout N17 Luca Trevisan March 17, 2009 Notes for Lecture 17 Scribed by Matt Finifter, posted April 8, 2009 Summary Today we begin to talk about public-key cryptography,

More information

Coset Decomposition Method for Decoding Linear Codes

Coset Decomposition Method for Decoding Linear Codes International Journal of Algebra, Vol. 5, 2011, no. 28, 1395-1404 Coset Decomposition Method for Decoding Linear Codes Mohamed Sayed Faculty of Computer Studies Arab Open University P.O. Box: 830 Ardeya

More information

The Elliptic Curve in https

The Elliptic Curve in https The Elliptic Curve in https Marco Streng Universiteit Leiden 25 November 2014 Marco Streng (Universiteit Leiden) The Elliptic Curve in https 25-11-2014 1 The s in https:// HyperText Transfer Protocol

More information

Network Security Technology Spring, 2018 Tutorial 3, Week 4 (March 23) Due Date: March 30

Network Security Technology Spring, 2018 Tutorial 3, Week 4 (March 23) Due Date: March 30 Network Security Technology Spring, 2018 Tutorial 3, Week 4 (March 23) LIU Zhen Due Date: March 30 Questions: 1. RSA (20 Points) Assume that we use RSA with the prime numbers p = 17 and q = 23. (a) Calculate

More information

Channel Coding for Secure Transmissions

Channel Coding for Secure Transmissions Channel Coding for Secure Transmissions March 27, 2017 1 / 51 McEliece Cryptosystem Coding Approach: Noiseless Main Channel Coding Approach: Noisy Main Channel 2 / 51 Outline We present an overiew of linear

More information

Week 7 An Application to Cryptography

Week 7 An Application to Cryptography SECTION 9. EULER S GENERALIZATION OF FERMAT S THEOREM 55 Week 7 An Application to Cryptography Cryptography the study of the design and analysis of mathematical techniques that ensure secure communications

More information

The New Multi-Edge Metric-Constrained PEG/QC-PEG Algorithms for Designing the Binary LDPC Codes With Better Cycle-Structures

The New Multi-Edge Metric-Constrained PEG/QC-PEG Algorithms for Designing the Binary LDPC Codes With Better Cycle-Structures HE et al.: THE MM-PEGA/MM-QC-PEGA DESIGN THE LDPC CODES WITH BETTER CYCLE-STRUCTURES 1 arxiv:1605.05123v1 [cs.it] 17 May 2016 The New Multi-Edge Metric-Constrained PEG/QC-PEG Algorithms for Designing the

More information

Lattices. A Lattice is a discrete subgroup of the additive group of n-dimensional space R n.

Lattices. A Lattice is a discrete subgroup of the additive group of n-dimensional space R n. Lattices A Lattice is a discrete subgroup of the additive group of n-dimensional space R n. Lattices have many uses in cryptography. They may be used to define cryptosystems and to break other ciphers.

More information

CPSC 467b: Cryptography and Computer Security

CPSC 467b: Cryptography and Computer Security Outline Authentication CPSC 467b: Cryptography and Computer Security Lecture 18 Michael J. Fischer Department of Computer Science Yale University March 29, 2010 Michael J. Fischer CPSC 467b, Lecture 18

More information

The failure of McEliece PKC based on Reed-Muller codes.

The failure of McEliece PKC based on Reed-Muller codes. The failure of McEliece PKC based on Reed-Muller codes. May 8, 2013 I. V. Chizhov 1, M. A. Borodin 2 1 Lomonosov Moscow State University. email: ivchizhov@gmail.com, ichizhov@cs.msu.ru 2 Lomonosov Moscow

More information

Robust Network Codes for Unicast Connections: A Case Study

Robust Network Codes for Unicast Connections: A Case Study Robust Network Codes for Unicast Connections: A Case Study Salim Y. El Rouayheb, Alex Sprintson, and Costas Georghiades Department of Electrical and Computer Engineering Texas A&M University College Station,

More information

Number theory (Chapter 4)

Number theory (Chapter 4) EECS 203 Spring 2016 Lecture 12 Page 1 of 8 Number theory (Chapter 4) Review Compute 6 11 mod 13 in an efficient way What is the prime factorization of 100? 138? What is gcd(100, 138)? What is lcm(100,138)?

More information

Hexi McEliece Public Key Cryptosystem

Hexi McEliece Public Key Cryptosystem Appl Math Inf Sci 8, No 5, 2595-2603 (2014) 2595 Applied Mathematics & Information Sciences An International Journal http://dxdoiorg/1012785/amis/080559 Hexi McEliece Public Key Cryptosystem K Ilanthenral

More information

Definition: For a positive integer n, if 0<a<n and gcd(a,n)=1, a is relatively prime to n. Ahmet Burak Can Hacettepe University

Definition: For a positive integer n, if 0<a<n and gcd(a,n)=1, a is relatively prime to n. Ahmet Burak Can Hacettepe University Number Theory, Public Key Cryptography, RSA Ahmet Burak Can Hacettepe University abc@hacettepe.edu.tr The Euler Phi Function For a positive integer n, if 0

More information

Chapter 7 Reed Solomon Codes and Binary Transmission

Chapter 7 Reed Solomon Codes and Binary Transmission Chapter 7 Reed Solomon Codes and Binary Transmission 7.1 Introduction Reed Solomon codes named after Reed and Solomon [9] following their publication in 1960 have been used together with hard decision

More information

Lattice Reduction Attack on the Knapsack

Lattice Reduction Attack on the Knapsack Lattice Reduction Attack on the Knapsack Mark Stamp 1 Merkle Hellman Knapsack Every private in the French army carries a Field Marshal wand in his knapsack. Napoleon Bonaparte The Merkle Hellman knapsack

More information

Mathematics of Public Key Cryptography

Mathematics of Public Key Cryptography Mathematics of Public Key Cryptography Eric Baxter April 12, 2014 Overview Brief review of public-key cryptography Mathematics behind public-key cryptography algorithms What is Public-Key Cryptography?

More information

Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes

Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes Xiaojie Zhang and Paul H. Siegel University of California, San Diego, La Jolla, CA 9093, U Email:{ericzhang, psiegel}@ucsd.edu

More information

MATH3302. Coding and Cryptography. Coding Theory

MATH3302. Coding and Cryptography. Coding Theory MATH3302 Coding and Cryptography Coding Theory 2010 Contents 1 Introduction to coding theory 2 1.1 Introduction.......................................... 2 1.2 Basic definitions and assumptions..............................

More information

Breaking an encryption scheme based on chaotic Baker map

Breaking an encryption scheme based on chaotic Baker map Breaking an encryption scheme based on chaotic Baker map Gonzalo Alvarez a, and Shujun Li b a Instituto de Física Aplicada, Consejo Superior de Investigaciones Científicas, Serrano 144 28006 Madrid, Spain

More information

Quantum-resistant cryptography

Quantum-resistant cryptography Quantum-resistant cryptography Background: In quantum computers, states are represented as vectors in a Hilbert space. Quantum gates act on the space and allow us to manipulate quantum states with combination

More information