Cryptographic Engineering


Cryptographic Engineering — Clément PERNET, M2 Cyber Security, UFR-IM2AG, Univ. Grenoble-Alpes / ENSIMAG, Grenoble INP

Outline: Coding Theory Introduction — Linear Codes — Reed-Solomon codes — Application: McEliece cryptosystem


Errors everywhere



Error models
Communication channels and their typical error causes:
- Radio transmission: electromagnetic interferences
- Ethernet, DSL: electromagnetic interferences
- CD/DVD Audio/Video/ROM: scratches, dust
- RAM: cosmic radiations
- HDD: magnetic field, crash
Transmission chain: Message → Sender → encoding → code word → channel (Error) → received word → decoding → Message → Receiver


Generalities on coding theory
Goals:
- Detect errors: then require retransmission
- Correct errors: when no interaction is possible
Tool: adding redundancy (an integrity certificate)
Example (NATO phonetic alphabet): A → Alfa, B → Bravo, C → Charlie, D → Delta...
Alpha Bravo India Tango Tango Echo Delta India Oscar Uniform Sierra!
Two categories of codes:
- stream codes: online processing of the stream of information
- block codes: cutting the information into blocks and applying the same treatment to each block


Generalities
A code is a subset C ⊆ E of a set of possible words. Often, E is built from an alphabet Σ: E = Σ^n.
Parity check: (x_1, x_2, x_3) → (x_1, x_2, x_3, s) with s = Σ_{i=1}^{3} x_i mod 2, so that Σ_{i=1}^{3} x_i + s = 0 mod 2. A received word violating this relation reveals an error: error detected.
Repetition code: Say that again? a → aaa; received aab → decoded aaa → a.
E : Σ → Σ^r, x → (x, ..., x) (r times), and C = Im(E) ⊆ Σ^r.
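To make the two toy codes concrete, here is a minimal Python sketch (the helper names are ours, not from the slides): a parity bit that detects one error, and an r-repetition code decoded by majority vote.

```python
def parity_encode(bits):
    """Append a parity bit s so that sum(x_i) + s == 0 (mod 2)."""
    return bits + [sum(bits) % 2]

def parity_check(word):
    """Detects (but cannot locate) any odd number of bit flips."""
    return sum(word) % 2 == 0

def repetition_encode(symbol, r=3):
    """E : Sigma -> Sigma^r, x -> (x, ..., x)."""
    return [symbol] * r

def repetition_decode(word):
    """Majority vote: corrects up to floor((r-1)/2) errors."""
    return max(set(word), key=word.count)
```

For instance, the received word aab decodes back to a, matching the slide's example.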

Types of channels
Symmetric binary channel with error rate p: a bit is transmitted correctly with probability 1 − p and flipped with probability p (0 → 1 and 1 → 0 each with probability p).
Typical raw error rates:
- Floppy: 10^-9
- CD: 10^-5 (7 kB every 700 MB)
- Optic fibre: 10^-9
- Satellite: 10^-6
- DSL: 10^-3 to 10^-9
- LAN: 10^-12

Capacity of a channel
Input distribution: random variable X with values in V: Pr[X = v_i] = p_i. Output distribution: random variable Y. Transition probabilities: p_{j|i} = Pr[Y = v_j | X = v_i].
Mutual information:
I(X, Y) = Σ_{x,y} p(x, y) (log p(x, y) − log p(x) − log p(y)) = H(X) − H(X|Y) = H(Y) − H(Y|X).
It measures how much information is shared between X and Y, i.e. how much information of the source has been transmitted.
Definition (Channel capacity): C = max over all probability distributions for X of I(X, Y).


Examples of channel capacities
- If H(Y) = H(Y|X): the input does not impact the output ⇒ C = 0.
- If H(Y|X) = 0: no uncertainty on the output for a given input ⇒ error-free channel, C = H(X).
Exercise (Capacity of the symmetric binary channel)
Let p_X = Pr[X = 0], so Pr[X = 1] = 1 − p_X. By symmetry, p_{j|i} = p is constant for j ≠ i, and p_{i|i} = 1 − p.
By symmetry again, H(Y|X) = H(Y|X = 0) = H(Y|X = 1), independent of the probability distribution of X.
Hence I(X, Y) = H(Y) − H(Y|X) is maximal when H(Y) is maximal, i.e. when Pr[Y = 0] = Pr[Y = 1] = 1/2. Since
Pr[Y = 0] = p_X(1 − p) + (1 − p_X)p and Pr[Y = 1] = p_X p + (1 − p)(1 − p_X),
this happens for p_X = 1/2. Then
C = H(Y) − H(Y|X) = 1 + p log p + (1 − p) log(1 − p).
For p = 1/2, C = 0; C > 0 otherwise.
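The closed-form capacity above is easy to evaluate numerically; a small sketch (the function name is ours):

```python
from math import log2

def bsc_capacity(p):
    """C = 1 + p*log2(p) + (1-p)*log2(1-p) for the binary symmetric
    channel with error rate p (with the convention 0*log 0 = 0)."""
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * log2(p) + (1 - p) * log2(1 - p)
```

As expected, the capacity vanishes at p = 1/2 (the output is then independent of the input) and reaches 1 bit per use on a noiseless channel.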

Second theorem of Shannon
Theorem (Shannon): Over a binary channel of capacity C, for any ε > 0, there is an (n, k) code of rate R = k/n that can decode with probability of error < ε if and only if 0 ≤ R < C.

Outline: Coding Theory Introduction — Linear Codes — Reed-Solomon codes — Application: McEliece cryptosystem


Linear Codes
Let E = V^n over a finite field V. A linear code C is a subspace of E.
- length: n
- dimension: k = dim(C)
- rate (of information): k/n
- encoding function: E : V^k → V^n s.t. C = Im(E) ⊆ V^n
Example:
- Parity code: k = n − 1; detects 1 error.
- r-repetition code: k = n/r; detects r − 1 errors, corrects ⌊(r−1)/2⌋ errors.


Distance of a code
Hamming weight: w_H(x) = #{i : x_i ≠ 0}.
Hamming distance: d_H(x, y) = w_H(x − y) = #{i : x_i ≠ y_i}.
Minimal distance of a code: δ = min_{x≠y ∈ C} d_H(x, y). In a linear code: δ = min_{x ∈ C\{0}} w_H(x).
C is t-corrector if every word has at most one codeword within distance t:
∀x ∈ E, #{c ∈ C : d_H(x, c) ≤ t} ≤ 1, i.e. ∀c_1, c_2 ∈ C, c_1 ≠ c_2 ⇒ d_H(c_1, c_2) > 2t.
Hence the largest such t is t = ⌊(δ − 1)/2⌋.
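These definitions translate directly into code; a sketch (names ours) that also brute-forces δ for the small (4, 3) binary parity code:

```python
from itertools import product

def w_H(x):
    """Hamming weight: number of non-zero coordinates."""
    return sum(1 for xi in x if xi != 0)

def d_H(x, y):
    """Hamming distance: number of coordinates where x and y differ."""
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

def parity_codeword(m):
    return list(m) + [sum(m) % 2]

# For a linear code, delta is the minimum weight of a non-zero codeword.
delta = min(w_H(parity_codeword(m))
            for m in product([0, 1], repeat=3) if any(m))
```

Here δ = 2, so t = ⌊(δ − 1)/2⌋ = 0: the parity code detects one error but corrects none.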


Perfect codes
Definition: A code is perfect if any detected error can be corrected.
Example: 4-repetition is not perfect; 3-repetition is perfect.
Property: A code is perfect if the balls of radius t around the codewords form a partition of the ambient space.


Generator matrix and parity check matrix
Generator matrix: the matrix of the encoding function (depends on a choice of basis): E : x^T → x^T G.
Systematic form: G = [I_k | G'].
Parity check matrix:
1. H ∈ K^{(n−k)×n} s.t. ker(H) = C: c ∈ C ⇔ Hc = 0
2. its rows form a basis of ker(G^T): HG^T = 0
Exercise: Find G and H for the binary parity check code and for the k-repetition code.
Parity code: G_par = [I_{n−1} | 1] (identity with an appended column of ones), H_par = [1 1 ... 1].
Repetition code: G_rep = H_par and H_rep = G_par (the two codes are dual).
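The exercise's matrices can be checked mechanically; a sketch (helper names ours) building G_par and H_par for n = 4 and verifying H·G^T = 0 over F_2:

```python
def G_parity(n):
    """(n-1) x n generator of the binary parity code: [I_{n-1} | 1]."""
    return [[1 if i == j else 0 for j in range(n - 1)] + [1]
            for i in range(n - 1)]

def H_parity(n):
    """1 x n parity check matrix: a single row of ones."""
    return [[1] * n]

def matmul2(A, B):
    """Matrix product over F_2."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

G = G_parity(4)
H = H_parity(4)
Gt = [list(col) for col in zip(*G)]
assert matmul2(H, Gt) == [[0, 0, 0]]  # every row of G is a codeword
```

By duality, the repetition code passes the same check with the roles of the two matrices swapped: G_parity(n) times the all-ones column is zero as well.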


Role of the parity check matrix
Certificate for detecting errors: c ∈ C ⇔ Hc = 0.
Syndrome: s_x = Hx = H(c + e) = He (it depends only on the error).
A first correction algorithm:
- pre-compute all syndromes s_e = He for w_H(e) ≤ t in a table S
- for a received x, compute s = Hx; if s = 0, return x; otherwise look up s in the table S and return the corresponding codeword c = x − e.
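A sketch of this table-based decoder for the binary 3-repetition code (t = 1; the matrix and names are our choices):

```python
# Parity check matrix of the binary 3-repetition code: ker H = {000, 111}.
H = [[1, 1, 0],
     [1, 0, 1]]

def syndrome(x):
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

# Pre-compute the syndrome s_e = He of every error of weight <= t = 1.
table = {}
for i in range(3):
    e = [0, 0, 0]
    e[i] = 1
    table[syndrome(e)] = e

def decode(x):
    s = syndrome(x)
    if s == (0, 0):          # zero syndrome: x is a codeword
        return list(x)
    e = table[s]             # look up the error with this syndrome
    return [(xi - ei) % 2 for xi, ei in zip(x, e)]
```

For instance the received word 101 has a non-zero syndrome, is matched to the error 010, and decodes to the codeword 111.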


Hamming codes
Let
H = [ 1 0 1 0 1 0 1
      0 1 1 0 0 1 1
      0 0 0 1 1 1 1 ]
- Parameters of the corresponding code? (n, k) = (7, 4)
- Generator matrix?
G = [ 1 1 1 0 0 0 0
      1 0 0 1 1 0 0
      0 1 0 1 0 1 0
      1 1 0 1 0 0 1 ]
- Minimal distance? δ ≤ 3 (columns 1, 2, 3 of H sum to zero). If δ = 1, some column H_i = 0; if δ = 2, two columns H_i = H_j would be equal; neither happens, so δ = 3.
- Is it a perfect code? δ = 3 ⇒ t = 1 corrector. |C| = 2^k, and each ball of radius 1 contains 1 + 7 = 8 elements: 2^k (1 + 7) = 16 · 8 = 2^7 = |K|^n ⇒ perfect.
Generalization: for any l, the Hamming code (2^l − 1, 2^l − 1 − l) is a 1-corrector, perfect code. Example: Minitel, ECC memory: l = 7.
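With this particular H, column i (1-indexed) is the binary expansion of i, so the syndrome of a single error directly names the position of the flipped bit. A decoding sketch (function names ours):

```python
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(x):
    return [sum(h * xi for h, xi in zip(row, x)) % 2 for row in H]

def correct(x):
    """Correct up to one bit flip; the syndrome reads off the position."""
    s = syndrome(x)
    pos = s[0] + 2 * s[1] + 4 * s[2]   # 0 means: no error detected
    x = list(x)
    if pos:
        x[pos - 1] ^= 1
    return x
```

For instance, flipping the 5th bit of the codeword (1,1,1,0,0,0,0) yields syndrome (1,0,1), i.e. position 1 + 4 = 5.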


Some bounds
Let C be an (n, k, δ) code over a field F_q with q elements. k and δ cannot be simultaneously large for a given n.
Sphere packing: q^k · Σ_{i=0}^{t} (n choose i) (q − 1)^i ≤ q^n, with t = ⌊(δ − 1)/2⌋.
Singleton bound: δ ≤ n − k + 1.
Sketch of proof: Let H be the (n − k) × n parity check matrix. δ is the smallest number of linearly dependent columns of H, and any n − k + 1 = rank(H) + 1 columns are always linearly dependent.
Next: how to build codes correcting up to ⌊(n − k)/2⌋ errors.

Outline: Coding Theory Introduction — Linear Codes — Reed-Solomon codes — Application: McEliece cryptosystem


Reed-Solomon codes
Definition (Reed-Solomon codes): Let K be a finite field, and x_1, ..., x_n ∈ K distinct elements. The Reed-Solomon code of length n and dimension k is defined by
C(n, k) = {(f(x_1), ..., f(x_n)) : f ∈ K[X], deg f < k}
Example: (n, k) = (5, 3), f = x^2 + 2x + 1 over Z/31Z.
(1, 2, 1, 0, 0) →Eval→ (f(1), f(5), f(8), f(10), f(12)) = (4, 17, 5, 7, 17)
(4, 17, 5, 7, 17) →Interp→ (1, 2, 1, 0, 0), i.e. x^2 + 2x + 1
(4, 17, 13, 7, 17) →Interp→ (12, 8, 11, 10, 1), i.e. x^4 + 10x^3 + 11x^2 + 8x + 12 (an erroneous word interpolates to a polynomial of degree ≥ k)
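An evaluation/interpolation sketch in the same setting (field Z/31Z, f = x² + 2x + 1, five points); the helpers are ours and the values are computed by the code rather than copied from the slide:

```python
p = 31
xs = [1, 5, 8, 10, 12]         # n = 5 distinct evaluation points
f = [1, 2, 1]                  # f(X) = 1 + 2X + X^2, deg f < k = 3

def ev(poly, x):
    """Evaluate a coefficient list (low degree first) at x, mod p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(poly)) % p

codeword = [ev(f, x) for x in xs]

def interp(points):
    """Lagrange interpolation over Z/pZ, returned as an evaluation oracle."""
    def at(x):
        total = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (x - xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, p - 2, p)) % p
        return total
    return at
```

Since deg f < k ≤ n, interpolating the five pairs recovers f exactly, so the oracle agrees with f at every point of the field.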


Minimal distance of Reed-Solomon codes
Property: δ = n − k + 1 (Maximum Distance Separable codes).
Proof. The Singleton bound gives δ ≤ n − k + 1.
Conversely, let f, g ∈ C with deg f, deg g < k. If f(x_i) ≠ g(x_i) for only d < n − k + 1 values x_i, then f(x_j) − g(x_j) = 0 for at least n − d > k − 1 values x_j. Since deg(f − g) < k, this forces f = g.
⇒ Reed-Solomon codes correct up to ⌊(n − k)/2⌋ errors.

Rational fraction reconstruction
Problem (RFR: Rational Fraction Reconstruction): Given A, B ∈ K[X] with deg B < deg A = n, find f, g ∈ K[X] with deg f ≤ d_F, deg g ≤ n − d_F − 1 and f = gB mod A.
Theorem: Let (f_0 = A, f_1 = B, ..., f_l) be the sequence of remainders of the extended Euclidean algorithm applied to (A, B), and u_i, v_i the coefficients s.t. f_i = u_i f_0 + v_i f_1. Then, at the iteration j s.t. deg f_j ≤ d_F < deg f_{j−1}:
1. (f_j, v_j) is a solution of problem RFR;
2. it is minimal: any other solution (f, g) can be written f = q f_j, g = q v_j for some q ∈ K[X].
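A self-contained sketch of this extended-Euclidean reconstruction over a small prime field (p = 97; the polynomial helpers, coefficient-list conventions and the instance in the usage below are our own choices):

```python
p = 97  # illustrative prime field F_p

def trim(a):
    """Normalize a coefficient list (low degree first) modulo p."""
    a = [c % p for c in a]
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def deg(a):
    a = trim(a)
    return -1 if a == [0] else len(a) - 1

def add(a, b):
    n = max(len(a), len(b))
    return trim([(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                 for i in range(n)])

def sub(a, b):
    return add(a, [-c for c in b])

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return trim(out)

def divmod_poly(a, b):
    a, b = trim(a), trim(b)
    q, inv_lead = [0], pow(b[-1], p - 2, p)
    while a != [0] and deg(a) >= deg(b):
        t = [0] * (deg(a) - deg(b)) + [a[-1] * inv_lead % p]
        q = add(q, t)
        a = sub(a, mul(t, b))
    return q, a

def rfr(A, B, dF):
    """Extended Euclid on (A, B), stopped at the first remainder f_j with
    deg f_j <= dF; returns (f_j, v_j) where f_j = u_j*A + v_j*B."""
    r0, r1 = trim(A), trim(B)
    v0, v1 = [0], [1]
    while deg(r1) > dF:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        v0, v1 = v1, sub(v0, mul(q, v1))
    return r1, v1
```

For example, taking A = Π(X − i) of degree 6, f = X² + 1 and g = X + 3, building B = f·g⁻¹ mod A and calling rfr(A, B, 2) recovers the pair (f, g) up to a scalar, as the theorem predicts.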

Reed-Solomon decoding with the extended Euclidean algorithm
Berlekamp-Welch, using the extended Euclidean algorithm:
- Erroneous interpolant: P = Interp((x_i, y_i))
- Error locator polynomial: Λ = Π_{i : y_i is erroneous} (X − x_i)
- Find f with deg f ≤ d_F s.t. f and P match on n − t evaluations x_i:
Λf = ΛP mod Π_{i=1}^{n} (X − x_i)
and (Λf, Λ) is minimal ⇒ it is computed by the extended Euclidean algorithm as (f_j, v_j) ⇒ f = f_j / v_j.


Another decoding algorithm: syndrome based
From now on: K = F_q, n = q − 1, x_i = α^i where α is a primitive n-th root of unity. Then
E(f) = (f(α^0), f(α^1), f(α^2), ..., f(α^{n−1})) = DFT_α(f)
Linear recurring sequences: sequences (a_0, a_1, ..., a_n, ...) such that
∀j ≥ 0, a_{j+t} = Σ_{i=0}^{t−1} λ_i a_{i+j}
- generator polynomial: Λ(z) = z^t − Σ_{i=0}^{t−1} λ_i z^i
- minimal polynomial: the Λ(z) of minimal degree
- linear complexity of (a_i)_i: the degree t of the minimal polynomial Λ
Computing Λ_min: Berlekamp/Massey algorithm, from 2t consecutive elements, in O(t^2).
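A sketch of Berlekamp/Massey over a prime field (p = 97 is our illustrative choice), returning the minimal connection polynomial and the linear complexity:

```python
p = 97  # illustrative prime field F_p

def berlekamp_massey(s):
    """Return (C, L): the minimal polynomial C (coefficient list, C[0] = 1,
    low degree first) and the linear complexity L, such that
    sum_{i=0}^{L} C[i] * s[n-i] == 0 (mod p) for all L <= n < len(s)."""
    C, B = [1], [1]        # current and previous connection polynomials
    L, m, b = 0, 1, 1      # complexity, gap since last update, last discrepancy
    for n in range(len(s)):
        d = s[n] % p                           # discrepancy
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        T = list(C)
        coef = d * pow(b, p - 2, p) % p        # d / b
        if len(B) + m > len(C):
            C += [0] * (len(B) + m - len(C))
        for i, bi in enumerate(B):             # C <- C - (d/b) * z^m * B
            C[i + m] = (C[i + m] - coef * bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L
```

On 2t terms of a sequence of linear complexity t, this runs in O(t²); e.g. the Fibonacci sequence yields C = [1, −1, −1] mod p and L = 2.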


Blahut theorem
Theorem ([Blahut84], [Prony1795]): The DFT_α of a vector of weight t has linear complexity t.
Sketch of proof
Let v = e_i be a weight-1 vector. Then DFT_α(v) = Ev_{(α^0, α^1, ..., α^{n−1})}(X^i) = ((α^0)^i, (α^1)^i, ..., (α^{n−1})^i) is linearly generated by Λ(z) = z − α^i.
For v = Σ_{j=1}^{t} e_{i_j}, the sequence DFT_α(v) is generated by lcm_j(z − α^{i_j}) = Π_{j=1}^{t} (z − α^{i_j}).
Corollary: The roots of Λ localize the non-zero elements of v: the α^{i_j}. ⇒ error locator.


Syndrome decoding of Reed-Solomon codes
C = {(f(x_1), ..., f(x_n)) : deg f < k}
- Encoding (evaluation): the message m gives f(x) with deg f < k; y = Eval(f), y_i = f(x_i).
- Channel: an error e is added, z = y + e; interpolating z gives g = f + Interp(e).
- Decoding: the Berlekamp-Massey algorithm, run on the syndrome part of z, recovers the error locator and from it f.

Codes derived from Reed-Solomon codes
Generalized Reed-Solomon codes
C_GRS(n, k, v) = {(v_1 f(x_1), ..., v_n f(x_n)) : f ∈ K[X], deg f < k}
- same dimension and minimal distance ⇒ MDS
- allows to prove the existence of a dual GRS code on the same evaluation points

Codes derived from Reed-Solomon codes
Goppa codes
Motivation: allow arbitrary length n for a fixed field F_q (GRS codes require n ≤ q).
Idea: use a GRS code over F_{q^m} and restrict it to F_q.
Let K = F_q, K' = F_{q^m}, α ∈ (K')^n with distinct coordinates, f ∈ F_{q^m}[X] with deg f = t and mt < n, and v = (1/f(α_i))_i. Then:
- C_{K'} = C_GRS(n, k, v) over K', with parameters (n, k, t) and minimum distance ≥ t + 1
- C_Goppa = C_{K'} ∩ F_q^n
- Binary Goppa codes, with f square-free: minimum distance ≥ 2t + 1.

Outline: Coding Theory Introduction — Linear Codes — Reed-Solomon codes — Application: McEliece cryptosystem


A code based cryptosystem [McEliece '78]
Designing a one-way function with trapdoor: use the encoder of a linear code:
message × [G] = codeword
- Encoding is easy (a matrix-vector product)
- Decoding a random linear code is NP-complete
- Trapdoor: an efficient decoding algorithm when the code family is known
⇒ requires a family F of codes, indistinguishable from random linear codes, with a fast decoding algorithm.

McEliece Cryptosystem
Key generation:
- Select an (n, k) binary linear code C ∈ F correcting t errors and having an efficient decoding algorithm A_C
- Form G ∈ F_q^{k×n}, a generator matrix for C
- Select a random k × k non-singular matrix S
- Select a random n × n permutation matrix P
- Ĝ = SGP
Public key: (Ĝ, t). Private key: (S, G, P) and A_C.

McEliece Cryptosystem
Encrypt: E(m) = mĜ + e = mSGP + e = y, where e is an error vector of Hamming weight at most t.
Decrypt:
1. y' = yP^{−1} = mSG + eP^{−1}
2. m' = A_C(y') = mS
3. m = m'S^{−1}


Parameters for McEliece in practice

(n, k, d)            | Code family        | key size | Security | Attack
(256, 128, 129)      | Gen. Reed-Solomon  | 67 kB    | 2^95     | [SS92]
                     | subcodes of GRS    |          |          | [Wie10]
(1024, 176, 128)     | Reed-Muller codes  | 22.5 kB  | 2^72     | [MS07, CB13]
(2048, 232, 256)     | Reed-Muller codes  | 59.4 kB  | 2^93     | [MS07, CB13]
(171, 109, 61)_128   | Alg.-Geom. codes   | 16 kB    | 2^66     | [FM08, CMP14]
(1024, 524, 101)_2   | Goppa codes        | 67 kB    | 2^62     |
(2048, 1608, 48)_2   | Goppa codes        | 412 kB   | 2^96     |

Advantages of the McEliece cryptosystem
Security is based on two assumptions:
- decoding a random linear code is hard (NP-completeness reduction)
- the generator matrix of a Goppa code looks random (indistinguishability)
Pros:
- faster encoding/decoding algorithms than RSA, ECC (for a given security parameter)
- post-quantum security: still robust against quantum computer attacks
Cons:
- harder to use for signatures (non-deterministic encoding)
- large key size