Sparse Polynomial Interpolation and Berlekamp/Massey algorithms that correct Outlier Errors in Input Values


Sparse Polynomial Interpolation and Berlekamp/Massey algorithms that correct Outlier Errors in Input Values. Clément PERNET, joint work with Matthew T. COMER and Erich L. KALTOFEN. LIG/INRIA-MOAIS, Grenoble Université, France; North Carolina State University, USA. ISSAC'12, Grenoble, France, July 23rd, 2012.

Outline
- Berlekamp/Massey algorithm with errors
- Bounds on the decoding capacity
- Decoding algorithms
- Sparse Polynomial Interpolation with errors
- Relations to Reed-Solomon decoding

Introduction: polynomial interpolation with errors

Dense case: Reed-Solomon codes / CRT codes; the number of evaluation points is made adaptive on error impact and degree [Khonji et al. 10].
Sparse case (present work): based on Ben-Or & Tiwari's interpolation algorithm, itself based on the Berlekamp/Massey algorithm; we develop a Berlekamp/Massey algorithm with errors.

Preliminaries: linear recurring sequences

A sequence (a_0, a_1, ..., a_n, ...) such that a_{j+t} = Σ_{i=0}^{t-1} λ_i a_{i+j} for all j ≥ 0.
- Generating polynomial: Λ(z) = z^t - Σ_{i=0}^{t-1} λ_i z^i.
- Minimal generating polynomial: the Λ(z) of minimal degree.
- Linear complexity of (a_i)_i: the minimal degree of Λ.
- Hamming weight: weight(x) = #{i : x_i ≠ 0}.
- Hamming distance: d_H(x, y) = weight(x - y).

Berlekamp/Massey algorithm

Input: (a_0, ..., a_{n-1}), a sequence of field elements.
Result: Λ(z) = Σ_{i=0}^{L_n} λ_i z^i, a monic polynomial of minimal degree L_n ≤ n such that Σ_{i=0}^{L_n} λ_i a_{i+j} = 0 for j = 0, ..., n - L_n - 1.
Guarantee: the BMA finds Λ of degree t from 2t entries.
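The algorithm above can be sketched in a few lines of Python over a prime field GF(p); this is a minimal illustration, not the speakers' code. It returns the linear complexity L together with the connection polynomial C(z) = 1 + c_1 z + ... + c_L z^L, which is the reverse of the monic Λ(z) above.

```python
def berlekamp_massey(a, p):
    """Berlekamp/Massey over GF(p): returns (L, C) where L is the linear
    complexity of a and C = [1, c1, ..., cL] satisfies
    a[n] + c1*a[n-1] + ... + cL*a[n-L] = 0 for L <= n < len(a).
    C(z) is the reverse of the monic generator Lambda(z)."""
    C, B = [1], [1]          # current / previous connection polynomials
    L, m, b = 0, 1, 1        # complexity, shift since last update, last discrepancy
    for n in range(len(a)):
        # discrepancy: how far C is from annihilating a[0..n]
        d = (a[n] + sum(C[i] * a[n - i] for i in range(1, L + 1))) % p
        if d == 0:
            m += 1
            continue
        T, coef = C[:], d * pow(b, p - 2, p) % p   # d / b mod p
        C += [0] * (len(B) + m - len(C))           # make room for z^m * B(z)
        for i, Bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * Bi) % p  # C <- C - (d/b) * z^m * B
        if 2 * L <= n:                             # the complexity must jump
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return L, C

# Fibonacci mod 101 is generated by Lambda(z) = z^2 - z - 1:
print(berlekamp_massey([0, 1, 1, 2, 3, 5, 8, 13], 101))   # (2, [1, 100, 100])
```

On 2t entries of a sequence of linear complexity t this recovers the generator exactly, matching the guarantee stated on the slide.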

Berlekamp/Massey algorithm with errors

Problem statement: Berlekamp/Massey with errors

Suppose (a_0, a_1, ...) is linearly generated by Λ(z) of degree t, where Λ(0) ≠ 0. Given (b_0, b_1, ...) = (a_0, a_1, ...) + ε, where weight(ε) ≤ E:
1. How can Λ(z) and (a_0, a_1, ...) be recovered?
2. How many entries are required for
   - a unique solution,
   - a list of solutions including (a_0, a_1, ...)?

Coding-theory formulation: let C be the set of all sequences of linear complexity at most t.
1. How can C be decoded?
2. What is the best correction capacity for
   - unique decoding,
   - list decoding?


How many entries to guarantee uniqueness? Case E = 1, t = 2

Received sequence: (0, 1, 0, 1, 0, 1, 0, -1, 0, 1, 0). Candidate decodings:

decoding                                   Λ(z)
no correction                              z^6 + z^4 - 2z^2 + 2
error at position 3 (1 corrected to -1)    z^2 + 1
error at position 7 (-1 corrected to 1)    z^2 - 1

Where is the error? A unique solution is not guaranteed with t = 2, E = 1 and n = 11. Is n ≥ 2t(2E + 1) a necessary condition?

Generalization to any E ≥ 1

Let 0̄ = (0, ..., 0) be a block of t - 1 zeros. Then s = (0̄, 1, 0̄, 1, 0̄, 1, 0̄, -1) is generated by z^t - 1 or by z^t + 1, up to E = 1 error. Consequently (s, s, ..., s, 0̄, 1, 0̄), with E copies of s, is generated by z^t - 1 or by z^t + 1, up to E errors: ambiguity with n = 2t(2E + 1) - 1 values.

Theorem
n ≥ 2t(2E + 1) is a necessary condition for unique decoding.
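This construction is easy to check numerically for t = 2 and E = 2. The exact sign pattern of the building block s is an assumption reconstructed from the statement (here the zero block 0̄ has length t - 1 = 1):

```python
# Ambiguity construction: n = 2t(2E+1) - 1 entries lying within Hamming
# distance E of a sequence generated by z^t - 1 AND of one generated by z^t + 1.
t, E = 2, 2
s = [0, 1, 0, 1, 0, 1, 0, -1]          # distance 1 from both generators
seq = s * E + [0, 1, 0]                # n = 2t(2E+1) - 1 = 19 entries

periodic = [(0, 1)[j % 2] for j in range(len(seq))]             # gen. by z^2 - 1
antiperiodic = [(0, 1, 0, -1)[j % 4] for j in range(len(seq))]  # gen. by z^2 + 1

d_per = sum(x != y for x, y in zip(seq, periodic))
d_anti = sum(x != y for x, y in zip(seq, antiperiodic))
print(d_per, d_anti)   # both distances equal E = 2: decoding is ambiguous
```

Both candidate generators explain the received sequence with at most E corrections, so no unique decoder can succeed with only 2t(2E + 1) - 1 entries.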


The Majority-Rule Berlekamp/Massey algorithm

[figure: the n = 2t(2E + 1) entries split into 2E + 1 segments of 2t entries each (E = 2 shown), yielding candidates Λ_1, ..., Λ_5]

Input: (a_0, ..., a_{n-1}) + ε, where n = 2t(2E + 1), weight(ε) ≤ E, and (a_0, ..., a_{n-1}) is minimally generated by Λ of degree t, where Λ(0) ≠ 0.
Output: Λ(z) and (a_0, ..., a_{n-1}).
1. Run the BMA on the 2E + 1 segments of 2t entries and record the Λ_i(z) found on each segment.
2. Perform a majority vote to find Λ(z).
3. Use a clean segment to clean up the sequence.
4. Return Λ(z) and (a_0, a_1, ...).
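The majority-vote step can be sketched as follows, over GF(p); this is an illustration (the compact BMA is repeated here so the block is self-contained). Since at most E of the 2E + 1 segments are corrupted, at least E + 1 clean segments return the same generator.

```python
from collections import Counter

def bma(a, p):
    """Compact Berlekamp/Massey over GF(p); returns (L, C) with
    C = [1, c1, ..., cL] and a[n] + sum_i ci*a[n-i] = 0 for n >= L."""
    C, B, L, m, b = [1], [1], 0, 1, 1
    for n in range(len(a)):
        d = (a[n] + sum(C[i] * a[n - i] for i in range(1, L + 1))) % p
        if d == 0:
            m += 1
            continue
        T, coef = C[:], d * pow(b, p - 2, p) % p
        C += [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return L, C

def majority_lambda(seq, t, E, p):
    """Run the BMA on 2E+1 disjoint segments of 2t entries and majority-vote."""
    votes = Counter()
    for k in range(2 * E + 1):
        L, C = bma(seq[2 * t * k: 2 * t * (k + 1)], p)
        votes[tuple(C[:L + 1])] += 1
    winner, count = votes.most_common(1)[0]
    assert count >= E + 1, "no (E+1)-majority: more than E errors?"
    return list(winner)

# Fibonacci mod 101 (t = 2), n = 2t(2E+1) = 12 entries, one error (E = 1):
fib = [0, 1]
for _ in range(10):
    fib.append((fib[-1] + fib[-2]) % 101)
received = fib[:]
received[5] = (received[5] + 1) % 101
print(majority_lambda(received, 2, 1, 101))   # [1, 100, 100], i.e. z^2 - z - 1
```

The corrupted segment votes for a different polynomial, but the two clean segments agree and carry the vote.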


Algorithm SequenceCleanUp

Input: Λ(z) = z^t + Σ_{i=0}^{t-1} λ_i z^i, where Λ(0) ≠ 0;
(a_0, ..., a_{n-1}), where n ≥ t + 1;
E, the maximum number of corrections to make;
k, such that (a_k, ..., a_{k+2t-1}) is clean.
Output: (b_0, ..., b_{n-1}) generated by Λ, at distance ≤ E from (a_0, ..., a_{n-1}).

1. (b_0, ..., b_{n-1}) ← (a_0, ..., a_{n-1}); e ← 0
2. i ← k + 2t
3. while i ≤ n - 1 and e ≤ E do
4.   if Λ does not satisfy (b_{i-t}, ..., b_i) then fix b_i using Λ(z) as an LFSR; e ← e + 1
5.   i ← i + 1
6. i ← k - 1
7. while i ≥ 0 and e ≤ E do
8.   if Λ does not satisfy (b_i, ..., b_{i+t}) then fix b_i using z^t Λ(1/z) as an LFSR; e ← e + 1
9.   i ← i - 1
10. return (b_0, ..., b_{n-1}), e
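A Python sketch of the clean-up over GF(p), under the convention Λ(z) = z^t - Σ_i λ_i z^i so that a_i = Σ_j λ_j a_{i-t+j} (an illustration, not the speakers' code):

```python
def sequence_cleanup(lam, a, E, k, p):
    """SequenceCleanUp sketch over GF(p). lam = [l0, ..., l_{t-1}] encodes
    Lambda(z) = z^t - sum_i li*z^i with l0 != 0, i.e. the recurrence
    a[i] = sum_j lj * a[i-t+j]. Segment (a[k], ..., a[k+2t-1]) is assumed
    clean. Returns (corrected sequence, number of corrections)."""
    b, t, n, e = list(a), len(lam), len(a), 0
    for i in range(k + 2 * t, n):          # forward, using Lambda as an LFSR
        pred = sum(lam[j] * b[i - t + j] for j in range(t)) % p
        if b[i] != pred:
            b[i], e = pred, e + 1
            if e > E:
                raise ValueError("deceptive segment: more than E corrections")
    inv0 = pow(lam[0], p - 2, p)           # backward, using z^t * Lambda(1/z)
    for i in range(k - 1, -1, -1):
        pred = (b[i + t] - sum(lam[j] * b[i + j] for j in range(1, t))) * inv0 % p
        if b[i] != pred:
            b[i], e = pred, e + 1
            if e > E:
                raise ValueError("deceptive segment: more than E corrections")
    return b, e

# Fibonacci mod 101: Lambda(z) = z^2 - z - 1, so lam = [1, 1]; one error.
fib = [0, 1]
for _ in range(10):
    fib.append((fib[-1] + fib[-2]) % 101)
noisy = fib[:]
noisy[5] = (noisy[5] + 1) % 101
print(sequence_cleanup([1, 1], noisy, 1, 0, 101) == (fib, 1))   # True
```

The condition Λ(0) ≠ 0 is what makes the backward pass possible: λ_0 is invertible, so the recurrence can be run in reverse.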

Finding a clean segment: case E = 1 (only one error)

With a single error, say at position k - 1, the received sequence is (a_0, ..., a_{k-2}, b_{k-1} ≠ a_{k-1}, a_k, a_{k+1}, ...): only one segment is corrupted, so Λ will be identified by the majority vote (a 2-to-1 majority), and any segment whose BMA output agrees with the majority is clean.


Finding a clean segment: case E ≥ 2

A segment containing several errors can still be generated by some degree-t Λ(z). Such deceptive segments are not good for SequenceCleanUp.

Example (E = 3): (0, 1, 0, 2, 0, 4, 0, 8, ...), generated by Λ(z) = z^2 - 2, is received with three errors as
(1, 1, 2, 2, 4, 4, 0, 8, 0, 16, 0, 32, ...)
whose segments (1, 1, 2, 2), (4, 4, 0, 8), (0, 16, 0, 32) are generated by z^2 - 2, z^2 + 2z - 2 and z^2 - 2 respectively.

(1, 1, 2, 2) is deceptive: applying SequenceCleanUp with it as the clean segment produces (1, 1, 2, 2, 4, 4, 8, 8, 16, 16, 32, 32, 64, ...), which eventually requires more than E = 3 corrections: contradiction. Try (0, 16, 0, 32) as a clean segment instead.

Success of the sequence clean-up

Theorem
If n ≥ t(2E + 1), then a deceptive segment will necessarily be exposed by a failure of the condition e ≤ E in algorithm SequenceCleanUp.

Corollary
n ≥ 2t(2E + 1) is a necessary and sufficient condition for unique decoding of Λ and the corresponding sequence.

Remark
This also works with an upper bound t ≤ T on deg Λ.


List decoding for n ≥ 2t(E + 1)

[figure: the n = 2t(E + 1) entries split into E + 1 segments of 2t entries each (E = 2 shown), yielding candidates Λ_1, Λ_2, Λ_3]

Input: (a_0, ..., a_{n-1}) + ε, where n = 2t(E + 1), weight(ε) ≤ E, and (a_0, ..., a_{n-1}) is minimally generated by Λ of degree t, where Λ(0) ≠ 0.
Output: a list of candidates (Λ_i(z), s_i = (a_0^(i), ..., a_{n-1}^(i)))_i.
1. Run the BMA on the E + 1 segments of 2t entries and record Λ_i(z) on each segment.
2. For each Λ_i(z): use a clean segment to clean up the sequence; withdraw Λ_i if no clean segment can be found.
3. Return the list (Λ_i(z), (a_0^(i), ..., a_{n-1}^(i)))_i.

Properties

The list contains the right solution (Λ, (a_0, ..., a_{n-1})).
n ≥ 2t(E + 1) is the tightest bound enabling syndrome decoding (the BMA run on a clean subsequence of length 2t).

Example
Take n = 2t(E + 1) - 1 and ε = (0, ..., 0, 1, 0, ..., 0, 1, ..., 1, 0, ..., 0), where each block of zeros has length 2t - 1. Then (a_0, ..., a_{n-1}) + ε has no clean segment of 2t consecutive entries.
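The example can be verified mechanically: with the E ones of ε separated by blocks of 2t - 1 zeros, every window of 2t consecutive positions contains an error.

```python
# n = 2t(E+1) - 1 entries with E errors spaced so that no window of 2t
# consecutive positions is error-free (here t = 2, E = 3).
t, E = 2, 3
n = 2 * t * (E + 1) - 1                              # 15 entries
errors = {2 * t - 1 + 2 * t * k for k in range(E)}   # error positions 3, 7, 11
clean_windows = [s for s in range(n - 2 * t + 1)
                 if not errors & set(range(s, s + 2 * t))]
print(clean_windows)   # [] : no clean segment of length 2t exists
```

Each gap between consecutive errors has length 2t - 1 < 2t, so no length-2t window can fit inside a gap.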

Sparse Polynomial Interpolation with errors


Sparse Polynomial Interpolation

Problem
Recover a t-sparse polynomial f = Σ_{i=1}^t c_i x^{e_i} given a black box computing (possibly erroneous) evaluations x ↦ f(x) + ε.

Ben-Or/Tiwari 1988: let a_i = f(p^i) for p a field element, and let Λ(z) = Π_{i=1}^t (z - p^{e_i}). Then Λ(z) is the minimal generator of (a_0, a_1, ...):
- only 2t entries are needed to find Λ(z) (using the BMA);
- only 2T(2E + 1) entries are needed with e ≤ E errors and t ≤ T.

Ben-Or & Tiwari's Algorithm

Input: (a_0, ..., a_{2t-1}), where a_i = f(p^i); t, the number of (non-zero) terms of f(x) = Σ_{j=1}^t c_j x^{e_j}.
Output: f(x).
1. Run the BMA on (a_0, ..., a_{2t-1}) to find Λ(z).
2. Find the roots of Λ(z) (polynomial factorization).
3. Recover the e_j by repeated division (by p).
4. Recover the c_j by solving the transposed Vandermonde system

    [ (p^{e_1})^0      (p^{e_2})^0      ...  (p^{e_t})^0     ] [c_1]   [a_0    ]
    [ (p^{e_1})^1      (p^{e_2})^1      ...  (p^{e_t})^1     ] [c_2] = [a_1    ]
    [ ...                                                    ] [...]   [...    ]
    [ (p^{e_1})^{t-1}  (p^{e_2})^{t-1}  ...  (p^{e_t})^{t-1} ] [c_t]   [a_{t-1}]
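The four steps can be sketched end-to-end over a prime field GF(q). This is a toy illustration, not the speakers' implementation: the field size, the evaluation point p = 5 and the example polynomial are assumptions, and root finding plus repeated division are merged into one brute-force search over the powers p^e.

```python
def bma(a, p):
    """Compact Berlekamp/Massey over GF(p), as on the earlier slide."""
    C, B, L, m, b = [1], [1], 0, 1, 1
    for n in range(len(a)):
        d = (a[n] + sum(C[i] * a[n - i] for i in range(1, L + 1))) % p
        if d == 0:
            m += 1
            continue
        T, coef = C[:], d * pow(b, p - 2, p) % p
        C += [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return L, C

def solve_mod(M, rhs, q):
    """Gaussian elimination over GF(q) for the small Vandermonde system."""
    n = len(M)
    A = [row[:] + [r] for row, r in zip(M, rhs)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c])
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], q - 2, q)
        A[c] = [x * inv % q for x in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                A[r] = [(x - A[r][c] * y) % q for x, y in zip(A[r], A[c])]
    return [A[r][n] for r in range(n)]

q, p = 10007, 5                    # field GF(q); p of large multiplicative order
f = {0: 1, 2: 2, 7: 3}             # exponent -> coefficient: 1 + 2x^2 + 3x^7
t = len(f)
a = [sum(c * pow(p, e * i, q) for e, c in f.items()) % q for i in range(2 * t)]

L, C = bma(a, q)                   # step 1: generator (reversed) from 2t values
# steps 2-3: the roots of Lambda are p^{e_j}; find each e_j by trying p^0, p^1, ...
exps = [e for e in range(50)
        if sum(C[i] * pow(p, e * (L - i), q) for i in range(L + 1)) % q == 0]
# step 4: transposed Vandermonde system for the coefficients
V = [[pow(p, e * i, q) for e in exps] for i in range(t)]
coeffs = solve_mod(V, a[:t], q)
print(dict(zip(exps, coeffs)))     # recovers f from 2t = 6 evaluations
```

Only 2t black-box evaluations are consumed, matching the slide's claim.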

Relations to Reed-Solomon decoding


Blahut's theorem

Theorem (Blahut)
The DFT of a vector of weight t has linear complexity at most t:
DFT_ω(v) = Vandermonde(ω^0, ω^1, ω^2, ...) · v = Eval_{ω^0, ω^1, ω^2, ...}(v).

- Univariate Ben-Or & Tiwari follows as a corollary.
- Reed-Solomon codes: evaluation of a sparse error → BMA.
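The theorem is easy to check numerically, here over GF(13) with ω = 2 of multiplicative order n = 12 and a weight-2 vector (the field and the vector are illustrative assumptions; the compact BMA repeats the earlier slide's algorithm):

```python
def bma(a, p):
    """Compact Berlekamp/Massey over GF(p): returns the linear complexity."""
    C, B, L, m, b = [1], [1], 0, 1, 1
    for n in range(len(a)):
        d = (a[n] + sum(C[i] * a[n - i] for i in range(1, L + 1))) % p
        if d == 0:
            m += 1
            continue
        T, coef = C[:], d * pow(b, p - 2, p) % p
        C += [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return L

q, n, w = 13, 12, 2                # GF(13); w = 2 has multiplicative order 12
v = [0, 0, 0, 5, 0, 0, 0, 1, 0, 0, 0, 0]      # weight t = 2
dft = [sum(v[j] * pow(w, i * j, q) for j in range(n)) % q for i in range(n)]
print(bma(dft, q))                 # 2 : linear complexity = weight(v) = t
```

The DFT entries are 5·ω^{3i} + ω^{7i}, a 2-term exponential sum, hence linearly generated by (z - ω^3)(z - ω^7) of degree 2.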

Reed-Solomon codes as evaluation codes

C = {(f(ω^1), ..., f(ω^n)) : deg f < k}

[figure: encoding/decoding diagram. The message is f(x) with deg f < k; the codeword is c = Eval(f) with c_i = f(ω^i); the received word is y = c + ε; interpolating y directly gives g = f + Interp(ε), while the BMA applied to the syndrome locates the sparse error ε and recovers f.]

Sparse interpolation with errors

Find f from (f(ω^1), ..., f(ω^n)) + ε.

[figure: the dual diagram. The roles are interchanged: the t-sparse polynomial f plays the part of the sparse error, with g = Eval(f) + ε and c = Interp(ε), so that y = c + f; the BMA is applied on the interpolation side.]


Same problems? Interchanging Evaluation and Interpolation

Let V_ω = Vandermonde(ω, ω^2, ..., ω^n). Then (V_ω)^{-1} = (1/n) V_{ω^{-1}}.

Given g, find a t-sparse f and an error ε such that g = V_ω f + ε. Then
V_{ω^{-1}} g = n f + V_{ω^{-1}} ε,
where n f is a weight-t error and V_{ω^{-1}} ε is a Reed-Solomon code word.

Reed-Solomon decoding yields a unique solution provided ε has 2t consecutive trailing 0's (a clean segment of length 2t): n ≥ 2t(E + 1).
BUT: the location of the syndrome is a priori unknown → no uniqueness.
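The inversion identity can be checked directly on a small DFT matrix (0-indexed here, over GF(13) with ω = 2 of order n = 12; these parameters are illustrative assumptions):

```python
# Check (V_w)^{-1} = (1/n) V_{w^{-1}}, i.e. V_w * V_{w^{-1}} = n * I mod q.
q, n, w = 13, 12, 2
winv = pow(w, q - 2, q)                     # w^{-1} mod q
V = [[pow(w, i * j, q) for j in range(n)] for i in range(n)]
W = [[pow(winv, i * j, q) for j in range(n)] for i in range(n)]
P = [[sum(V[i][k] * W[k][j] for k in range(n)) % q for j in range(n)]
     for i in range(n)]
print(all(P[i][j] == (n % q if i == j else 0)
          for i in range(n) for j in range(n)))   # True
```

Entry (i, j) of the product is Σ_k ω^{k(i-j)}, which collapses to n when i = j and to 0 otherwise, since ω has order n and |i - j| < n.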


Applications and Perspectives

Sparse interpolation with noise and outliers [Giesbrecht, Labahn & Lee 06], [Kaltofen, Lee, Yang 11]:
- termination criteria for the BMA: exact singularity → ill-conditionedness;
- now combined with outliers.

Perspectives:
- surprising impact of noise on the sparsity: the problem does not degenerate to the dense case;
- sparse rational function reconstruction with errors: in the dense case, Berlekamp/Welch decoding and Padé approximation fit well together; what about the sparse case?
- application to the k-error linear complexity (symmetric cryptography)...