A generalized Prony method for sparse approximation
Gerlind Plonka and Thomas Peter
Institute for Numerical and Applied Mathematics, University of Göttingen
Dolomites Research Week on Approximation, September 2014
Outline
- Original Prony method: reconstruction of sparse exponential sums
- Ben-Or & Tiwari method: reconstruction of sparse polynomials
- The generalized Prony method
  - Application to shift and dilation operators
  - Application to Sturm-Liouville operators
  - Recovery of sparse vectors
- Numerical issues
Reconstruction of sparse exponential sums (Prony's method)

Function: $f(x) = \sum_{j=1}^{M} c_j e^{T_j x}$.

We have: $M$ and the samples $f(l)$, $l = 0, \dots, 2M-1$.
We want: $c_j, T_j \in \mathbb{C}$, where $-\pi \le \mathrm{Im}\, T_j < \pi$, $j = 1, \dots, M$.

Consider the Prony polynomial
$$P(z) := \prod_{j=1}^{M} \bigl(z - e^{T_j}\bigr) = \sum_{l=0}^{M} p_l z^l$$
with unknown parameters $T_j$ and $p_M = 1$. Then, for $m = 0, \dots, M-1$,
$$\sum_{l=0}^{M} p_l f(l+m) = \sum_{l=0}^{M} p_l \sum_{j=1}^{M} c_j e^{T_j(l+m)} = \sum_{j=1}^{M} c_j e^{T_j m} \sum_{l=0}^{M} p_l e^{T_j l} = \sum_{j=1}^{M} c_j e^{T_j m}\, P(e^{T_j}) = 0.$$
Reconstruction algorithm

Input: $f(l)$, $l = 0, \dots, 2M-1$.

1. Solve the Hankel system
$$\begin{pmatrix} f(0) & f(1) & \dots & f(M-1) \\ f(1) & f(2) & \dots & f(M) \\ \vdots & \vdots & & \vdots \\ f(M-1) & f(M) & \dots & f(2M-2) \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ \vdots \\ p_{M-1} \end{pmatrix} = - \begin{pmatrix} f(M) \\ f(M+1) \\ \vdots \\ f(2M-1) \end{pmatrix}.$$

2. Compute the zeros of the Prony polynomial $P(z) = \sum_{l=0}^{M} p_l z^l$ and extract the parameters $T_j$ from its zeros $z_j = e^{T_j}$, $j = 1, \dots, M$.

3. Compute the $c_j$ by solving the overdetermined linear system
$$f(l) = \sum_{j=1}^{M} c_j e^{T_j l}, \quad l = 0, \dots, 2M-1.$$

Output: parameters $T_j$ and $c_j$, $j = 1, \dots, M$.
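The three steps above can be sketched in NumPy (a minimal illustration, not from the talk; the function name, the test values, and the use of `lstsq` for the final Vandermonde system are my own choices):

```python
import numpy as np

def prony(f_samples, M):
    """Recover (T_j, c_j) of f(x) = sum_j c_j * exp(T_j * x)
    from the 2M samples f(0), f(1), ..., f(2M-1)."""
    h = np.asarray(f_samples, dtype=complex)
    # Step 1: solve the Hankel system for p_0, ..., p_{M-1} (with p_M = 1).
    H = np.array([[h[l + m] for l in range(M)] for m in range(M)])
    p = np.linalg.solve(H, -h[M:2 * M])
    # Step 2: zeros z_j = exp(T_j) of the Prony polynomial.
    z = np.roots(np.concatenate(([1.0 + 0j], p[::-1])))
    T = np.log(z)                      # principal branch of the logarithm
    # Step 3: least-squares solution of the Vandermonde system for c_j.
    V = z[np.newaxis, :] ** np.arange(2 * M)[:, np.newaxis]
    c = np.linalg.lstsq(V, h, rcond=None)[0]
    return T, c

# Example: f(x) = 2*exp(0.5 x) + 3*exp(-1.2 x), sampled at x = 0, 1, 2, 3
samples = [2 * np.exp(0.5 * l) + 3 * np.exp(-1.2 * l) for l in range(4)]
T, c = prony(samples, 2)
```

For well-separated nodes and moderate $M$ this direct route works; for larger or noisy problems the stabilized pencil formulation from the "Numerical issues" part is preferable.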
Literature

- [Prony] (1795): Reconstruction of a difference equation
- [Schmidt] (1979): MUSIC (Multiple Signal Classification)
- [Roy, Kailath] (1989): ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques)
- [Hua, Sarkar] (1990): Matrix-pencil method
- [Stoica, Moses] (2000): Annihilating filters
- [Potts, Tasche] (2010, 2011): Approximate Prony method

Further: Golub, Milanfar, Varah ('99); Vetterli, Marziliano, Blu ('02); Maravić, Vetterli ('04); Elad, Milanfar, Golub ('04); Beylkin, Monzon ('05, '10); Batenkov, Sarig, Yomdin ('12, '13); Filbir, Mhaskar, Prestin ('12); Peter, Potts, Tasche ('11, '12, '13); Plonka, Wischerhoff ('13); ...
Reconstruction of M-sparse polynomials (Ben-Or & Tiwari algorithm)

Function: $f(x) = \sum_{j=1}^{M} c_j x^{d_j}$ with $0 \le d_1 < d_2 < \dots < d_M = N$, $d_j \in \mathbb{N}_0$, $c_j \in \mathbb{C}$.

We have: $M$ and the samples $f(x_0^l)$, $l = 0, \dots, 2M-1$ (with the $x_0^l$ pairwise different).
We want: coefficients $c_j \in \mathbb{C} \setminus \{0\}$ and the indices $d_j$ of the active monomials.

Consider the Prony polynomial
$$P(z) := \prod_{j=1}^{M} \bigl(z - x_0^{d_j}\bigr) = \sum_{l=0}^{M} p_l z^l$$
with unknown zeros $x_0^{d_j}$ and $p_M = 1$.
Prony polynomial

Then, for $m = 0, \dots, M-1$,
$$\sum_{l=0}^{M} p_l f(x_0^{l+m}) = \sum_{l=0}^{M} p_l \sum_{j=1}^{M} c_j x_0^{d_j(l+m)} = \sum_{j=1}^{M} c_j x_0^{d_j m} \sum_{l=0}^{M} p_l x_0^{d_j l} = \sum_{j=1}^{M} c_j x_0^{d_j m}\, P(x_0^{d_j}) = 0.$$
Hence
$$\sum_{l=0}^{M-1} p_l f(x_0^{l+m}) = -f(x_0^{M+m}), \quad m = 0, \dots, M-1.$$
Compute $p_l$ → $P(z)$ → zeros $x_0^{d_j}$ → $d_j$ → $c_j$.
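The chain $p_l \to P(z) \to x_0^{d_j} \to d_j \to c_j$ can be traced in a short NumPy sketch (my own illustration, assuming integer exponents and real data; the rounding step that maps zeros back to exponents is my choice):

```python
import numpy as np

def ben_or_tiwari(h, M, x0):
    """Recover exponents d_j and coefficients c_j of f(x) = sum_j c_j x**d_j
    from h[l] = f(x0**l), l = 0, ..., 2M-1 (x0 > 1, integer d_j)."""
    h = np.asarray(h, dtype=float)
    H = np.array([[h[l + m] for l in range(M)] for m in range(M)])
    p = np.linalg.solve(H, -h[M:2 * M])
    z = np.roots(np.concatenate(([1.0], p[::-1]))).real   # z_j = x0**d_j
    d = np.rint(np.log(z) / np.log(x0)).astype(int)       # exponents d_j
    V = np.array([[float(x0) ** (dj * l) for dj in d] for l in range(2 * M)])
    c = np.linalg.lstsq(V, h, rcond=None)[0]
    return d, c

# Example: f(x) = 5 x^3 - 2 x^7, sampled at x0**l with x0 = 2, l = 0,...,3
f = lambda x: 5 * x**3 - 2 * x**7
d, c = ben_or_tiwari([f(2.0**l) for l in range(4)], M=2, x0=2.0)
```

Note that the sample magnitudes grow like $x_0^{2(M-1)d_M}$, so in floating point a small base $x_0$ (or exact arithmetic, as in the original Ben-Or & Tiwari setting) is advisable.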
Literature

- [Ben-Or, Tiwari] (1988): Reconstruction of multivariate M-sparse polynomials
- [Grigoriev, Karpinski, Singer] (1990): Sparse polynomial interpolation over finite fields
- [Dress, Grabmeier] (1991): Interpolation of k-sparse character sums
- [Lakshman, Saunders] (1995): Sparse polynomial interpolation in non-standard bases
- [Kaltofen, Lee] (2003): Early termination in sparse polynomial interpolation
- [Giesbrecht, Labahn, Lee] (2008): Sparse interpolation of multivariate polynomials
The generalized Prony method (Peter, Plonka (2013))

Let $V$ be a normed vector space and $A: V \to V$ a linear operator. Let $\{e_n : n \in I\}$ be a set of eigenfunctions of $A$ to pairwise different eigenvalues $\lambda_n \in \mathbb{C}$,
$$A e_n = \lambda_n e_n.$$
Let $f = \sum_{j \in J} c_j e_j$ with $J \subset I$, $|J| = M$, $c_j \in \mathbb{C}$. Let $F: V \to \mathbb{C}$ be a linear functional with $F(e_n) \ne 0$ for all $n \in I$.

We have: $M$ and $F(A^l f)$ for $l = 0, \dots, 2M-1$.
We want: $J \subset I$ and $c_j \in \mathbb{C}$ for $j \in J$.
Prony polynomial
$$P(z) = \prod_{j \in J} (z - \lambda_j) = \sum_{l=0}^{M} p_l z^l$$
with unknown $\lambda_j$, i.e., with unknown $J$. Hence, for $m = 0, 1, \dots$,
$$\sum_{l=0}^{M} p_l F(A^{l+m} f) = \sum_{l=0}^{M} p_l F\Bigl(\sum_{j \in J} c_j A^{l+m} e_j\Bigr) = \sum_{l=0}^{M} p_l F\Bigl(\sum_{j \in J} c_j \lambda_j^{l+m} e_j\Bigr) = \sum_{j \in J} c_j \lambda_j^m \Bigl(\sum_{l=0}^{M} p_l \lambda_j^l\Bigr) F(e_j) = \sum_{j \in J} c_j \lambda_j^m\, P(\lambda_j)\, F(e_j) = 0.$$
Thus, if $F(A^l f)$, $l = 0, \dots, 2M-1$, is known, then we can compute the index set $J \subset I$ of the active eigenfunctions and the coefficients $c_j$.
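The abstract scheme only ever touches the scalar sequence $h_k = F(A^k f) = \sum_{j \in J} \bigl(c_j F(e_j)\bigr) \lambda_j^k$, so one generic solver covers all the operator choices that follow. A minimal NumPy sketch (my own; it returns the eigenvalues and the products $c_j F(e_j)$, from which $c_j$ follows by dividing by the nonzero $F(e_j)$):

```python
import numpy as np

def generalized_prony(h, M):
    """Given h[k] = F(A^k f) = sum_{j in J} (c_j F(e_j)) * lam_j**k,
    k = 0,...,2M-1, return the active eigenvalues lam_j and the
    products c_j * F(e_j)."""
    h = np.asarray(h, dtype=complex)
    # Hankel system for the Prony coefficients (p_M = 1).
    H = np.array([[h[l + m] for l in range(M)] for m in range(M)])
    p = np.linalg.solve(H, -h[M:2 * M])
    lam = np.roots(np.concatenate(([1.0 + 0j], p[::-1])))
    # Vandermonde least squares for the weights c_j F(e_j).
    V = lam[np.newaxis, :] ** np.arange(2 * M)[:, np.newaxis]
    cF = np.linalg.lstsq(V, h, rcond=None)[0]
    return lam, cF

# Example: active eigenvalues {2, 5} with weights c_j F(e_j) = {1.5, -0.5}
h_demo = [1.5 * 2**k - 0.5 * 5**k for k in range(4)]
lam, cF = generalized_prony(h_demo, 2)
```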
Application to linear operators: shift operator

Choose the shift operator $S_h: C(\mathbb{R}) \to C(\mathbb{R})$, $h > 0$,
$$S_h f(x) := f(x + h).$$
Eigenfunctions of $S_h$:
$$S_h\, e^{T_j x} = e^{T_j(x+h)} = e^{T_j h}\, e^{T_j x}, \quad T_j \in \mathbb{C},\ \mathrm{Im}\, T_j \in [-\pi/h, \pi/h).$$
Prony method: for the reconstruction of $f(x) = \sum_{j=1}^{M} c_j e^{T_j x}$ we need $F(S_h^l f) = F(f(\cdot + hl))$, $l = 0, \dots, 2M-1$.

Put $F(f) := f(x_0)$. Then $F(S_h^l f) = f(x_0 + hl)$.
Put $F(f) := \int_{x_0}^{x_0+h} f(x)\,dx$. Then $F(S_h^l f) = \int_{x_0+lh}^{x_0+(l+1)h} f(x)\,dx$.
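The second choice of functional illustrates the freedom the framework gives: integral samples also form a sequence $\sum_j \tilde c_j (e^{T_j h})^l$, so the same Hankel-plus-roots steps recover the $T_j$. A toy sketch with my own test values (closed-form integrals stand in for measured data):

```python
import numpy as np

# f(x) = 2 e^{0.8x} + e^{-0.6x}; functional F(f) = int_{x0}^{x0+h} f(x) dx.
T_true, c_true = np.array([0.8, -0.6]), np.array([2.0, 1.0])
x0, h, M = 0.2, 0.5, 2

def F_sample(l):
    # F(S_h^l f) = int_{x0+lh}^{x0+(l+1)h} f(x) dx, in closed form
    a, b = x0 + l * h, x0 + (l + 1) * h
    return np.sum(c_true / T_true * (np.exp(T_true * b) - np.exp(T_true * a)))

g = np.array([F_sample(l) for l in range(2 * M)])
H = np.array([[g[l + m] for l in range(M)] for m in range(M)])
p = np.linalg.solve(H, -g[M:2 * M])
z = np.roots(np.concatenate(([1.0], p[::-1])))
T = np.sort(np.log(z.real) / h)        # zeros z_j = exp(T_j h) of S_h
```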
Application to linear operators: dilation operator

Choose the dilation operator $D_h: C(\mathbb{R}) \to C(\mathbb{R})$,
$$D_h f(x) := f(hx).$$
Eigenfunctions of $D_h$:
$$D_h\, x^{p_j} = (hx)^{p_j} = h^{p_j} x^{p_j}, \quad p_j \in \mathbb{C},\ x \in \mathbb{R}.$$
We need: the $h^{p_j}$ are pairwise different for all $j \in I$.

Ben-Or & Tiwari method: for the reconstruction of $f(x) = \sum_{j=1}^{M} c_j x^{p_j}$ we need $F(D_h^l f) = F(f(h^l\, \cdot))$, $l = 0, \dots, 2M-1$.

Put $F(f) := f(x_0)$. Then $F(D_h^l f) = f(h^l x_0)$.
Put $F(f) := \int_0^1 f(x)\,dx$. Then $F(D_h^l f) = \frac{1}{h^l} \int_0^{h^l} f(x)\,dx$.
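Unlike the classical Ben-Or & Tiwari setting, the dilation operator handles arbitrary real exponents $p_j$, not only integers, since no rounding back to $\mathbb{N}_0$ is required. A small NumPy sketch (function name and test values are my own):

```python
import numpy as np

def prony_dilation(samples, M, x0, h):
    """Recover (p_j, c_j) of f(x) = sum_j c_j x**p_j from the dilated
    samples f(h**l * x0), l = 0,...,2M-1 (h > 1, real exponents p_j)."""
    g = np.asarray(samples, dtype=float)
    H = np.array([[g[l + m] for l in range(M)] for m in range(M)])
    q = np.linalg.solve(H, -g[M:2 * M])
    z = np.roots(np.concatenate(([1.0], q[::-1]))).real   # z_j = h**p_j
    p = np.log(z) / np.log(h)                             # real exponents
    V = z[np.newaxis, :] ** np.arange(2 * M)[:, np.newaxis]
    c = np.linalg.lstsq(V, g, rcond=None)[0] / x0 ** p
    return p, c

# Example: f(x) = 4 x^0.5 + x^2, sampled at h**l * x0 with x0 = 1, h = 3
g = [4 * (3.0**l) ** 0.5 + (3.0**l) ** 2 for l in range(4)]
p_exp, c = prony_dilation(g, M=2, x0=1.0, h=3.0)
```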
Application to linear operators: Sturm-Liouville operator

Choose the Sturm-Liouville operator $L_{p,q}: C^\infty(\mathbb{R}) \to C^\infty(\mathbb{R})$,
$$L_{p,q} f(x) := p(x) f''(x) + q(x) f'(x),$$
where $p(x)$ and $q(x)$ are polynomials of degree 2 and 1, respectively. Eigenfunctions are orthogonal polynomials, with $L_{p,q} Q_n = \lambda_n Q_n$:

| $p(x)$ | $q(x)$ | $\lambda_n$ | name | symbol |
|---|---|---|---|---|
| $1-x^2$ | $\beta - \alpha - (\alpha+\beta+2)x$ | $-n(n+\alpha+\beta+1)$ | Jacobi | $P_n^{(\alpha,\beta)}$ |
| $1-x^2$ | $-(2\alpha+1)x$ | $-n(n+2\alpha)$ | Gegenbauer | $C_n^{(\alpha)}$ |
| $1-x^2$ | $-2x$ | $-n(n+1)$ | Legendre | $P_n$ |
| $1-x^2$ | $-x$ | $-n^2$ | Chebyshev, 1st kind | $T_n$ |
| $1-x^2$ | $-3x$ | $-n(n+2)$ | Chebyshev, 2nd kind | $U_n$ |
| $1$ | $-2x$ | $-2n$ | Hermite | $H_n$ |
| $x$ | $\alpha+1-x$ | $-n$ | Laguerre | $L_n^{(\alpha)}$ |
Sparse sums of orthogonal polynomials

Function: $f(x) = \sum_{j=1}^{M} c_{n_j} Q_{n_j}(x)$.
We want: $c_{n_j} \in \mathbb{C} \setminus \{0\}$ and the indices $n_j$ of the active basis polynomials $Q_{n_j}$.

Now $f$ can be uniquely recovered from
$$F(L_{p,q}^k f) = L_{p,q}^k f(x_0) = \sum_{j=1}^{M} c_{n_j} \lambda_{n_j}^k Q_{n_j}(x_0), \quad k = 0, \dots, 2M-1.$$

Theorem. For each polynomial $f$ and each $x \in \mathbb{R}$, the values $L_{p,q}^k f(x)$, $k = 0, \dots, 2M-1$, are uniquely determined by $f^{(m)}(x)$, $m = 0, \dots, 4M-2$. If $p(x_0) = 0$, then $L_{p,q} f(x_0)$ reduces to $L_{p,q} f(x_0) = q(x_0) f'(x_0)$, and the values $L_{p,q}^k f(x_0)$, $k = 0, \dots, 2M-1$, can be determined uniquely by $f^{(m)}(x_0)$, $m = 0, \dots, 2M-1$.
Example: sparse Laguerre expansion

Operator equation: $x\,(L_n^{(\alpha)})''(x) + (\alpha+1-x)\,(L_n^{(\alpha)})'(x) = -n\, L_n^{(\alpha)}(x)$.

Sparse Laguerre expansion: $f(x) = \sum_{j=1}^{6} c_{n_j} L_{n_j}^{(0)}(x)$ (with $\alpha = 0$).
Given values: $f(0), f'(0), \dots, f^{(11)}(0)$.

| $j$ | $n_j$ | $c_{n_j}$ | $\tilde n_j$ | $\tilde c_{n_j}$ |
|---|---|---|---|---|
| 1 | 142 | 3 | 142.0000000018223 | 2.999999999999987 |
| 2 | 125 | 1 | 125.0000000494359 | 1.000000000000034 |
| 3 | 91 | 2 | 90.9999998114290 | 2.000000000000063 |
| 4 | 69 | 3 | 69.0000003316075 | 3.000000000000058 |
| 5 | 53 | 1 | 53.0000003445395 | 0.999999999999988 |
| 6 | 11 | 2 | 10.9999999973030 | 2.000000000000004 |
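A small numerical sketch of the Laguerre case (my own toy example, with degrees and coefficients much smaller than in the table above): for $\alpha = 0$ we have $\lambda_n = -n$ and $L_n^{(0)}(0) = 1$, so $L_{p,q}^k f(0) = \sum_j c_j (-n_j)^k$. Here these values are simulated directly from the eigenvalue relation; in practice they would be assembled from the derivatives $f^{(m)}(0)$ as in the theorem.

```python
import numpy as np

# 3-sparse Laguerre expansion (alpha = 0): lambda_n = -n, L_n(0) = 1,
# hence L^k f(0) = sum_j c_j * (-n_j)**k.
n_true, c_true = np.array([3, 7, 12]), np.array([2.0, -1.0, 0.5])
M = 3
h = np.array([np.sum(c_true * (-n_true.astype(float)) ** k)
              for k in range(2 * M)])

# Generalized Prony steps on the simulated sequence h.
H = np.array([[h[l + m] for l in range(M)] for m in range(M)])
p = np.linalg.solve(H, -h[M:2 * M])
lam = np.roots(np.concatenate(([1.0], p[::-1]))).real   # lam_j = -n_j
n = np.rint(-lam).astype(int)                           # active degrees
V = lam[np.newaxis, :] ** np.arange(2 * M)[:, np.newaxis]
c = np.linalg.lstsq(V, h, rcond=None)[0]
```

Since $\lambda_n = -n$, the sequence grows like $n_M^{2M-1}$, which explains why the table's reconstruction of degrees up to 142 from 12 derivative values is already numerically delicate.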
Application to linear operators: recovery of sparse vectors

Choose the operator $D: \mathbb{C}^N \to \mathbb{C}^N$,
$$D\mathbf{x} := \mathrm{diag}(d_0, \dots, d_{N-1})\, \mathbf{x}$$
with pairwise different $d_j$. Eigenvectors of $D$: $D \mathbf{e}_j = d_j \mathbf{e}_j$, $j = 0, \dots, N-1$.

We want to reconstruct $\mathbf{x} = \sum_{j=1}^{M} c_{n_j} \mathbf{e}_{n_j}$ with $c_{n_j} \in \mathbb{C}$, $0 \le n_1 < \dots < n_M \le N-1$. We need $F(D^l \mathbf{x})$, $l = 0, \dots, 2M-1$.

Let $F(\mathbf{x}) := \mathbf{1}^T \mathbf{x} = \sum_{j=0}^{N-1} x_j$. Then $F(D^l \mathbf{x}) = \mathbf{1}^T D^l \mathbf{x} = \mathbf{a}_l^T \mathbf{x}$, where $\mathbf{a}_l^T = (d_0^l, \dots, d_{N-1}^l)$, $l = 0, \dots, 2M-1$.
Example: sparse vectors

Choose $D = \mathrm{diag}(\omega_N^0, \omega_N^1, \dots, \omega_N^{N-1})$, where $\omega_N := e^{2\pi i/N}$ denotes the $N$-th root of unity. Then an $M$-sparse vector $\mathbf{x}$ can be recovered from $\mathbf{y} = F_{2M,N}\, \mathbf{x}$, where
$$F_{2M,N} = \bigl(\omega_N^{kl}\bigr)_{k=0,\,l=0}^{2M-1,\,N-1} \in \mathbb{C}^{2M \times N}.$$
More generally, choose $\sigma \in \mathbb{N}$ with $(\sigma, N) = 1$ and $D = \mathrm{diag}(\omega_N^0, \omega_N^{\sigma}, \dots, \omega_N^{\sigma(N-1)})$. Then an $M$-sparse vector $\mathbf{x}$ can be recovered from $\mathbf{y} = F_{2M,N}\, \mathbf{x}$, where
$$F_{2M,N} = \bigl(\omega_N^{\sigma k l}\bigr)_{k=0,\,l=0}^{2M-1,\,N-1} \in \mathbb{C}^{2M \times N}.$$
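In this Fourier case the sequence $y_l = \sum_j c_{n_j} (\omega_N^{n_j})^l$ is again a sparse exponential sum, and the support indices are read off from the angles of the recovered unit roots. A NumPy sketch (my own; $\sigma = 1$, and the angle-to-index rounding is my choice):

```python
import numpy as np

def recover_sparse_vector(y, M, N):
    """Recover an M-sparse x in C^N from y = F_{2M,N} x,
    where F_{2M,N} = (omega_N**(k*l)), omega_N = exp(2*pi*1j/N)."""
    y = np.asarray(y, dtype=complex)
    H = np.array([[y[l + m] for l in range(M)] for m in range(M)])
    p = np.linalg.solve(H, -y[M:2 * M])
    z = np.roots(np.concatenate(([1.0 + 0j], p[::-1])))   # z_j = omega_N**n_j
    n = np.rint(np.angle(z) * N / (2 * np.pi)).astype(int) % N
    V = z[np.newaxis, :] ** np.arange(2 * M)[:, np.newaxis]
    c = np.linalg.lstsq(V, y, rcond=None)[0]
    x = np.zeros(N, dtype=complex)
    x[n] = c
    return x

# 2-sparse vector in C^16, recovered from only 4 Fourier samples
N, M = 16, 2
x_true = np.zeros(N, dtype=complex)
x_true[3], x_true[10] = 1.0, -2.0
omega = np.exp(2j * np.pi / N)
y = [sum(x_true[k] * omega ** (k * l) for k in range(N)) for l in range(2 * M)]
x = recover_sparse_vector(y, M, N)
```

Only $2M = 4$ of the $N = 16$ Fourier samples are used, in contrast to a full inverse DFT.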
Numerical issues

Let $f = \sum_{j \in J} c_j e_j$ with $J \subset I$, $|J| = M$, and $A e_n = \lambda_n e_n$ for $n \in I$.

Algorithm (recovery of $f$)

Input: $M$, $F(A^k f)$, $k = 0, \dots, 2M-1$.

1. Solve the linear system
$$\sum_{k=0}^{M-1} p_k F(A^{k+m} f) = -F(A^{M+m} f), \quad m = 0, \dots, M-1. \qquad (1)$$

2. Form the Prony polynomial $P(z) = \sum_{k=0}^{M} p_k z^k$. Compute the zeros $\lambda_j$, $j \in J$, of $P(z)$ and determine the active eigenfunctions $e_j$, $j \in J$.

3. Compute the coefficients $c_j$ by solving the overdetermined system
$$F(A^k f) = \sum_{j \in J} c_j \lambda_j^k F(e_j), \quad k = 0, \dots, 2M-1.$$

Output: $c_j$, $e_j$, $j \in J$, determining $f$.
Numerical issues II (generalization of [Potts, Tasche 13])

Numerically stable procedure for steps 1 and 2: let
$$\mathbf{g}_k := \bigl(F(A^{k+m} f)\bigr)_{m=0}^{M-1}, \quad k = 0, \dots, M,$$
and
$$G := \bigl(F(A^{k+m} f)\bigr)_{k,m=0}^{M-1} = (\mathbf{g}_0 \ \dots \ \mathbf{g}_{M-1}), \qquad G' := (\mathbf{g}_1 \ \dots \ \mathbf{g}_M).$$
Then (1) yields $G \mathbf{p} = -\mathbf{g}_M$. Let $C(P)$ be the companion matrix of the Prony polynomial $P(z)$ with eigenvalues $\lambda_j$, $j \in J$. Then
$$G\, C(P) = G',$$
hence the $\lambda_j$ are the eigenvalues of the matrix pencil $zG - G'$, $z \in \mathbb{C}$. Now apply a QR decomposition or an SVD to $(\mathbf{g}_0 \ \dots \ \mathbf{g}_M)$.
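The pencil identity can be checked numerically. The sketch below (my own) takes the plain route, computing the eigenvalues of $G^{-1}G' = C(P)$ directly; the stabilized variant would first orthogonalize $(\mathbf{g}_0 \ \dots \ \mathbf{g}_M)$ by QR or SVD before forming the reduced pencil:

```python
import numpy as np

def pencil_eigenvalues(h, M):
    """Steps 1-2 via the matrix pencil: the lam_j are the eigenvalues of
    z*G - G', computed here as eigenvalues of G^{-1} G' = C(P)."""
    h = np.asarray(h, dtype=complex)
    G = np.array([[h[k + m] for k in range(M)] for m in range(M)])
    Gs = np.array([[h[k + m + 1] for k in range(M)] for m in range(M)])
    # solve(G, Gs) = G^{-1} G', i.e. the companion matrix C(P)
    return np.linalg.eigvals(np.linalg.solve(G, Gs))

# Example: h[k] = 2*(0.9)**k + (-0.4)**k  ->  eigenvalues {0.9, -0.4}
h_demo = [2 * 0.9**k + (-0.4)**k for k in range(4)]
lam = np.sort(pencil_eigenvalues(h_demo, 2).real)
```

This avoids forming the Prony polynomial coefficients explicitly, which is the main source of instability in the root-finding formulation.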
References

- Thomas Peter, Gerlind Plonka: A generalized Prony method for reconstruction of sparse sums of eigenfunctions of linear operators. Inverse Problems 29 (2013), 025001.
- Thomas Peter, Gerlind Plonka, Daniela Roşca: Representation of sparse Legendre expansions. Journal of Symbolic Computation 50 (2013), 159-169.
- Gerlind Plonka, Marius Wischerhoff: How many Fourier samples are needed for real function reconstruction? Journal of Applied Mathematics and Computing 42 (2013), 117-137.
- Thomas Peter: Generalized Prony method. PhD thesis, University of Göttingen, September 2013.
- Gerlind Plonka, Manfred Tasche: Prony methods for recovery of structured functions. GAMM-Mitteilungen, 2014, to appear.