Recovery of Sparsely Corrupted Signals

TO APPEAR IN IEEE TRANSACTIONS ON INFORMATION THEORY 1

Recovery of Sparsely Corrupted Signals

Christoph Studer, Member, IEEE, Patrick Kuppinger, Student Member, IEEE, Graeme Pope, Student Member, IEEE, and Helmut Bölcskei, Fellow, IEEE

arXiv: v4 [cs.IT] 7 Dec 2011

Abstract—We investigate the recovery of signals exhibiting a sparse representation in a general (i.e., possibly redundant or incomplete) dictionary that are corrupted by additive noise admitting a sparse representation in another general dictionary. This setup covers a wide range of applications, such as image inpainting, super-resolution, signal separation, and recovery of signals that are impaired by, e.g., clipping, impulse noise, or narrowband interference. We present deterministic recovery guarantees based on a novel uncertainty relation for pairs of general dictionaries and we provide corresponding practicable recovery algorithms. The recovery guarantees we find depend on the signal and noise sparsity levels, on the coherence parameters of the involved dictionaries, and on the amount of prior knowledge about the signal and noise support sets.

Index Terms—Uncertainty relations, signal restoration, signal separation, coherence-based recovery guarantees, ℓ1-norm minimization, greedy algorithms.

I. INTRODUCTION

We consider the problem of identifying the sparse vector x ∈ C^{N_a} from M linear and non-adaptive measurements collected in the vector

z = Ax + Be   (1)

where A ∈ C^{M×N_a} and B ∈ C^{M×N_b} are known deterministic and general (i.e., not necessarily of the same cardinality, and possibly redundant or incomplete) dictionaries, and e ∈ C^{N_b} represents a sparse noise vector. The support set of e and the corresponding nonzero entries can be arbitrary; in particular, e may also depend on x and/or the dictionary A. This recovery problem occurs in many applications, some of which are described next:

Clipping: Non-linearities in (power-)amplifiers or in analog-to-digital converters often cause signal clipping or saturation [2].
This impairment can be cast into the signal model (1) by setting B = I_M, where I_M denotes the M×M identity matrix, and rewriting (1) as z = y + e with e = g_a(y) − y. Concretely, instead of the M-dimensional signal vector y = Ax of interest, the device in question delivers g_a(y), where the function g_a(·) realizes entry-wise signal clipping to the interval [−a, +a]. The vector e will be sparse, provided the clipping level is high enough. Furthermore, in this case the support set of e can be identified prior to recovery, by simply comparing the absolute values of the entries of y to the clipping threshold a. Finally, we note that here it is essential that the noise vector e be allowed to depend on the vector x and/or the dictionary A.

Impulse noise: In numerous applications, one has to deal with the recovery of signals corrupted by impulse noise [3]. Specific applications include, e.g., reading out from unreliable memory [4] or recovery of audio signals impaired by click/pop noise, which typically occurs during playback of old phonograph records. The model in (1) is easily seen to incorporate such impairments.

Part of this paper was presented at the IEEE International Symposium on Information Theory (ISIT), Saint-Petersburg, Russia, July 2011 [1]. This work was supported in part by the Swiss National Science Foundation (SNSF) under Grant PA00P. C. Studer was with the Dept. of Information Technology and Electrical Engineering, ETH Zurich, Switzerland, and is now with the Dept. of Electrical and Computer Engineering, Rice University, Houston, TX, USA (e-mail: studer@rice.edu). P. Kuppinger was with the Dept. of Information Technology and Electrical Engineering, ETH Zurich, Switzerland, and is now with UBS, Zurich, Switzerland (e-mail: patrick.kuppinger@gmail.com). G. Pope and H. Bölcskei are with the Dept. of Information Technology and Electrical Engineering, ETH Zurich, Switzerland (e-mail: gpope@nari.ee.ethz.ch; boelcskei@nari.ee.ethz.ch).
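To make the clipping model concrete, here is a small numerical sketch (the signal, threshold, and sizes below are illustrative choices, not taken from the paper): it builds y, clips it entry-wise to [−a, +a], and confirms that the resulting corruption e is sparse, with a support that can be read off the observed signal.

```python
import numpy as np

M = 64
t = np.arange(M)
# Illustrative band-limited signal (a sum of two tones); not an example from the paper.
y = np.cos(2 * np.pi * 3 * t / M) + 0.2 * np.cos(2 * np.pi * 7 * t / M)

a = 1.05                                   # clipping threshold
z = np.clip(y, -a, a)                      # observed signal g_a(y): entry-wise clipping
e = z - y                                  # corruption e = g_a(y) - y

support_e = np.flatnonzero(e != 0)         # true support of the sparse corruption
detected = np.flatnonzero(np.abs(z) == a)  # clipped entries sit exactly at +/-a
assert np.array_equal(support_e, detected)
print(f"{len(support_e)} of {M} samples clipped")
```

Because the clipping level is close to the signal peak, only a few samples saturate, so e is sparse and its support is identifiable prior to recovery, which is precisely what is exploited later in the paper.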
Just set B = I_M and let e be the impulse-noise vector. We would like to emphasize the generality of (1), which allows impulse noise that is sparse in general dictionaries B.

Narrowband interference: In many applications one is interested in recovering audio, video, or communication signals that are corrupted by narrowband interference. Electric hum, as it may occur in improperly designed audio or video equipment, is a typical example of such an impairment. Electric hum typically exhibits a sparse representation in the Fourier basis, as it (mainly) consists of a tone at some base frequency and a series of corresponding harmonics; this is captured by setting B = F_M in (1), where F_M is the M-dimensional discrete Fourier transform (DFT) matrix defined below in (2).

Super-resolution and inpainting: Our framework also encompasses super-resolution [5], [6] and inpainting [7] for images, audio, and video signals. In both applications, only a subset of the entries of the (full-resolution) signal vector y = Ax is available, and the task is to fill in the missing entries of y. The missing entries are accounted for by choosing the vector e such that the entries of z = y + e corresponding to the missing entries in y are set to some (arbitrary) value, e.g., 0. The missing entries of y are then filled in by first recovering x from z and then computing y = Ax. Note that in both applications the support set E is known (i.e., the locations of the missing entries can easily be identified) and the dictionary A is typically redundant (see, e.g., [8] for a corresponding discussion), i.e., A has more dictionary elements (columns) than rows, which

demonstrates the need for recovery results that apply to general (i.e., possibly redundant) dictionaries.

Signal separation: Separation of (audio or video) signals into two distinct components also fits into our framework. A prominent example of this task is the separation of texture from cartoon parts in images (see [9], [10] and references therein). In the language of our setup, the dictionaries A and B are chosen such that they allow for sparse representation of the two distinct features; x and e are the corresponding coefficients describing these features (sparsely). Note that here the vector e no longer plays the role of (undesired) noise. Signal separation then amounts to simultaneously extracting the sparse vectors x and e from the observation (e.g., the image) z = Ax + Be.

Naturally, it is of significant practical interest to identify fundamental limits on the recovery of x (and e, if appropriate) from z in (1). For the noiseless case z = Ax, such recovery guarantees are known [11]–[13] and typically set limits on the maximum allowed number of nonzero entries of x or, more colloquially, on the sparsity level of x. These recovery guarantees are usually expressed in terms of restricted isometry constants (RICs) [14], [15] or in terms of the coherence parameter [11]–[13], [16] of the dictionary A. In contrast to coherence parameters, RICs can, in general, not be computed efficiently. In this paper, we focus exclusively on coherence-based recovery guarantees. For the case of unstructured noise, i.e., z = Ax + n with no constraints imposed on n apart from ‖n‖_2 < ∞, coherence-based recovery guarantees were derived in [16]–[20]. The corresponding results, however, do not guarantee perfect recovery of x: they either bound the recovery error from above by a function of ‖n‖_2 or guarantee perfect recovery of only the support set of x.
Such results are to be expected, as a consequence of the generality of the setup in terms of the assumptions on the noise vector n.

A. Contributions

In this paper, we consider the following questions: 1) under which conditions can the vector x (and the vector e, if appropriate) be recovered perfectly from the (sparsely corrupted) observation z = Ax + Be, and 2) can we formulate practical recovery algorithms with corresponding (analytical) performance guarantees? Sparsity of the signal vector x and the error vector e will turn out to be key in answering these questions. More specifically, based on an uncertainty relation for pairs of general dictionaries, we establish recovery guarantees that depend on the number of nonzero entries in x and e and on the coherence parameters of the dictionaries A and B. These recovery guarantees are obtained for the following cases: I) the support sets of both x and e are known (prior to recovery); II) the support set of only x or only e is known; III) the number of nonzero entries of only x or only e is known; and IV) nothing is known about x and e. We formulate efficient recovery algorithms and derive corresponding performance guarantees. Finally, we compare our analytical recovery thresholds to numerical results and demonstrate the application of our algorithms and recovery guarantees to an image inpainting example.

B. Outline of the paper

The remainder of the paper is organized as follows. In Section II, we briefly review relevant previous results. In Section III, we derive a novel uncertainty relation that lays the foundation for the recovery guarantees reported in Section IV. A discussion of our results is provided in Section V, and numerical results are presented in Section VI. We conclude in Section VII.

C. Notation

Lowercase boldface letters stand for column vectors and uppercase boldface letters designate matrices.
For the matrix M, we denote its transpose and conjugate transpose by M^T and M^H, respectively, its (Moore–Penrose) pseudo-inverse by M† = (M^H M)⁻¹ M^H, its kth column by m_k, and the entry in its kth row and lth column by [M]_{k,l}. The kth entry of the vector m is [m]_k. The space spanned by the columns of M is denoted by R(M). The M×M identity matrix is denoted by I_M, the M×N all-zeros matrix by 0_{M,N}, and the all-zeros vector of dimension M by 0_M. The M×M discrete Fourier transform matrix F_M is defined as

[F_M]_{k,l} = (1/√M) exp(−2πi(k − 1)(l − 1)/M), k, l = 1, ..., M   (2)

where i = √−1. The Euclidean (or ℓ2) norm of the vector x is denoted by ‖x‖_2, ‖x‖_1 stands for the ℓ1-norm of x, and ‖x‖_0 designates the number of nonzero entries in x. Throughout the paper, we assume that the columns of the dictionaries A and B have unit ℓ2-norm. The minimum and maximum eigenvalues of the positive-semidefinite matrix M are denoted by λ_min(M) and λ_max(M), respectively. The spectral norm of the matrix M is ‖M‖_2 = √λ_max(M^H M). Sets are designated by uppercase calligraphic letters; the cardinality of the set T is |T|. The complement of a set S (in some superset T) is denoted by S^c. For two sets S_1 and S_2, s ∈ S_1 + S_2 means that s is of the form s = s_1 + s_2, where s_1 ∈ S_1 and s_2 ∈ S_2. The support set of the vector m is designated by supp(m). The matrix M_T is obtained from M by retaining the columns of M with indices in T; the vector m_T is obtained analogously. We define the N×N diagonal (projection) matrix P_S for the set S ⊆ {1, ..., N} as follows: [P_S]_{k,l} = 1 if k = l and k ∈ S, and 0 otherwise. For x ∈ R, we set [x]_+ = max{x, 0}.

II. REVIEW OF RELEVANT PREVIOUS RESULTS

Recovery of the vector x from the sparsely corrupted measurement z = Ax + Be corresponds to a sparse-signal recovery problem subject to structured (i.e., sparse) noise. In this section, we briefly review relevant existing results for sparse-signal recovery from noiseless measurements, and we summarize the results available for recovery in the presence of unstructured and structured noise.
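As a quick numerical sanity check of definition (2), one can build F_M explicitly and verify that it is unitary and matches a normalized FFT (a sketch; the size M = 8 is an arbitrary choice):

```python
import numpy as np

def dft_matrix(M: int) -> np.ndarray:
    """Unitary M x M DFT matrix as in (2): [F]_{k,l} = exp(-2*pi*i*(k-1)*(l-1)/M)/sqrt(M)."""
    k, l = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    return np.exp(-2j * np.pi * k * l / M) / np.sqrt(M)

F = dft_matrix(8)
# Unitary: F^H F = I, so all columns have unit l2-norm, as required of a dictionary.
assert np.allclose(F.conj().T @ F, np.eye(8))
# Matches NumPy's orthonormally normalized FFT applied to the identity.
assert np.allclose(F, np.fft.fft(np.eye(8), norm="ortho"))
```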

STUDER ET AL.: RECOVERY OF SPARSELY CORRUPTED SIGNALS

A. Recovery in the noiseless case

Recovery of x from z = Ax, where A is redundant (i.e., M < N_a), amounts to solving an underdetermined linear system of equations. Hence, there are infinitely many solutions x, in general. However, under the assumption of x being sparse, the situation changes drastically. More specifically, one can recover x from the observation z = Ax by solving

(P0) minimize ‖x̃‖_0 subject to z = Ax̃.

This approach results, however, in prohibitive computational complexity, even for small problem sizes. Two of the most popular and computationally tractable alternatives to solving (P0) by an exhaustive search are basis pursuit (BP) [11]–[13], [21]–[23] and orthogonal matching pursuit (OMP) [13], [24], [25]. BP is essentially a convex relaxation of (P0) and amounts to solving

(BP) minimize ‖x̃‖_1 subject to z = Ax̃.

OMP is a greedy algorithm that recovers the vector x by iteratively selecting the column of A that is most correlated with the difference between z and its current best (in ℓ2-norm sense) approximation. The questions that arise naturally are: Under which conditions does (P0) have a unique solution, and when do BP and/or OMP deliver this solution? To formulate the answer to these questions, define n_x = ‖x‖_0 and the coherence of the dictionary A as

µ_a = max_{k,l,k≠l} |a_k^H a_l|.   (3)

As shown in [11]–[13], a sufficient condition for x to be the unique solution of (P0) applied to z = Ax, and for BP and OMP to deliver this solution, is

n_x < (1 + µ_a⁻¹)/2.   (4)

B. Recovery in the presence of unstructured noise

Coherence-based recovery guarantees in the presence of unstructured (and deterministic) noise, i.e., for z = Ax + n with no constraints imposed on n apart from ‖n‖_2 < ∞, were derived in [16]–[20] and the references therein. Specifically, it was shown in [16] that a suitably modified version of BP, referred to as BP denoising (BPDN), recovers an estimate x̂ satisfying ‖x − x̂‖_2 < C‖n‖_2, provided that (4) is met.
Here, C > 0 depends on the coherence µ_a and on the sparsity level n_x of x. Note that the support set of the estimate x̂ may differ from that of x. Another result, reported in [17], states that OMP delivers the correct support set (but does not perfectly recover the nonzero entries of x) provided that

n_x < (1 + µ_a⁻¹)/2 − ‖n‖_2/(µ_a |x_min|)   (5)

where |x_min| denotes the absolute value of the component of x with smallest nonzero magnitude. The recovery condition (5) yields sensible results only if ‖n‖_2/|x_min| is small. Results similar to those reported in [17] were obtained in [18], [19]. Recovery guarantees in the case of stochastic noise n can be found in [19], [20]. We finally point out that perfect recovery of x is, in general, impossible in the presence of unstructured noise. In contrast, as we shall see below, perfect recovery is possible under structured noise according to (1).

C. Recovery guarantees in the presence of structured noise

As outlined in the introduction, many practically relevant signal recovery problems can be formulated as (sparse) signal recovery from sparsely corrupted measurements, a problem that seems to have received comparatively little attention in the literature so far and does not appear to have been developed systematically. A straightforward way leading to recovery guarantees in the presence of structured noise, as in (1), follows from rewriting (1) as

z = Ax + Be = Dw   (6)

with the concatenated dictionary D = [A B] and the stacked vector w = [x^T e^T]^T. This formulation allows us to invoke the recovery guarantee in (4) for the concatenated dictionary D, which delivers a sufficient condition for w (and hence, x and e) to be the unique solution of (P0) applied to z = Dw and for BP and OMP to deliver this solution [11], [21].
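The coherence parameters in (3) and the concatenated-dictionary approach are inexpensive to evaluate numerically, which is one of their main attractions over RICs. The sketch below (sizes are illustrative) computes the coherences of the Fourier–identity pair A = F_M, B = I_M, and the sparsity threshold obtained by applying (4) to the concatenated dictionary D = [A B]:

```python
import numpy as np

def coherence(D: np.ndarray) -> float:
    """Largest |inner product| between distinct unit-norm columns, cf. (3)."""
    G = np.abs(D.conj().T @ D)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

M = 64
A = np.fft.fft(np.eye(M), norm="ortho")  # A = F_M, an ONB: mu_a = 0
B = np.eye(M)                            # B = I_M:        mu_b = 0

mu_a, mu_b = coherence(A), coherence(B)
D = np.hstack([A, B])                    # concatenated dictionary of (6)
mu_d = coherence(D)                      # here this equals 1/sqrt(M) = 1/8

n_w_max = 0.5 * (1 + 1 / mu_d)           # threshold (4) applied to D: bound on n_w
print(mu_a, mu_b, mu_d, n_w_max)         # mu_d = 0.125, n_w_max = 4.5
```

Even though A and B are individually perfectly incoherent ONBs, the concatenated dictionary only tolerates n_x + n_e ≤ 4 nonzero entries in total, which illustrates why the structure-agnostic condition discussed next is restrictive.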
However, the recovery condition so obtained,

n_w = n_x + n_e < (1 + µ_d⁻¹)/2,   (7)

with the dictionary coherence µ_d defined as

µ_d = max_{k,l,k≠l} |d_k^H d_l|,   (8)

ignores the structure of the recovery problem at hand, i.e., it is agnostic to i) the fact that D consists of the dictionaries A and B with known coherence parameters µ_a and µ_b, respectively, and ii) knowledge about the support sets of x and/or e that may be available prior to recovery. As shown in Section IV, exploiting these two structural aspects of the recovery problem yields superior (i.e., less restrictive) recovery thresholds. Note that condition (7) guarantees perfect recovery of x (and e) independently of the ℓ2-norm of the noise vector, i.e., ‖Be‖_2 may be arbitrarily large. This is in stark contrast to the recovery guarantees for noisy measurements in [16] and in (5) (originally reported in [17]). Special cases of the general setup (1), explicitly taking into account certain structural aspects of the recovery problem, were considered in [3], [14], [26]–[30]. Specifically, in [26] it was shown that for A = F_M, B = I_M, and knowledge of the support set of e, perfect recovery of the M-dimensional vector x is possible if

n_x n_e < M/2   (9)

where n_e = ‖e‖_0. In [27], [28], recovery guarantees based on the RIC of the matrix A for the case where B is an orthonormal basis (ONB), and where the support set of e is either known or unknown, were reported; these recovery guarantees are particularly handy when A is, for example, i.i.d. Gaussian [31], [32]. However, results for the case of A and B both general (and deterministic) dictionaries, taking into account prior knowledge about the support sets of x and

e seem to be missing in the literature. Recovery guarantees for A i.i.d. nonzero-mean Gaussian, B = I_M, and the support sets of x and e unknown were reported in [29]. In [30], recovery guarantees under a probabilistic model on both x and e and for unitary A and B = I_M were reported, showing that x can be recovered perfectly with high probability (and independently of the ℓ2-norms of x and e). The problem of sparse-signal recovery in the presence of impulse noise (i.e., B = I_M) was considered in [3], where a particular nonlinear measurement process combined with a non-convex program for signal recovery was proposed. In [14], signal recovery in the presence of impulse noise based on ℓ1-norm minimization was investigated. The setup in [14], however, differs considerably from the one considered in this paper, as A in [14] needs to be tall (i.e., M > N_a) and the vector x to be recovered is not necessarily sparse. We conclude this literature overview by noting that the present paper is inspired by [26]. Specifically, we note that the recovery guarantee (9) reported in [26] is obtained from an uncertainty relation that puts limits on how sparse a given signal can simultaneously be in the Fourier basis and in the identity basis. Inspired by this observation, we start our discussion by presenting an uncertainty relation for pairs of general dictionaries, which forms the basis for the recovery guarantees reported later in this paper.

III. A GENERAL UNCERTAINTY RELATION FOR ɛ-CONCENTRATED VECTORS

We next present a novel uncertainty relation, which extends the uncertainty relation in [33, Lem. 1] for pairs of general dictionaries to vectors that are ɛ-concentrated rather than perfectly sparse. As shown in Section IV, this extension constitutes the basis for the derivation of recovery guarantees for BP.

|P| |Q| ≥ [(1 + µ_a)(1 − ɛ_P) − |P|µ_a]_+ [(1 + µ_b)(1 − ɛ_Q) − |Q|µ_b]_+ / µ_m²   (10)
A. The uncertainty relation

Define the mutual coherence between the dictionaries A and B as

µ_m = max_{k,l} |a_k^H b_l|.

Furthermore, we will need the following definition, which appeared previously in [26].

Definition 1: A vector r ∈ C^{N_r} is said to be ɛ_R-concentrated to the set R ⊆ {1, ..., N_r} if ‖P_R r‖_1 ≥ (1 − ɛ_R)‖r‖_1, where ɛ_R ∈ [0, 1]. We say that the vector r is perfectly concentrated to the set R and, hence, |R|-sparse, if ‖P_R r‖_1 = ‖r‖_1, i.e., if ɛ_R = 0.

We can now state the following uncertainty relation for pairs of general dictionaries and for ɛ-concentrated vectors.

Theorem 1: Let A ∈ C^{M×N_a} be a dictionary with coherence µ_a, B ∈ C^{M×N_b} a dictionary with coherence µ_b, and denote the mutual coherence between A and B by µ_m. Let s be a vector in C^M that can be represented as a linear combination of columns of A and, similarly, as a linear combination of columns of B. Concretely, there exists a pair of vectors p ∈ C^{N_a} and q ∈ C^{N_b} such that s = Ap = Bq (we exclude the trivial case where p = 0_{N_a} and q = 0_{N_b}).¹ If p is ɛ_P-concentrated to P and q is ɛ_Q-concentrated to Q, then (10) holds.

Proof: The proof follows closely that of [33, Lem. 1], which applies to perfectly concentrated vectors p and q. We therefore only summarize the modifications to the proof of [33, Lem. 1]. Instead of using Σ_{p∈P} |[p]_p| = ‖p‖_1 to arrive at [33, Eq. 29]

[(1 + µ_a) − |P|µ_a]_+ ‖p‖_1 ≤ |P| µ_m ‖q‖_1,

we invoke Σ_{p∈P} |[p]_p| ≥ (1 − ɛ_P)‖p‖_1 to arrive at the following inequality, valid for ɛ_P-concentrated vectors p:

[(1 + µ_a)(1 − ɛ_P) − |P|µ_a]_+ ‖p‖_1 ≤ |P| µ_m ‖q‖_1.   (11)

Similarly, ɛ_Q-concentration, i.e., Σ_{q∈Q} |[q]_q| ≥ (1 − ɛ_Q)‖q‖_1, is used to replace [33, Eq. 30] by

[(1 + µ_b)(1 − ɛ_Q) − |Q|µ_b]_+ ‖q‖_1 ≤ |Q| µ_m ‖p‖_1.   (12)

The uncertainty relation (10) is then obtained by multiplying (11) and (12) and dividing the resulting inequality by ‖p‖_1 ‖q‖_1.

In the case where both p and q are perfectly concentrated, i.e., ɛ_P = ɛ_Q = 0, Theorem 1 reduces to the uncertainty relation reported in [33, Lem. 1], which we restate next for the sake of completeness.
Corollary 2 ([33, Lem. 1]): If P = supp(p) and Q = supp(q), the following holds:

|P| |Q| ≥ [1 − µ_a(|P| − 1)]_+ [1 − µ_b(|Q| − 1)]_+ / µ_m².   (13)

As detailed in [33], [34], the uncertainty relation in Corollary 2 generalizes the uncertainty relation for two orthonormal bases (ONBs) found in [23]. Furthermore, it extends the uncertainty relations provided in [35] for pairs of square dictionaries (having the same number of rows and columns) to pairs of general dictionaries A and B.

B. Tightness of the uncertainty relation

In certain special cases it is possible to find signals that satisfy the uncertainty relation (10) with equality. As in [26], consider A = F_M and B = I_M, so that µ_m = 1/√M, and define the comb signal containing equidistant spikes of unit height as

[δ_t]_l = 1 if (l − 1) mod t = 0, and 0 otherwise,

¹The uncertainty relation continues to hold if either p = 0_{N_a} or q = 0_{N_b}, but does not apply to the trivial case p = 0_{N_a} and q = 0_{N_b}. In all three cases we have s = 0_M.

where we shall assume that t divides M. It can be shown that the vectors p = δ_√M and q = δ_√M, both having √M nonzero entries, satisfy F_M p = I_M q. If P = supp(p) and Q = supp(q), the vectors p and q are perfectly concentrated to P and Q, respectively, i.e., ɛ_P = ɛ_Q = 0. Since |P| = |Q| = √M and µ_m = 1/√M, it follows that |P| |Q| = 1/µ_m² = M and, hence, p = q = δ_√M satisfies (10) with equality.

We next show that for pairs of general dictionaries A and B, finding signals that satisfy the uncertainty relation (10) with equality is NP-hard. For the sake of simplicity, we restrict ourselves to the case P = supp(p) and Q = supp(q), which implies |P| = ‖p‖_0 and |Q| = ‖q‖_0. Next, consider the problem

(U0) minimize ‖p‖_0 ‖q‖_0 subject to Ap = Bq, ‖p‖_0 ≥ 1, ‖q‖_0 ≥ 1.

Since we are interested in the minimum of ‖p‖_0 ‖q‖_0 for nonzero vectors p and q, we imposed the constraints ‖p‖_0 ≥ 1 and ‖q‖_0 ≥ 1 to exclude the case where p = 0_{N_a} and/or q = 0_{N_b}. Now, it follows that for the particular choice B = z ∈ C^M, and hence q = q ∈ C \ {0} (note that we exclude the case q = 0 as a consequence of the requirement ‖q‖_0 ≥ 1), the problem (U0) reduces to

(U0′) minimize ‖x‖_0 subject to Ax = z

where x = p/q. However, as (U0′) is equivalent to (P0), which is NP-hard [36], in general, we can conclude that finding a pair p and q satisfying the uncertainty relation (10) with equality is NP-hard.

IV. RECOVERY OF SPARSELY CORRUPTED SIGNALS

Based on the uncertainty relation in Theorem 1, we next derive conditions that guarantee perfect recovery of x (and of e, if appropriate) from the (sparsely corrupted) measurement z = Ax + Be. These conditions will be seen to depend on the number of nonzero entries of x and e, and on the coherence parameters µ_a, µ_b, and µ_m. Moreover, in contrast to (5), the recovery conditions we find will not depend on the ℓ2-norm of the noise vector Be, which is hence allowed to be arbitrarily large.
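The comb-signal example of Section III-B is easy to verify numerically (a sketch assuming M = 64, so that t = √M = 8 divides M):

```python
import numpy as np

M = 64
t = int(np.sqrt(M))                      # sqrt(M) = 8 divides M
delta = np.zeros(M)
delta[::t] = 1.0                         # comb delta_sqrt(M): spikes at l = 1, t+1, 2t+1, ...

F = np.fft.fft(np.eye(M), norm="ortho")  # unitary DFT matrix F_M
# The comb is its own DFT, so p = q = delta_sqrt(M) satisfies F_M p = I_M q.
assert np.allclose(F @ delta, delta)

# Both representations have sqrt(M) nonzero entries, and mu_m = 1/sqrt(M),
# so |P||Q| = M = 1/mu_m^2: the uncertainty relation holds with equality.
mu_m = np.abs(F.conj().T @ np.eye(M)).max()
assert np.isclose(np.count_nonzero(delta) ** 2, 1 / mu_m**2)
```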
We consider the following cases: I) the support sets of both x and e are known (prior to recovery); II) the support set of only x or only e is known; III) the number of nonzero entries of only x or only e is known; and IV) nothing is known about x and e. The uncertainty relation in Theorem 1 is the basis for the recovery guarantees in all four cases considered. To simplify notation, motivated by the form of the right-hand side (RHS) of (13), we define the function

f(u, v) = [1 − µ_a(u − 1)]_+ [1 − µ_b(v − 1)]_+ / µ_m².

In the remainder of the paper, X denotes supp(x) and E stands for supp(e). We furthermore assume that the dictionaries A and B are known perfectly to the recovery algorithms. Moreover, we assume that µ_m > 0. If µ_m = 0, the space spanned by the columns of A is orthogonal to the space spanned by the columns of B. This makes the separation of the components Ax and Be given z straightforward. Once this separation is accomplished, x can be recovered from Ax using (P0), BP, or OMP, if (4) is satisfied.

A. Case I: Knowledge of X and E

We start with the case where both X and E are known prior to recovery. The values of the nonzero entries of x and e are unknown. This scenario is relevant, for example, in applications requiring recovery of clipped band-limited signals with known spectral support X. Here, we would have A = F_M and B = I_M, and E can be determined as follows: compare the measurements [z]_i, i = 1, ..., M, to the clipping threshold a; if |[z]_i| = a, add the corresponding index i to E. Recovery of x from z is then performed as follows. We first rewrite the input-output relation in (1) as

z = A_X x_X + B_E e_E = D_{X,E} s_{X,E}

with the concatenated dictionary D_{X,E} = [A_X B_E] and the stacked vector s_{X,E} = [x_X^T e_E^T]^T. Since X and E are known, we can recover s_{X,E} perfectly, and hence the nonzero entries of both x and e, if the pseudo-inverse D_{X,E}† exists. In this case, we obtain s_{X,E} as

s_{X,E} = D_{X,E}† z.   (14)
The following theorem states a sufficient condition for D_{X,E} to have full (column) rank, which implies existence of the pseudo-inverse D_{X,E}†. This condition depends on the coherence parameters µ_a, µ_b, and µ_m of the involved dictionaries A and B, and on X and E through the cardinalities |X| and |E|, i.e., the numbers of nonzero entries in x and e, respectively.

Theorem 3: Let z = Ax + Be with X = supp(x) and E = supp(e). Define n_x = ‖x‖_0 and n_e = ‖e‖_0. If

n_x n_e < f(n_x, n_e),   (15)

then the concatenated dictionary D_{X,E} = [A_X B_E] has full (column) rank.

Proof: See Appendix A.

For the special case A = F_M and B = I_M (so that µ_a = µ_b = 0 and µ_m = 1/√M), the recovery condition (15) reduces to n_x n_e < M, a result obtained previously in [26]. Tightness of (15) can be established by noting that the pairs x = λδ_√M, e = (1 − λ)δ_√M with λ ∈ (0, 1) and x′ = λ′δ_√M, e′ = (1 − λ′)δ_√M with λ′ ≠ λ and λ′ ∈ (0, 1) both satisfy (15) with equality and lead to the same measurement outcome z = F_M x + e = F_M x′ + e′ [34].

It is interesting to observe that Theorem 3 yields a sufficient condition on n_x and n_e for any (M − n_e) × n_x submatrix of A to have full (column) rank. To see this, consider the special case B = I_M and, hence, D_{X,E} = [A_X I_E]. Condition (15) characterizes pairs (n_x, n_e) for which all matrices D_{X,E} with n_x = |X| and n_e = |E| are guaranteed to have full (column) rank. Hence, the submatrix consisting of all rows of A_X with row index in E^c must have full (column) rank as well. Since the result holds for all support sets X and E with |X| = n_x and |E| = n_e, all possible (M − n_e) × n_x submatrices of A must have full (column) rank.

B. Case II: Only X or only E is known

Next, we find recovery guarantees for the case where either only X or only E is known prior to recovery.
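Returning to Case I, the recovery step (14) is a single least-squares solve once X and E are given. A minimal sketch with A = F_M and B = I_M (supports and amplitudes are made up for illustration; here n_x n_e = 6 < M, so Theorem 3 guarantees full column rank):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 64
A = np.fft.fft(np.eye(M), norm="ortho")  # A = F_M
B = np.eye(M)                            # B = I_M

X, E = [3, 17, 40], [5, 22]              # known supports: n_x = 3, n_e = 2
x = np.zeros(M, dtype=complex); x[X] = rng.standard_normal(3)
e = np.zeros(M, dtype=complex); e[E] = rng.standard_normal(2)
z = A @ x + B @ e                        # sparsely corrupted measurement (1)

D = np.hstack([A[:, X], B[:, E]])        # D_{X,E} = [A_X B_E]
s = np.linalg.pinv(D) @ z                # s_{X,E} = pinv(D_{X,E}) z, cf. (14)

x_hat = np.zeros(M, dtype=complex); x_hat[X] = s[:3]
e_hat = np.zeros(M, dtype=complex); e_hat[E] = s[3:]
assert np.allclose(x_hat, x) and np.allclose(e_hat, e)
```

Note that recovery here is exact despite the corruption having arbitrary magnitude: only the sparsity and the supports matter, not ‖Be‖_2.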

1) Recovery when E is known and X is unknown: A prominent application for this setup is the recovery of clipped band-limited signals [7], [37], where the signal's spectral support, i.e., X, is unknown. The support set E can be identified as detailed previously in Section IV-A. Further application examples for this setup include inpainting and super-resolution [5]–[7] of signals that admit a sparse representation in A (but with unknown support set X). The locations of the missing elements in y = Ax are known (and correspond, e.g., to missing paint elements in frescoes), i.e., the set E can be determined prior to recovery. Inpainting and super-resolution then amount to reconstructing the vector x from the sparsely corrupted measurement z = Ax + e and computing y = Ax. The setting of E known and X unknown was considered previously in [26] for the special case A = F_M and B = I_M. The recovery condition (18) in Theorem 4 below extends the result in [26, Ths. 5 and 9] to pairs of general dictionaries A and B.

Theorem 4: Let z = Ax + Be, where E = supp(e) is known. Consider the problem

(P0, E) minimize ‖x̃‖_0 subject to Ax̃ ∈ ({z} + R(B_E))   (16)

and the convex program

(BP, E) minimize ‖x̃‖_1 subject to Ax̃ ∈ ({z} + R(B_E)).   (17)

If n_x = ‖x‖_0 and n_e = ‖e‖_0 satisfy

2 n_x n_e < f(2n_x, n_e),   (18)

then the unique solution of (P0, E) applied to z = Ax + Be is given by x, and (BP, E) will deliver this solution.

Proof: See Appendix B.

Solving (P0, E) requires a combinatorial search, which results in prohibitive computational complexity even for moderate problem sizes. The convex relaxation (BP, E) can, however, be solved more efficiently. Note that the constraint Ax̃ ∈ ({z} + R(B_E)) reflects the fact that any error component B_E e_E yields consistency, on account of E being known (by assumption).
For n_e = 0 (i.e., the noiseless case), the recovery threshold (18) reduces to n_x < (1 + 1/µ_a)/2, which is the well-known recovery threshold (4) guaranteeing recovery of the sparse vector x through (P0) and BP applied to z = Ax. We finally note that RIC-based guarantees for recovering x from z = Ax (i.e., recovery in the absence of (sparse) corruptions) that take into account partial knowledge of the signal support set X were developed in [38], [39].

Tightness of (18) can be established by setting A = F_M and B = I_M and constructing a pair of comb-type signals x and x′ (with corresponding error vectors e and e′) that satisfy (18) with equality and lead to the same measurement outcome z. One can furthermore verify that x and x′ are both in the admissible set specified by the constraints in (P0, E) and (BP, E) and that ‖x‖_0 = ‖x′‖_0 and ‖x‖_1 = ‖x′‖_1. Hence, (P0, E) and (BP, E) cannot distinguish between x and x′ on the basis of the measurement outcome z. For a detailed discussion of this example we refer to [34].

Rather than solving (P0, E) or (BP, E), we may attempt to recover the vector x by exploiting more directly the fact that R(B_E) is known (since B and E are assumed to be known) and projecting the measurement outcome z onto the orthogonal complement of R(B_E). This approach eliminates the (sparse) noise component and leaves us with a standard sparse-signal recovery problem for the vector x. We next show that this ansatz is guaranteed to recover the sparse vector x provided that condition (18) is satisfied.

Let us detail the procedure. If the columns of B_E are linearly independent, the pseudo-inverse B_E† exists, and the projector onto the orthogonal complement of R(B_E) is given by

R_E = I_M − B_E B_E†.   (19)

Applying R_E to the measurement outcome z yields

R_E z = R_E (Ax + B_E e_E) = R_E Ax ≜ ẑ   (20)

where we used the fact that R_E B_E = 0_{M,n_e}. We are now left with the standard problem of recovering x from the modified measurement outcome ẑ = R_E Ax.
What comes to mind first is that computing the standard recovery threshold (4) for the modified dictionary R_E A should provide us with a recovery threshold for the problem of extracting x from ẑ = R_E Ax. It turns out, however, that the columns of R_E A will, in general, not have unit ℓ2-norm, an assumption underlying (4). What comes to our rescue is that under condition (18) we have (as shown in Theorem 5 below) ‖R_E a_l‖_2 > 0 for l = 1, ..., N_a. We can, therefore, normalize the modified dictionary R_E A by rewriting (20) as

ẑ = R_E A Λ x̂   (21)

where Λ is the diagonal matrix with elements [Λ]_{l,l} = 1/‖R_E a_l‖_2, l = 1, ..., N_a, and x̂ ≜ Λ⁻¹ x. Now, R_E A Λ plays the role of the dictionary (with normalized columns) and x̂ is the unknown sparse vector that we wish to recover. Obviously, supp(x̂) = supp(x), and x can be recovered from x̂ according to³ x = Λ x̂.

The following theorem shows that (18) is sufficient to guarantee the following: i) the columns of B_E are linearly independent, which guarantees the existence of B_E†; ii) ‖R_E a_l‖_2 > 0 for l = 1, ..., N_a; and iii) no nonzero vector x̃ ∈ C^{N_a} with ‖x̃‖_0 ≤ 2n_x lies in the kernel of R_E A. Hence, (18) enables perfect recovery of x from (21).

Theorem 5: If (18) is satisfied, the unique solution of (P0) applied to ẑ = R_E A Λ x̂ is given by x̂. Furthermore, BP and OMP applied to ẑ = R_E A Λ x̂ are guaranteed to recover the unique (P0)-solution.

Proof: See Appendix C.

Since condition (18) ensures that [Λ]_{l,l} > 0, l = 1, ..., N_a, the vector x can be obtained from x̂ according to x = Λ x̂. Furthermore, (18) guarantees the existence of B_E† and, hence, the nonzero entries of e can be obtained from x as e_E = B_E†(z − Ax).

³If ‖R_E a_l‖_2 > 0 for l = 1, ..., N_a, then the matrix Λ corresponds to a one-to-one mapping.
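The projection-and-normalization pipeline of (19)–(21) can be sketched end to end. The OMP routine below is a generic textbook implementation (not the authors' code), and all problem parameters are illustrative; with n_x = 2 and n_e = 3 here, condition (18) is comfortably satisfied for the Fourier–identity pair:

```python
import numpy as np

def omp(D, z, n_iter):
    """Generic textbook OMP: greedily add the column most correlated with the residual."""
    support, r = [], z.copy()
    for _ in range(n_iter):
        support.append(int(np.argmax(np.abs(D.conj().T @ r))))
        s, *_ = np.linalg.lstsq(D[:, support], z, rcond=None)
        r = z - D[:, support] @ s
    w = np.zeros(D.shape[1], dtype=complex)
    w[support] = s
    return w

rng = np.random.default_rng(3)
M = 64
A = np.fft.fft(np.eye(M), norm="ortho")      # A = F_M
B_E = np.eye(M)[:, [4, 30, 50]]              # B = I_M, known corruption support E
x = np.zeros(M, dtype=complex)
x[[2, 9]] = [1.0, -2.0]                      # n_x = 2, n_e = 3
z = A @ x + B_E @ rng.standard_normal(3)     # sparsely corrupted measurement

R_E = np.eye(M) - B_E @ np.linalg.pinv(B_E)  # projector (19) onto complement of R(B_E)
z_hat = R_E @ z                              # (20): the sparse corruption is annihilated
norms = np.linalg.norm(R_E @ A, axis=0)      # all strictly positive under (18)
Lam = np.diag(1.0 / norms)                   # normalization matrix
x_norm = omp(R_E @ A @ Lam, z_hat, n_iter=2) # recover the normalized vector of (21)
x_rec = Lam @ x_norm                         # undo the normalization
assert np.allclose(x_rec, x, atol=1e-8)
```

The same code with the roles of A and B interchanged sketches the Corollary 6 setting, where X rather than E is known.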

Theorem 5 generalizes the results in [6, Ths. 5 and 9], obtained for the special case A = F_M and B = I_M, to pairs of general dictionaries and additionally shows that OMP delivers the correct solution provided that (18) is satisfied. It follows from (21) that other sparse-signal recovery algorithms, such as iterative thresholding-based algorithms [40], CoSaMP [41], or subspace pursuit [42], can be applied to recover x.⁴ Finally, we note that the idea of projecting the measurement outcome onto the orthogonal complement of the space spanned by the active columns of B, and investigating the effect on the RICs instead of the coherence parameter µ_a (as was done in Appendix C-C), was put forward in [7], [43] along with RIC-based recovery guarantees that apply to random matrices A and guarantee the recovery of x with high probability (with respect to A and irrespective of the locations of the sparse corruptions).

2) Recovery when X is known and E is unknown: A possible application scenario for this situation is the recovery of spectrally sparse signals with known spectral support that are impaired by impulse noise with unknown impulse locations. It is evident that this setup is formally equivalent to that discussed in Section IV-B1, with the roles of x and e interchanged. In particular, we may apply the projection matrix R_X = I_M − A_X A_X^† to the corrupted measurement outcome z to obtain the standard recovery problem ẑ = R_X B Λ̃ ê, where Λ̃ is a diagonal matrix with diagonal elements [Λ̃]_{l,l} = 1/‖R_X b_l‖_2. The corresponding unknown vector is given by ê ≜ Λ̃^{−1} e. The following corollary is a direct consequence of Theorem 5.

Corollary 6: Let z = A x + B e where X = supp(x) is known. If the numbers of nonzero entries in x and e, i.e., n_x = ‖x‖_0 and n_e = ‖e‖_0, satisfy

2 n_x n_e < f(n_x, 2 n_e)  (22)

then the unique solution of (P0) applied to ẑ = R_X B Λ̃ ê is given by ê = Λ̃^{−1} e. Furthermore, BP and OMP applied to ẑ = R_X B Λ̃ ê recover the unique (P0)-solution.
Once we have ê, the vector e can be obtained easily, since e = Λ̃ ê, and the nonzero entries of x are given by x_X = A_X^†(z − B e). Since (22) ensures that the columns of A_X are linearly independent, the pseudo-inverse A_X^† is guaranteed to exist. Note that tightness of the recovery condition (22) can be established analogously to the case of E known and X unknown (discussed in Section IV-B1).

C. Case III: Cardinality of E or X known

We next consider the case where neither X nor E is known, but knowledge of either ‖x‖_0 or ‖e‖_0 is available (prior to recovery). An application scenario for ‖x‖_0 unknown and ‖e‖_0 known would be the recovery of a sparse pulse-stream with unknown pulse locations from measurements that are corrupted by electric hum with unknown base frequency but known number of harmonics (e.g., determined by the base frequency of the hum and the acquisition bandwidth of the system under consideration). We state our main result for the case n_e = ‖e‖_0 known and n_x = ‖x‖_0 unknown. The case where n_x is known and n_e is unknown can be treated similarly.

Theorem 7: Let z = A x + B e, define n_x = ‖x‖_0 and n_e = ‖e‖_0, and assume that n_e is known. Consider the problem

(P0, n_e)  minimize ‖x̃‖_0 subject to A x̃ ∈ {z} + ∪_{E∈P} R(B_E)  (23)

where P = P_{n_e}({1, ..., N_b}) denotes the set of subsets of {1, ..., N_b} of cardinality less than or equal to n_e. The unique solution of (P0, n_e) applied to z = A x + B e is given by x if

4 n_x n_e < f(2 n_x, 2 n_e).  (24)

Proof: See Appendix D.

We emphasize that the problem (P0, n_e) exhibits prohibitive (concretely, combinatorial) computational complexity, in general. Unfortunately, replacing the ℓ0-norm of x̃ in the minimization in (23) by the ℓ1-norm does not lead to a computationally tractable alternative either, as the constraint A x̃ ∈ {z} + ∪_{E∈P} R(B_E) specifies a non-convex set, in general.

⁴Finding analytical recovery guarantees for these algorithms remains an interesting open problem.
Nevertheless, the recovery threshold in (24) is interesting as it completes the picture of the impact of knowledge about the support sets of x and e on the recovery thresholds. We refer to Section V-A for a detailed discussion of this matter. Note, though, that greedy recovery algorithms, such as OMP [13], [24], [25], CoSaMP [41], or subspace pursuit [42], can be modified to incorporate prior knowledge of the individual sparsity levels of x and/or e. Analytical recovery guarantees corresponding to the resulting modified algorithms do not seem to be available. We finally note that tightness of (24) can be established for A = F_M and B = I_M. Specifically, consider a pair x, e and an alternative pair x', e' constructed as detailed in [34]. It can be shown that both x and x' are in the admissible set of (P0, n_e) in (23), satisfy ‖x‖_0 = ‖x'‖_0, and lead to the same measurement outcome z. Therefore, (P0, n_e) cannot distinguish between x and x' (we refer to [34] for details).

D. Case IV: No knowledge about the support sets

Finally, we consider the case of no knowledge (prior to recovery) about the support sets X and E. A corresponding application scenario would be the restoration of an audio signal (whose spectrum is sparse with unknown support set) that is corrupted by impulse noise, e.g., click or pop noise occurring at unknown locations. Another typical application can be found in the realm of signal separation, e.g., the decomposition of images into two distinct features, i.e., into a part that exhibits a sparse representation in the dictionary A and another part that exhibits a sparse representation in B. Decomposition of the image z then amounts to performing sparse-signal recovery based on z = A x + B e with no knowledge about the support sets X and E available prior to recovery. The individual image features are given by A x and B e.
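Since (P0, n_e) admits no tractable relaxation, the only generally valid solver is exhaustive search over candidate signal supports and noise supports E of cardinality at most n_e, checking whether z − A x̃ lies in R(B_E). The toy implementation below (our own sketch, feasible only for very small dimensions) makes this combinatorial nature explicit.

```python
import itertools
import numpy as np

def p0_ne(A, B, z, n_e, tol=1e-9):
    """Exhaustive toy solver for (P0, n_e): find a minimum-support x such that
    z - A x lies in range(B_E) for some |E| <= n_e. Exponential complexity."""
    M, Na = A.shape
    Nb = B.shape[1]
    for k in range(Na + 1):                              # candidate ||x||_0
        for S in itertools.combinations(range(Na), k):
            for m in range(n_e + 1):
                for E in itertools.combinations(range(Nb), m):
                    C = np.hstack([A[:, list(S)], B[:, list(E)]])
                    if C.shape[1] == 0:
                        if np.linalg.norm(z) < tol:
                            return np.zeros(Na), []
                        continue
                    w, *_ = np.linalg.lstsq(C, z, rcond=None)
                    if np.linalg.norm(z - C @ w) < tol:
                        x = np.zeros(Na)
                        x[list(S)] = w[:k]               # signal part of the fit
                        return x, list(E)
    return None
```

Even at this toy scale the search visits on the order of 2^{N_a} · N_b^{n_e} support pairs, which is why Theorem 7 is of theoretical rather than algorithmic interest.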

Recovery guarantees for this case follow from the results in [33]. Specifically, by rewriting (1) as z = D w as in (6), we can employ the recovery guarantees in [33], which are explicit in the coherence parameters µ_a and µ_b and the dictionary coherence µ_d of D. For the sake of completeness, we restate the following result from [33].

Theorem 8 ([33, Th. 2]): Let z = D w with w = [x^T e^T]^T and D = [A B] with the coherence parameters µ_a, µ_b and the dictionary coherence µ_d as defined in (8). A sufficient condition for the vector w to be the unique solution of (P0) applied to z = D w is

n_x + n_e = n_w < f(x̂) + x̂  (25)

where

f(x) = [(1 + µ_a)(1 + µ_b) − 2 x µ_b (1 + µ_a)] / [2(2 x (µ_d² − µ_a µ_b) + µ_a (1 + µ_b))]

and x̂ = min{x_b, x_s}. Furthermore, x_b = (1 + µ_b)/(2(µ_b + µ_d)) and

x_s = 1/(2 µ_d), if µ_a = µ_b = µ_d,
x_s = [µ_d √((1 + µ_a)(1 + µ_b)) − µ_a (1 + µ_b)] / (2(µ_d² − µ_a µ_b)), otherwise.

Obviously, once the vector w has been recovered, we can extract x and e. The following theorem, originally stated in [33], guarantees that BP and OMP deliver the unique solution of (P0) applied to z = D w; the associated recovery threshold, as shown in [33], is only slightly more restrictive than that for (P0) in (25).

Theorem 9 ([33, Cor. 4]): A sufficient condition for BP and OMP to deliver the unique solution of (P0) applied to z = D w is given by

n_w < δ(2ɛ − (µ_d + 3µ_b)) / (µ_d − µ_b)², if µ_b < µ_d and κ(µ_d, µ_b) > 1,
n_w < (1 + 2µ_d + 3µ_b − 2µ_d δ) / (µ_d + µ_b), otherwise  (26)

with n_w = n_x + n_e and

κ(µ_d, µ_b) = [2δ µ_d (µ_b + 3µ_d + 2ɛ) − µ_d µ_b (δ + µ_d)] / (µ_d − µ_b)²

where δ = 1 + µ_b and ɛ = √(µ_d(µ_b + µ_d)).

We emphasize that both thresholds (25) and (26) are more restrictive than those in (15), (18), (22), and (24) (see also Section V-A), which is consistent with the intuition that additional knowledge about the support sets X and E should lead to higher recovery thresholds. Note that tightness of (25) and (26) was established before in [44] and [33], respectively. V.
DISCUSSION OF THE RECOVERY GUARANTEES

The aim of this section is to provide an interpretation of the recovery guarantees found in Section IV. Specifically, we discuss the impact of support-set knowledge on the recovery thresholds we found, and we point out limitations of our results.

A. Factor of two in the recovery thresholds

Comparing the recovery thresholds (15), (18), (22), and (24) (Cases I-III), we observe that the price to be paid for not knowing the support set X or E is a reduction of the recovery threshold by a factor of two (note that in Case III, both X and E are unknown, but the cardinality of either X or E is known). For example, consider the recovery thresholds (15) and (18). For given n_e ∈ [0, 1 + 1/µ_b], solving (15) for n_x yields

n_x < (1 + µ_a)(1 − µ_b(n_e − 1)) / [n_e(µ² − µ_a µ_b) + µ_a(1 + µ_b)].

Similarly, still assuming n_e ∈ [0, 1 + 1/µ_b] and solving (18) for n_x, we get

n_x < (1/2) · (1 + µ_a)(1 − µ_b(n_e − 1)) / [n_e(µ² − µ_a µ_b) + µ_a(1 + µ_b)].

Hence, knowledge of X prior to recovery allows for the recovery of a signal with twice as many nonzero entries in x compared to the case where X is not known. This factor-of-two penalty has the same roots as the well-known factor-of-two penalty in spectrum-blind sampling [45]-[47]. Note that the same factor-of-two penalty can be inferred from the RIC-based recovery guarantees in [15], [39], when comparing the recovery threshold specified in [39, Th. 1] for signals where partial support-set knowledge is available (prior to recovery) to that given in [15, Th. 1.1], which does not assume prior support-set knowledge.

We illustrate the factor-of-two penalty in Figs. 1 and 2, where the recovery thresholds (15), (18), (22), (24), and (26) are shown. In Fig. 1, we consider the case µ_a = µ_b = 0 and µ = 1/√64. We can see that for X and E known the threshold evaluates to n_x n_e < 64. When only X or E is known we have n_x n_e < 32, and finally, in the case where only n_e is known, we get n_x n_e < 16.
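The factor-of-two behavior can be checked numerically by evaluating the two solved inequalities above. The helper below is a direct transcription of those expressions (the function names are ours); in the setting of Fig. 1 (µ_a = µ_b = 0, µ = 1/√64) it reproduces the thresholds n_x n_e < 64 and n_x n_e < 32.

```python
def max_nx_case1(n_e, mu, mu_a, mu_b):
    """Largest admissible n_x from the Case-I condition solved for n_x."""
    num = (1 + mu_a) * (1 - mu_b * (n_e - 1))
    den = n_e * (mu**2 - mu_a * mu_b) + mu_a * (1 + mu_b)
    return num / den

def max_nx_case2(n_e, mu, mu_a, mu_b):
    """Largest admissible n_x when only E is known: half the Case-I bound."""
    return 0.5 * max_nx_case1(n_e, mu, mu_a, mu_b)

mu = 1 / 64 ** 0.5                       # Hadamard-identity setting of Fig. 1
bounds = [(n_e * max_nx_case1(n_e, mu, 0.0, 0.0),
           n_e * max_nx_case2(n_e, mu, 0.0, 0.0)) for n_e in (1, 2, 4, 8)]
```

For every n_e the products n_x n_e evaluate to the constants 64 and 32, which is why the threshold curves in Fig. 1 are hyperbolas in the (n_x, n_e) plane.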
Note furthermore that in Case IV, where no knowledge about the support sets is available, the recovery threshold is more restrictive than in the case where n_e is known. In Fig. 2, we show the recovery thresholds for µ_a = 0.158 and similar values of µ_b and µ. We see that all threshold curves are straight lines. This behavior can be explained by noting that (in contrast to the assumptions underlying Fig. 1) the dictionaries A and B have µ_a, µ_b > 0, and the corresponding recovery thresholds are essentially dominated by the numerator of the RHS expressions in (15), (18), (22), and (24), which depends on both n_x and n_e. More concretely, if µ_a = µ_b = µ = µ_d > 0, then the recovery threshold for Case II (where the support set E is known) becomes

2 n_x + n_e < 1 + µ_d^{−1}  (27)

which reflects the behavior observed in Fig. 2.

B. The square-root bottleneck

The recovery thresholds presented in Section IV hold for all signal and noise realizations x and e and for all dictionary pairs (with given coherence parameters). However, as is well known in the sparse-signal recovery literature, coherence-based recovery guarantees are (in contrast to RIC-based recovery guarantees) fundamentally limited by the so-called

square-root bottleneck [48]. More specifically, in the noiseless case (i.e., for e = 0_{N_b}), the threshold (4) states that recovery can be guaranteed only for up to √M nonzero entries in x. Put differently, for a fixed number of nonzero entries n_x in x, i.e., for a fixed sparsity level, the number of measurements M required to recover x through (P0), BP, or OMP is on the order of n_x². As in the classical sparse-signal recovery literature, the square-root bottleneck can be broken by performing a probabilistic analysis [48]. This line of work, albeit interesting, is outside the scope of the present paper and is further investigated in [33], [49], [50].

C. Trade-off between n_x and n_e

We next illustrate a trade-off between the sparsity levels of x and e. Following the procedure outlined in [1], [51], we construct a dictionary A consisting of several ONBs and a dictionary B consisting of further ONBs such that µ_a = µ_b = µ = 1/√M, where the total number of ONBs in A and B is at most M + 1, with M = p^k, p prime, and k a positive integer. Now, let us assume that the error sparsity level scales according to n_e = α√M for some 0 ≤ α ≤ 1. For the case where only E is known but X is unknown (Case II), we find from (18) that any signal x with (order-wise) (1 − α)√M/2 nonzero entries (ignoring terms of order less than √M) can be reconstructed. Hence, there is a trade-off between the sparsity levels of x and e (here quantified through the parameter α), and both sparsity levels scale with √M.

Fig. 1. Recovery thresholds (15), (18), (22), (24), and (26) for µ_a = µ_b = 0 and µ = 1/√64. Note that the curves for only E known and only X known are on top of each other. (Axes: error sparsity n_e vs. signal sparsity n_x.)

Fig. 2. Recovery thresholds (15), (18), (22), (24), and (26) for µ_a = 0.158 and similar values of µ_b and µ. (Axes: error sparsity n_e vs. signal sparsity n_x.)

VI. NUMERICAL RESULTS

We first report simulation results and compare them to the corresponding analytical results in the paper.
We will find that even though the analytical thresholds are pessimistic in general, they do reflect the numerically observed recovery behavior correctly. In particular, we will see that the factor-of-two penalty discussed in Section V-A can also be observed in the numerical results. We then demonstrate, through a simple inpainting example, that perfect signal recovery in the presence of sparse errors is possible even if the corruptions are significant (in terms of the ℓ2-norm of the sparse noise vector e). In all numerical results, OMP is performed with a predetermined number of iterations [13], [24], [25]; i.e., for Case II and Case IV, we set the number of iterations to n_x and n_x + n_e, respectively. To implement BP, we employ SPGL1 [52], [53].

A. Impact of support-set knowledge on recovery thresholds

We first compare simulation results to the recovery thresholds (15), (18), (22), and (26). For a given pair of dictionaries A and B we generate signal vectors x and error vectors e as follows: We first fix n_x and n_e; then, the support sets of the n_x-sparse vector x and the n_e-sparse vector e are chosen uniformly at random among all possible support sets of cardinality n_x and n_e, respectively. Once the support sets have been chosen, we generate the nonzero entries of x and e by drawing from i.i.d. zero-mean, unit-variance Gaussian random variables. For each pair of support-set cardinalities n_x and n_e, we perform Monte-Carlo trials and declare success of recovery whenever the recovered vector x̂ satisfies

‖x̂ − x‖_2 < 10^{−3} ‖x‖_2.  (28)

We plot the 50% success-rate contour, i.e., the border between the region of pairs (n_x, n_e) for which (28) is satisfied in at least 50% of the trials and the region where (28) is satisfied in less than 50% of the trials. The recovered vector x̂ is obtained as follows:

Case I: When X and E are both known, we perform recovery according to (14).
Case II: When either only E or only X is known, we apply BP and OMP using the modified dictionary as detailed in Theorem 5 and Corollary 6, respectively.

Case IV: When neither X nor E is known, we apply BP and OMP to the concatenated dictionary D = [A B] as described in Theorem 9.
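The Monte-Carlo protocol above is easy to reproduce. The sketch below implements one trial of Case I (both supports known, recovery by least squares on the active columns of A and B, success declared when the relative recovery error is below 10^{-3}) for the Hadamard-identity pair; the Sylvester construction of the Hadamard basis and the trial count are our own choices, not the paper's exact code.

```python
import numpy as np

def hadamard_onb(M):
    """Sylvester construction of the orthonormal Hadamard basis; M a power of 2."""
    H = np.array([[1.0]])
    while H.shape[0] < M:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(M)

def trial_case1(A, B, n_x, n_e, rng, tol=1e-3):
    """One Case-I trial: random supports, Gaussian entries, least-squares recovery."""
    Na, Nb = A.shape[1], B.shape[1]
    X = rng.choice(Na, size=n_x, replace=False)
    E = rng.choice(Nb, size=n_e, replace=False)
    x = np.zeros(Na); x[X] = rng.standard_normal(n_x)
    e = np.zeros(Nb); e[E] = rng.standard_normal(n_e)
    z = A @ x + B @ e
    w, *_ = np.linalg.lstsq(np.hstack([A[:, X], B[:, E]]), z, rcond=None)
    x_hat = np.zeros(Na); x_hat[X] = w[:n_x]
    return np.linalg.norm(x_hat - x) < tol * np.linalg.norm(x)

M = 64
A, B = hadamard_onb(M), np.eye(M)
rng = np.random.default_rng(7)
rate = np.mean([trial_case1(A, B, 8, 8, rng) for _ in range(200)])
```

Sweeping n_x and n_e and thresholding the empirical success rate at 50% yields contours like those shown in Fig. 3.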

Fig. 3. Impact of support-set knowledge on the 50% success-rate contour of OMP and BP for the Hadamard-identity pair. (Axes: error sparsity n_e vs. signal sparsity n_x.)

Fig. 4. Impact of support-set knowledge on the 50% success-rate contour of OMP and BP performing recovery in pairs of approximate ETFs, each of dimension 64 × 80. (Axes: error sparsity n_e vs. signal sparsity n_x.)

Note that for Case III, i.e., the case where the cardinality n_e of the support set E is known, we (as pointed out in Section IV-C) only have uniqueness results but no analytical recovery guarantees, neither for BP nor for greedy recovery algorithms that make use of the separate knowledge of n_x or n_e (whereas, e.g., standard OMP makes use of knowledge of n_x + n_e, rather than knowledge of n_x and n_e individually). This case is, therefore, not considered in the simulation results below.

1) Recovery performance for the Hadamard-identity pair using BP and OMP: We take M = 64, let A be the Hadamard ONB [54], and set B = I_M, which results in µ_a = µ_b = 0 and µ = 1/√M. Fig. 3 shows 50% success-rate contours under different assumptions of support-set knowledge. For perfect knowledge of X and E, we observe that the 50% success-rate contour is at about n_x + n_e ≈ M, which is significantly better than the sufficient condition n_x n_e < M (guaranteeing perfect recovery) provided in (15).⁵ When either only X or only E is known, the recovery performance is essentially independent of whether X or E is known. This is also reflected by the analytical thresholds (18) and (22) when evaluated for µ_a = µ_b = 0 (see also Fig. 1). Furthermore, OMP is seen to outperform BP. When neither X nor E is known, OMP again outperforms BP. It is interesting to see that the factor-of-two penalty discussed in Section V-A is reflected in Fig. 3 (for n_x = n_e) between Cases I and II.
Specifically, we can observe that for full support-set knowledge (Case I) the 50% success-rate is achieved at n_x = n_e ≈ 31. If only X or only E is known (Case II), OMP achieves 50% success-rate at n_x = n_e ≈ 23, demonstrating a factor-of-two penalty in the product n_x n_e, since 2 · 23² ≈ 31². Note that the results from BP in Fig. 3 do not seem to reflect the factor-of-two penalty. For lack of an efficient recovery algorithm (making use of knowledge of n_e) we do not show numerical results for Case III.

2) Impact of µ_a, µ_b > 0: We take M = 64 and generate the dictionaries A and B as follows. Using the alternating projection method described in [56], we generate an approximate equiangular tight frame (ETF) for R^M consisting of 160 columns. We split this frame into two sets of 80 elements (columns) each and organize them in the matrices A and B; the corresponding coherence parameters are given by µ_a ≈ 0.158, with µ_b and µ taking on similar values. Fig. 4 shows the 50% success-rate contour under four different assumptions of support-set knowledge. In the case where either only X or only E is known and in the case where X and E are unknown, we use OMP and BP for recovery. It is interesting to note that the graphs for the cases where only X or only E is known are symmetric with respect to the line n_x = n_e. This symmetry is also reflected in the analytical thresholds (18) and (22) (see also Fig. 2 and the discussion in Section V-A). We finally note that in all cases considered above, the numerical results show that recovery is possible for significantly higher sparsity levels n_x and n_e than indicated by the corresponding analytical thresholds (15), (18), (22), and (26) (see also Figs. 1 and 2).
The underlying reasons are i) the deterministic nature of the results, i.e., the recovery guarantees in (15), (18), (22), and (26) are valid for all dictionary pairs (with given coherence parameters) and all signal and noise realizations (with given sparsity levels), and ii) the fact that we plot the 50% success-rate contour, whereas the analytical results guarantee perfect recovery in 100% of the cases.

B. Inpainting example

In transform coding one is typically interested in maximally sparse representations of a given signal to be encoded [57]. In our setting, this would mean that the dictionary A should be chosen so that it leads to maximally sparse representations of a given family of signals. We next demonstrate, however, that in the presence of structured noise, the signal dictionary A should additionally be incoherent to the noise dictionary B.

⁵For A = F_M and B = I_M it was proven in [55] that a set of columns chosen randomly from both A and B is linearly independent (with high probability) given that the total number of chosen columns, i.e., n_x + n_e here, does not exceed a constant proportion of M.

Fig. 5. Recovery results using the DCT basis for the signal dictionary and the identity basis for the noise dictionary: (a) corrupted image (MSE = −11.2 dB); (b) recovery when X and E are known; (c) recovery when only E is known; (d) recovery for X and E unknown (MSE = −13.0 dB). (Picture origin: ETH Zürich/Esther Ramseier.)

Fig. 6. Recovery results for the signal dictionary given by the Haar wavelet basis and the noise dictionary given by the identity basis: (a) corrupted image (MSE = −11.2 dB); (b) recovery when both X and E are known (MSE = −7.1 dB); (c) recovery when only E is known (MSE = −7.1 dB); (d) recovery for X and E unknown (MSE = −1.0 dB). (Picture origin: ETH Zürich/Esther Ramseier.)

This extra requirement can lead to very different criteria for designing transform bases (frames). To illustrate this point, and to show that perfect recovery can be guaranteed even when the ℓ2-norm of the noise term B e is large, we consider the recovery of a sparsely corrupted grayscale image of the main building of ETH Zurich. The dictionary A is taken to be either the two-dimensional discrete cosine transform (DCT) or the Haar wavelet decomposed on three octaves [58]. We first sparsify the image by retaining the 15% largest entries of the image's representation x in A. We then corrupt (by overwriting with text) 18.8% of the pixels in the sparsified image by setting them to the brightest grayscale value; this means that the errors are sparse in B = I_M and that the noise is structured but may have large ℓ2-norm. Image recovery is performed according to (14) if X and E are known. (Note, however, that knowledge of X is usually not available in inpainting applications.)
We use BP when only E is known and when neither X nor E is known. The recovery results are evaluated by computing the mean-square error (MSE) between the sparsified image and its recovered version. Figs. 5 and 6 show the corresponding results. As expected, the MSE increases as the amount of knowledge about the support sets decreases. More interestingly, we note that even though Haar wavelets often yield a smaller approximation error in classical transform coding compared to the DCT, here the wavelet transform performs worse than the DCT. This behavior is due to the fact that sparsity is not the only factor determining the performance of a transform-coding basis (or frame) in the presence of structured noise. Rather, the mutual coherence between the dictionary used to represent the signal and that used to represent the structured noise becomes highly relevant. Specifically, in the example at hand, we have µ = 1/2 for the Haar-wavelet and the identity basis, whereas µ is much smaller for the DCT and the identity basis. The dependence of the analytical thresholds (15), (18), (22), (24), and (26) on the mutual coherence µ explains the performance difference between the Haar wavelet basis and the DCT basis. An intuitive explanation for this behavior is as follows: The Haar-wavelet basis contains only four nonzero entries in the columns associated with fine scales, which is reflected in the high mutual coherence (i.e., µ = 1/2) between the Haar-wavelet basis and the identity basis. Thus, when projecting onto the orthogonal complement of (I_M)_E, it is likely that all nonzero entries of such columns are deleted, resulting in columns of all zeros. Recovery of the corresponding nonzero entries of x is thus not possible. In summary, we see that the choice of the transform basis (frame) for a sparsely corrupted signal should not only aim at sparsifying the signal as much as possible but should also take into account the mutual coherence between the signal dictionary and the noise dictionary.
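The inpainting experiment can be miniaturized to a 1-D signal that is sparse in an orthonormal DCT basis and corrupted at known sample positions (with B = I_M, applying R_E simply discards the corrupted samples). The dimensions, sparsity levels, and the small OMP loop below are our own assumptions for the sketch, not the paper's setup.

```python
import numpy as np

def dct_onb(N):
    """Orthonormal DCT-II basis for R^N; column l is the l-th cosine atom."""
    n, k = np.meshgrid(np.arange(N), np.arange(N))
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (n + 0.5) * k / N)
    C[0] /= np.sqrt(2.0)
    return C.T

N = 64
A = dct_onb(N)
x = np.zeros(N); x[[2, 7, 11]] = [5.0, -4.0, 3.0]    # sparse DCT spectrum
s = A @ x                                             # clean signal
rng = np.random.default_rng(4)
E = rng.choice(N, size=6, replace=False)              # known corrupted samples
z = s.copy(); z[E] = z.max() + 1.0                    # overwrite, e.g. saturation

keep = np.setdiff1d(np.arange(N), E)                  # R_E for B = I: drop rows E
Ak = A[keep]
r, supp = z[keep].copy(), []
for _ in range(3):                                    # tiny OMP on the reduced system
    supp.append(int(np.argmax(np.abs(Ak.T @ r) / np.linalg.norm(Ak, axis=0))))
    w, *_ = np.linalg.lstsq(Ak[:, supp], z[keep], rcond=None)
    r = z[keep] - Ak[:, supp] @ w
x_hat = np.zeros(N); x_hat[supp] = w
```

With the three dominant atoms identified, x_hat matches x, and A @ x_hat restores the overwritten samples exactly; this is the 1-D analogue of removing the text overlay in Fig. 5.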


Using EM To Estimate A Probablity Density With A Mixture Of Gaussians Using EM To Estiate A Probablity Density With A Mixture Of Gaussians Aaron A. D Souza adsouza@usc.edu Introduction The proble we are trying to address in this note is siple. Given a set of data points

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information

Robust Spectral Compressed Sensing via Structured Matrix Completion Yuxin Chen, Student Member, IEEE, and Yuejie Chi, Member, IEEE

Robust Spectral Compressed Sensing via Structured Matrix Completion Yuxin Chen, Student Member, IEEE, and Yuejie Chi, Member, IEEE 6576 IEEE TRANSACTIONS ON INORMATION THEORY, VOL 60, NO 0, OCTOBER 04 Robust Spectral Copressed Sensing via Structured Matrix Copletion Yuxin Chen, Student Meber, IEEE, and Yuejie Chi, Meber, IEEE Abstract

More information

Chaotic Coupled Map Lattices

Chaotic Coupled Map Lattices Chaotic Coupled Map Lattices Author: Dustin Keys Advisors: Dr. Robert Indik, Dr. Kevin Lin 1 Introduction When a syste of chaotic aps is coupled in a way that allows the to share inforation about each

More information

The Transactional Nature of Quantum Information

The Transactional Nature of Quantum Information The Transactional Nature of Quantu Inforation Subhash Kak Departent of Coputer Science Oklahoa State University Stillwater, OK 7478 ABSTRACT Inforation, in its counications sense, is a transactional property.

More information

Optimal Jamming Over Additive Noise: Vector Source-Channel Case

Optimal Jamming Over Additive Noise: Vector Source-Channel Case Fifty-first Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 2-3, 2013 Optial Jaing Over Additive Noise: Vector Source-Channel Case Erah Akyol and Kenneth Rose Abstract This paper

More information

Non-Parametric Non-Line-of-Sight Identification 1

Non-Parametric Non-Line-of-Sight Identification 1 Non-Paraetric Non-Line-of-Sight Identification Sinan Gezici, Hisashi Kobayashi and H. Vincent Poor Departent of Electrical Engineering School of Engineering and Applied Science Princeton University, Princeton,

More information

Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements

Compressive Distilled Sensing: Sparse Recovery Using Adaptivity in Compressive Measurements 1 Copressive Distilled Sensing: Sparse Recovery Using Adaptivity in Copressive Measureents Jarvis D. Haupt 1 Richard G. Baraniuk 1 Rui M. Castro 2 and Robert D. Nowak 3 1 Dept. of Electrical and Coputer

More information

Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Time-Varying Jamming Links

Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Time-Varying Jamming Links Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Tie-Varying Jaing Links Jun Kurihara KDDI R&D Laboratories, Inc 2 5 Ohara, Fujiino, Saitaa, 356 8502 Japan Eail: kurihara@kddilabsjp

More information

Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

Topic 5a Introduction to Curve Fitting & Linear Regression

Topic 5a Introduction to Curve Fitting & Linear Regression /7/08 Course Instructor Dr. Rayond C. Rup Oice: A 337 Phone: (95) 747 6958 E ail: rcrup@utep.edu opic 5a Introduction to Curve Fitting & Linear Regression EE 4386/530 Coputational ethods in EE Outline

More information

On Conditions for Linearity of Optimal Estimation

On Conditions for Linearity of Optimal Estimation On Conditions for Linearity of Optial Estiation Erah Akyol, Kuar Viswanatha and Kenneth Rose {eakyol, kuar, rose}@ece.ucsb.edu Departent of Electrical and Coputer Engineering University of California at

More information

Least Squares Fitting of Data

Least Squares Fitting of Data Least Squares Fitting of Data David Eberly, Geoetric Tools, Redond WA 98052 https://www.geoetrictools.co/ This work is licensed under the Creative Coons Attribution 4.0 International License. To view a

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

1 Bounding the Margin

1 Bounding the Margin COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #12 Scribe: Jian Min Si March 14, 2013 1 Bounding the Margin We are continuing the proof of a bound on the generalization error of AdaBoost

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

Birthday Paradox Calculations and Approximation

Birthday Paradox Calculations and Approximation Birthday Paradox Calculations and Approxiation Joshua E. Hill InfoGard Laboratories -March- v. Birthday Proble In the birthday proble, we have a group of n randoly selected people. If we assue that birthdays

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding

Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding IEEE TRANSACTIONS ON INFORMATION THEORY (SUBMITTED PAPER) 1 Design of Spatially Coupled LDPC Codes over GF(q) for Windowed Decoding Lai Wei, Student Meber, IEEE, David G. M. Mitchell, Meber, IEEE, Thoas

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

Lecture 9 November 23, 2015

Lecture 9 November 23, 2015 CSC244: Discrepancy Theory in Coputer Science Fall 25 Aleksandar Nikolov Lecture 9 Noveber 23, 25 Scribe: Nick Spooner Properties of γ 2 Recall that γ 2 (A) is defined for A R n as follows: γ 2 (A) = in{r(u)

More information

Multi-Scale/Multi-Resolution: Wavelet Transform

Multi-Scale/Multi-Resolution: Wavelet Transform Multi-Scale/Multi-Resolution: Wavelet Transfor Proble with Fourier Fourier analysis -- breaks down a signal into constituent sinusoids of different frequencies. A serious drawback in transforing to the

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition Upper bound on false alar rate for landine detection and classification using syntactic pattern recognition Ahed O. Nasif, Brian L. Mark, Kenneth J. Hintz, and Nathalia Peixoto Dept. of Electrical and

More information

Interactive Markov Models of Evolutionary Algorithms

Interactive Markov Models of Evolutionary Algorithms Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary

More information

Detection and Estimation Theory

Detection and Estimation Theory ESE 54 Detection and Estiation Theory Joseph A. O Sullivan Sauel C. Sachs Professor Electronic Systes and Signals Research Laboratory Electrical and Systes Engineering Washington University 11 Urbauer

More information

Signal Recovery, Uncertainty Relations, and Minkowski Dimension

Signal Recovery, Uncertainty Relations, and Minkowski Dimension Signal Recovery, Uncertainty Relations, and Minkowski Dimension Helmut Bőlcskei ETH Zurich December 2013 Joint work with C. Aubel, P. Kuppinger, G. Pope, E. Riegler, D. Stotz, and C. Studer Aim of this

More information

SPECTRUM sensing is a core concept of cognitive radio

SPECTRUM sensing is a core concept of cognitive radio World Acadey of Science, Engineering and Technology International Journal of Electronics and Counication Engineering Vol:6, o:2, 202 Efficient Detection Using Sequential Probability Ratio Test in Mobile

More information

On Constant Power Water-filling

On Constant Power Water-filling On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives

More information

Randomized Accuracy-Aware Program Transformations For Efficient Approximate Computations

Randomized Accuracy-Aware Program Transformations For Efficient Approximate Computations Randoized Accuracy-Aware Progra Transforations For Efficient Approxiate Coputations Zeyuan Allen Zhu Sasa Misailovic Jonathan A. Kelner Martin Rinard MIT CSAIL zeyuan@csail.it.edu isailo@it.edu kelner@it.edu

More information

The linear sampling method and the MUSIC algorithm

The linear sampling method and the MUSIC algorithm INSTITUTE OF PHYSICS PUBLISHING INVERSE PROBLEMS Inverse Probles 17 (2001) 591 595 www.iop.org/journals/ip PII: S0266-5611(01)16989-3 The linear sapling ethod and the MUSIC algorith Margaret Cheney Departent

More information

A Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness

A Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness A Note on Scheduling Tall/Sall Multiprocessor Tasks with Unit Processing Tie to Miniize Maxiu Tardiness Philippe Baptiste and Baruch Schieber IBM T.J. Watson Research Center P.O. Box 218, Yorktown Heights,

More information

Fixed-to-Variable Length Distribution Matching

Fixed-to-Variable Length Distribution Matching Fixed-to-Variable Length Distribution Matching Rana Ali Ajad and Georg Böcherer Institute for Counications Engineering Technische Universität München, Gerany Eail: raa2463@gail.co,georg.boecherer@tu.de

More information

OPTIMIZATION in multi-agent networks has attracted

OPTIMIZATION in multi-agent networks has attracted Distributed constrained optiization and consensus in uncertain networks via proxial iniization Kostas Margellos, Alessandro Falsone, Sione Garatti and Maria Prandini arxiv:603.039v3 [ath.oc] 3 May 07 Abstract

More information

Physically Based Modeling CS Notes Spring 1997 Particle Collision and Contact

Physically Based Modeling CS Notes Spring 1997 Particle Collision and Contact Physically Based Modeling CS 15-863 Notes Spring 1997 Particle Collision and Contact 1 Collisions with Springs Suppose we wanted to ipleent a particle siulator with a floor : a solid horizontal plane which

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

A method to determine relative stroke detection efficiencies from multiplicity distributions

A method to determine relative stroke detection efficiencies from multiplicity distributions A ethod to deterine relative stroke detection eiciencies ro ultiplicity distributions Schulz W. and Cuins K. 2. Austrian Lightning Detection and Inoration Syste (ALDIS), Kahlenberger Str.2A, 90 Vienna,

More information

Testing Properties of Collections of Distributions

Testing Properties of Collections of Distributions Testing Properties of Collections of Distributions Reut Levi Dana Ron Ronitt Rubinfeld April 9, 0 Abstract We propose a fraework for studying property testing of collections of distributions, where the

More information

arxiv: v1 [cs.ds] 17 Mar 2016

arxiv: v1 [cs.ds] 17 Mar 2016 Tight Bounds for Single-Pass Streaing Coplexity of the Set Cover Proble Sepehr Assadi Sanjeev Khanna Yang Li Abstract arxiv:1603.05715v1 [cs.ds] 17 Mar 2016 We resolve the space coplexity of single-pass

More information

The Weierstrass Approximation Theorem

The Weierstrass Approximation Theorem 36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined

More information

The proofs of Theorem 1-3 are along the lines of Wied and Galeano (2013).

The proofs of Theorem 1-3 are along the lines of Wied and Galeano (2013). A Appendix: Proofs The proofs of Theore 1-3 are along the lines of Wied and Galeano (2013) Proof of Theore 1 Let D[d 1, d 2 ] be the space of càdlàg functions on the interval [d 1, d 2 ] equipped with

More information

Highly Robust Error Correction by Convex Programming

Highly Robust Error Correction by Convex Programming Highly Robust Error Correction by Convex Prograing Eanuel J. Candès and Paige A. Randall Applied and Coputational Matheatics, Caltech, Pasadena, CA 9115 Noveber 6; Revised Noveber 7 Abstract This paper

More information

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm Acta Polytechnica Hungarica Vol., No., 04 Sybolic Analysis as Universal Tool for Deriving Properties of Non-linear Algoriths Case study of EM Algorith Vladiir Mladenović, Miroslav Lutovac, Dana Porrat

More information

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all

2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all Lecture 6 Introduction to kinetic theory of plasa waves Introduction to kinetic theory So far we have been odeling plasa dynaics using fluid equations. The assuption has been that the pressure can be either

More information

Generalized Queries on Probabilistic Context-Free Grammars

Generalized Queries on Probabilistic Context-Free Grammars IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 20, NO. 1, JANUARY 1998 1 Generalized Queries on Probabilistic Context-Free Graars David V. Pynadath and Michael P. Wellan Abstract

More information

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation

On the Communication Complexity of Lipschitzian Optimization for the Coordinated Model of Computation journal of coplexity 6, 459473 (2000) doi:0.006jco.2000.0544, available online at http:www.idealibrary.co on On the Counication Coplexity of Lipschitzian Optiization for the Coordinated Model of Coputation

More information

Using a De-Convolution Window for Operating Modal Analysis

Using a De-Convolution Window for Operating Modal Analysis Using a De-Convolution Window for Operating Modal Analysis Brian Schwarz Vibrant Technology, Inc. Scotts Valley, CA Mark Richardson Vibrant Technology, Inc. Scotts Valley, CA Abstract Operating Modal Analysis

More information

Optimal Resource Allocation in Multicast Device-to-Device Communications Underlaying LTE Networks

Optimal Resource Allocation in Multicast Device-to-Device Communications Underlaying LTE Networks 1 Optial Resource Allocation in Multicast Device-to-Device Counications Underlaying LTE Networks Hadi Meshgi 1, Dongei Zhao 1 and Rong Zheng 2 1 Departent of Electrical and Coputer Engineering, McMaster

More information

Fourier Series Summary (From Salivahanan et al, 2002)

Fourier Series Summary (From Salivahanan et al, 2002) Fourier Series Suary (Fro Salivahanan et al, ) A periodic continuous signal f(t), - < t

More information

arxiv: v3 [quant-ph] 18 Oct 2017

arxiv: v3 [quant-ph] 18 Oct 2017 Self-guaranteed easureent-based quantu coputation Masahito Hayashi 1,, and Michal Hajdušek, 1 Graduate School of Matheatics, Nagoya University, Furocho, Chikusa-ku, Nagoya 464-860, Japan Centre for Quantu

More information

Complexity reduction in low-delay Farrowstructure-based. filters utilizing linear-phase subfilters

Complexity reduction in low-delay Farrowstructure-based. filters utilizing linear-phase subfilters Coplexity reduction in low-delay Farrowstructure-based variable fractional delay FIR filters utilizing linear-phase subfilters Air Eghbali and Håkan Johansson Linköping University Post Print N.B.: When

More information

Approximation in Stochastic Scheduling: The Power of LP-Based Priority Policies

Approximation in Stochastic Scheduling: The Power of LP-Based Priority Policies Approxiation in Stochastic Scheduling: The Power of -Based Priority Policies Rolf Möhring, Andreas Schulz, Marc Uetz Setting (A P p stoch, r E( w and (B P p stoch E( w We will assue that the processing

More information

Efficient Filter Banks And Interpolators

Efficient Filter Banks And Interpolators Efficient Filter Banks And Interpolators A. G. DEMPSTER AND N. P. MURPHY Departent of Electronic Systes University of Westinster 115 New Cavendish St, London W1M 8JS United Kingdo Abstract: - Graphical

More information

An Extension to the Tactical Planning Model for a Job Shop: Continuous-Time Control

An Extension to the Tactical Planning Model for a Job Shop: Continuous-Time Control An Extension to the Tactical Planning Model for a Job Shop: Continuous-Tie Control Chee Chong. Teo, Rohit Bhatnagar, and Stephen C. Graves Singapore-MIT Alliance, Nanyang Technological Univ., and Massachusetts

More information

Correlated Bayesian Model Fusion: Efficient Performance Modeling of Large-Scale Tunable Analog/RF Integrated Circuits

Correlated Bayesian Model Fusion: Efficient Performance Modeling of Large-Scale Tunable Analog/RF Integrated Circuits Correlated Bayesian odel Fusion: Efficient Perforance odeling of Large-Scale unable Analog/RF Integrated Circuits Fa Wang and Xin Li ECE Departent, Carnegie ellon University, Pittsburgh, PA 53 {fwang,

More information

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE Proceedings of ICIPE rd International Conference on Inverse Probles in Engineering: Theory and Practice June -8, 999, Port Ludlow, Washington, USA : RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS

More information

A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance

More information

In this chapter, we consider several graph-theoretic and probabilistic models

In this chapter, we consider several graph-theoretic and probabilistic models THREE ONE GRAPH-THEORETIC AND STATISTICAL MODELS 3.1 INTRODUCTION In this chapter, we consider several graph-theoretic and probabilistic odels for a social network, which we do under different assuptions

More information

A1. Find all ordered pairs (a, b) of positive integers for which 1 a + 1 b = 3

A1. Find all ordered pairs (a, b) of positive integers for which 1 a + 1 b = 3 A. Find all ordered pairs a, b) of positive integers for which a + b = 3 08. Answer. The six ordered pairs are 009, 08), 08, 009), 009 337, 674) = 35043, 674), 009 346, 673) = 3584, 673), 674, 009 337)

More information

Effective joint probabilistic data association using maximum a posteriori estimates of target states

Effective joint probabilistic data association using maximum a posteriori estimates of target states Effective joint probabilistic data association using axiu a posteriori estiates of target states 1 Viji Paul Panakkal, 2 Rajbabu Velurugan 1 Central Research Laboratory, Bharat Electronics Ltd., Bangalore,

More information

Sequence Analysis, WS 14/15, D. Huson & R. Neher (this part by D. Huson) February 5,

Sequence Analysis, WS 14/15, D. Huson & R. Neher (this part by D. Huson) February 5, Sequence Analysis, WS 14/15, D. Huson & R. Neher (this part by D. Huson) February 5, 2015 31 11 Motif Finding Sources for this section: Rouchka, 1997, A Brief Overview of Gibbs Sapling. J. Buhler, M. Topa:

More information

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product.

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product. Coding Theory Massoud Malek Reed-Muller Codes An iportant class of linear block codes rich in algebraic and geoetric structure is the class of Reed-Muller codes, which includes the Extended Haing code.

More information

On Nonlinear Controllability of Homogeneous Systems Linear in Control

On Nonlinear Controllability of Homogeneous Systems Linear in Control IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 48, NO. 1, JANUARY 2003 139 On Nonlinear Controllability of Hoogeneous Systes Linear in Control Jaes Melody, Taer Başar, and Francesco Bullo Abstract This work

More information

Convolutional Codes. Lecture Notes 8: Trellis Codes. Example: K=3,M=2, rate 1/2 code. Figure 95: Convolutional Encoder

Convolutional Codes. Lecture Notes 8: Trellis Codes. Example: K=3,M=2, rate 1/2 code. Figure 95: Convolutional Encoder Convolutional Codes Lecture Notes 8: Trellis Codes In this lecture we discuss construction of signals via a trellis. That is, signals are constructed by labeling the branches of an infinite trellis with

More information

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY Physical and Matheatical Sciences 04,, p. 7 5 ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD M a t h e a t i c s Yu. A. HAKOPIAN, R. Z. HOVHANNISYAN

More information

Physics 139B Solutions to Homework Set 3 Fall 2009

Physics 139B Solutions to Homework Set 3 Fall 2009 Physics 139B Solutions to Hoework Set 3 Fall 009 1. Consider a particle of ass attached to a rigid assless rod of fixed length R whose other end is fixed at the origin. The rod is free to rotate about

More information

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels Extension of CSRSM for the Paraetric Study of the Face Stability of Pressurized Tunnels Guilhe Mollon 1, Daniel Dias 2, and Abdul-Haid Soubra 3, M.ASCE 1 LGCIE, INSA Lyon, Université de Lyon, Doaine scientifique

More information

Introduction to Machine Learning. Recitation 11

Introduction to Machine Learning. Recitation 11 Introduction to Machine Learning Lecturer: Regev Schweiger Recitation Fall Seester Scribe: Regev Schweiger. Kernel Ridge Regression We now take on the task of kernel-izing ridge regression. Let x,...,

More information