Weighted-ℓ1 minimization with multiple weighting sets


Weighted-ℓ1 minimization with multiple weighting sets

Hassan Mansour (a,b) and Özgür Yılmaz (a)

(a) Mathematics Department, University of British Columbia, Vancouver, BC, Canada
(b) Computer Science Department, University of British Columbia, Vancouver, BC, Canada

ABSTRACT

In this paper, we study the support recovery conditions of weighted ℓ1 minimization for signal reconstruction from compressed sensing measurements when multiple support estimate sets with different accuracies are available. We identify a class of signals for which the recovered vector from ℓ1 minimization provides an accurate support estimate. We then derive stability and robustness guarantees for the weighted ℓ1 minimization problem with more than one support estimate. We show that applying a smaller weight to support estimates that enjoy higher accuracy improves the recovery conditions compared with the case of a single support estimate and the case of standard, i.e., non-weighted, ℓ1 minimization. Our theoretical results are supported by numerical simulations on synthetic signals and real audio signals.

Keywords: compressed sensing, weighted ℓ1 minimization, partial support recovery

1. INTRODUCTION

A wide range of signal processing applications rely on the ability to reconstruct a signal from linear, and sometimes noisy, measurements. These applications include the acquisition and storage of audio, natural and seismic images, and video, all of which admit sparse or approximately sparse representations in appropriate transform domains. Compressed sensing has emerged as an effective paradigm for the acquisition of sparse signals from significantly fewer linear measurements than their ambient dimension [1-3].

Consider an arbitrary signal x ∈ R^N and let y ∈ R^n be a set of measurements given by y = Ax + e, where A is a known n × N measurement matrix and e denotes additive noise that satisfies ‖e‖_2 ≤ ε for some known ε ≥ 0. Compressed sensing theory states that it is possible to recover x from y (given A) even when n ≪ N, i.e., using very few measurements. When x is strictly sparse, i.e., when there are only k < n nonzero entries in x, and when e = 0, one may recover an estimate x* of the signal x as the solution of the constrained minimization problem

    minimize_{u ∈ R^N} ‖u‖_0  subject to  Au = y.    (1)

In fact, using (1), the recovery is exact when n ≥ 2k and A is in general position [4]. However, ℓ0 minimization is a combinatorial problem and quickly becomes intractable as the dimensions increase. Instead, the convex relaxation

    minimize_{u ∈ R^N} ‖u‖_1  subject to  ‖Au − y‖_2 ≤ ε    (2)

can be used to recover the estimate x*. Candès, Romberg and Tao [2] and Donoho [1] show that if n ≳ k log(N/k), then ℓ1 minimization (2) can stably and robustly recover x from inaccurate and what appears to be incomplete measurements y = Ax + e, where, as before, A is an appropriately chosen n × N measurement matrix and ‖e‖_2 ≤ ε. Contrary to ℓ0 minimization, (2) is a convex program and can be solved efficiently. Consequently, it is possible to recover a stable and robust approximation of x by solving (2) instead of (1), at the cost of increasing the number of measurements taken.

Further author information: (Send correspondence to Hassan Mansour)
Hassan Mansour: E-mail: hassan@cs.ubc.ca; Özgür Yılmaz: E-mail: oyilmaz@math.ubc.ca
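To make the setup concrete, the following sketch recovers a sparse vector by solving problem (2). The paper's own experiments use SPGL1; CVXPY is used here purely as a readily available stand-in solver, and all dimensions and values are illustrative.

```python
# Minimal sketch of l1 recovery, problem (2); CVXPY stands in for SPGL1.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, n, k = 500, 100, 35                 # ambient dimension, measurements, sparsity
eps = 1e-3                             # noise bound ||e||_2 <= eps

x = np.zeros(N)                        # k-sparse ground truth
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

A = rng.standard_normal((n, N)) / np.sqrt(n)   # Gaussian measurement matrix
e = rng.standard_normal(n)
e *= eps / np.linalg.norm(e)           # scale noise so that ||e||_2 = eps
y = A @ x + e

u = cp.Variable(N)
prob = cp.Problem(cp.Minimize(cp.norm1(u)), [cp.norm(A @ u - y, 2) <= eps])
prob.solve()
x_hat = u.value
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```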

Several works in the literature have proposed alternative algorithms that attempt to bridge the gap between ℓ0 and ℓ1 minimization. For example, recovery from compressed sensing measurements using ℓp minimization with 0 < p < 1 has been shown to be stable and robust under weaker conditions than those of ℓ1 minimization [5-9]. However, the problem is non-convex, and even though various simple and efficient algorithms were proposed and observed to perform well empirically [7,10], so far only local convergence can be proved.

Another approach for improving the recovery performance of ℓ1 minimization is to incorporate prior knowledge regarding the support of the signal to be recovered. One way to accomplish this is to replace ℓ1 minimization in (2) with weighted ℓ1 minimization

    minimize_u ‖u‖_{1,w}  subject to  ‖Au − y‖_2 ≤ ε,    (3)

where w ∈ [0,1]^N and ‖u‖_{1,w} := Σ_i w_i |u_i| is the weighted ℓ1 norm. This approach has been studied by several groups [11-14] and, most recently, by the authors together with Saab and Friedlander [15]. In that work, we proved that, conditioned on the accuracy and relative size of the support estimate, weighted ℓ1 minimization is stable and robust under weaker conditions than those of standard ℓ1 minimization.

The works mentioned above mainly focus on a two-weight scenario: for x ∈ R^N, one is given a partition of {1,...,N} into two sets, say T̃ and T̃^c, where T̃ denotes the estimated support of the entries of x that are largest in magnitude. In this paper, we consider the more general case and study recovery conditions of weighted ℓ1 minimization when multiple support estimates with different accuracies are available. We first give a brief overview of compressed sensing and review our previous result on weighted ℓ1 minimization in Section 2. In Section 3, we prove that for a certain class of signals it is possible to estimate the support of their best k-term approximation using standard ℓ1 minimization. We then derive stability and robustness guarantees for weighted ℓ1 minimization which generalize our previous work to the case of two or more weighting sets. Finally, we present numerical experiments in Section 4 that verify our theoretical results.

2. COMPRESSED SENSING WITH PARTIAL SUPPORT INFORMATION

Consider an arbitrary signal x ∈ R^N and let x_k be its best k-term approximation, given by keeping the k largest-in-magnitude components of x and setting the remaining components to zero. Let T_0 = supp(x_k), where T_0 ⊆ {1,...,N} and |T_0| ≤ k. We wish to reconstruct the signal x from y = Ax + e, where A is a known n × N measurement matrix with n ≪ N, and e denotes the (unknown) measurement error that satisfies ‖e‖_2 ≤ ε for some known margin ε > 0. Also let the set T̃ ⊆ {1,...,N} be an estimate of the support T_0 of x_k.

2.1 Compressed sensing overview

It was shown in [2] that x can be stably and robustly recovered from the measurements y by solving the optimization problem (2) if the measurement matrix A has the restricted isometry property [16] (RIP).

Definition 1. The restricted isometry constant δ_k of a matrix A is the smallest number such that for all k-sparse vectors u,

    (1 − δ_k) ‖u‖_2^2 ≤ ‖Au‖_2^2 ≤ (1 + δ_k) ‖u‖_2^2.    (4)

The following theorem uses the RIP to provide conditions and bounds for stable and robust recovery of x by solving (2).

Theorem 2.1 (Candès, Romberg, Tao [2]). Suppose that x is an arbitrary vector in R^N, and let x_k be the best k-term approximation of x. Suppose that there exists an a ∈ (1/k)Z with a > 1 and

    δ_{ak} + a δ_{(1+a)k} < a − 1.    (5)

Then the solution x* to (2) obeys

    ‖x* − x‖_2 ≤ C_0 ε + C_1 k^{−1/2} ‖x − x_k‖_1.    (6)

Remark 1. The constants in Theorem 2.1 are explicitly given by

    C_0 = 2(1 + a^{−1/2}) / ( √(1 − δ_{(a+1)k}) − a^{−1/2} √(1 + δ_{ak}) ),
    C_1 = 2 a^{−1/2} ( √(1 − δ_{(a+1)k}) + √(1 + δ_{ak}) ) / ( √(1 − δ_{(a+1)k}) − a^{−1/2} √(1 + δ_{ak}) ).    (7)

Theorem 2.1 shows that the constrained ℓ1 minimization problem (2) recovers an approximation to x with an error that scales well with the noise level and the compressibility of x, provided (5) is satisfied. Moreover, if x is sufficiently sparse (i.e., x = x_k) and the measurement process is noise-free, then Theorem 2.1 guarantees exact recovery of x from y. At this point, we note that a slightly stronger sufficient condition than (5), which is easier to compare with the conditions we obtain in the next section, is given by

    δ_{(a+1)k} < (a − 1)/(a + 1).    (8)

2.2 Weighted ℓ1 minimization

The ℓ1 minimization problem (2) does not incorporate any prior information about the support of x. However, in many applications it may be possible to draw an estimate of the support of the signal, or an estimate of the indices of its largest coefficients. In our previous work [15], we considered the case where we are given a support estimate T̃ ⊆ {1,...,N} for x with a certain accuracy. We investigated the performance of weighted ℓ1 minimization, as described in (3), where the weights are assigned such that w_j = ω ∈ [0,1] whenever j ∈ T̃, and w_j = 1 otherwise. In particular, we proved that if the (partial) support estimate is at least 50% accurate, then weighted ℓ1 minimization with ω < 1 outperforms standard ℓ1 minimization in terms of accuracy, stability, and robustness.

Suppose that T̃ has cardinality |T̃| = ρk, where 0 ≤ ρ ≤ N/k is the relative size of the support estimate T̃. Furthermore, define the accuracy of T̃ via α := |T̃ ∩ T_0| / |T̃|, i.e., α is the fraction of T̃ inside T_0. As before, we wish to recover an arbitrary vector x ∈ R^N from noisy compressive measurements y = Ax + e, where e satisfies ‖e‖_2 ≤ ε. To that end, we consider the weighted ℓ1 minimization problem with the following choice of weights:

    minimize_z ‖z‖_{1,w}  subject to  ‖Az − y‖_2 ≤ ε,  with  w_i = { ω, i ∈ T̃;  1, i ∈ T̃^c }.    (9)

Here, 0 ≤ ω ≤ 1 and ‖z‖_{1,w} is as defined in (3). Figure 1 illustrates the relationship between the support T_0, the support estimate T̃, and the weight vector w.

Figure 1. Illustration of the signal x and weight vector w emphasizing the relationship between the sets T_0 and T̃.
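A sketch of the single-estimate problem (9): the weight vector equals 1 everywhere except on the support estimate T̃, where it equals ω. CVXPY again stands in for SPGL1, and the variables x, A, y, eps, support, k, and rng are reused from the sketch at the end of Section 1.

```python
import numpy as np
import cvxpy as cp

def weighted_l1_recover(A, y, eps, T_est, omega):
    """Problem (9): weight omega on the estimated support, weight 1 elsewhere."""
    N = A.shape[1]
    w = np.ones(N)
    w[T_est] = omega
    z = cp.Variable(N)
    prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, z))),
                      [cp.norm(A @ z - y, 2) <= eps])
    prob.solve()
    return z.value

# A 50%-accurate estimate of size k: half true indices, half wrong ones.
wrong = np.setdiff1d(np.arange(len(x)), support)
T_est = np.concatenate([support[: k // 2], rng.permutation(wrong)[: k - k // 2]])
x_w = weighted_l1_recover(A, y, eps, T_est, omega=0.3)
```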

Theorem 2.2 (FMSY [15]). Let x ∈ R^N and let x_k be its best k-term approximation, supported on T_0. Let T̃ ⊆ {1,...,N} be an arbitrary set, and define ρ and α as before such that |T̃| = ρk and |T̃ ∩ T_0| = αρk. Suppose that there exists an a ∈ (1/k)Z, with a ≥ (1 − α)ρ and a > 1, and that the measurement matrix A has RIP with

    δ_{ak} + (a/γ_ω^2) δ_{(a+1)k} < a/γ_ω^2 − 1,  where γ_ω := ω + (1 − ω) √(1 + ρ − 2αρ),    (10)

for some given 0 ≤ ω ≤ 1. Then the solution x* to (9) obeys

    ‖x* − x‖_2 ≤ C_0' ε + C_1' k^{−1/2} ( ω ‖x − x_k‖_1 + (1 − ω) ‖x_{T̃^c ∩ T_0^c}‖_1 ),    (11)

where C_0' and C_1' are well-behaved constants that depend on the measurement matrix A, the weight ω, and the parameters α and ρ.

Remark 2. The constants C_0' and C_1' are explicitly given by the expressions

    C_0' = 2(1 + γ_ω/√a) / ( √(1 − δ_{(a+1)k}) − (γ_ω/√a) √(1 + δ_{ak}) ),
    C_1' = 2 a^{−1/2} ( √(1 − δ_{(a+1)k}) + √(1 + δ_{ak}) ) / ( √(1 − δ_{(a+1)k}) − (γ_ω/√a) √(1 + δ_{ak}) ).    (12)

Consequently, Theorem 2.2 with ω = 1 reduces to the stable and robust recovery theorem of [2], which we stated above as Theorem 2.1.

Remark 3. It is sufficient that A satisfies

    δ_{(a+1)k} < δ̂(ω) := (a − γ_ω^2) / (a + γ_ω^2)    (13)

for Theorem 2.2 to hold, i.e., to guarantee stable and robust recovery of the signal x from measurements y = Ax + e.

It is easy to see that the sufficient conditions of Theorem 2.2, given in (10) or (13), are weaker than their counterparts for standard ℓ1 recovery, given in (5) or (8) respectively, if and only if α > 0.5. A similar statement holds for the constants. In words, if the support estimate is more than 50% accurate, weighted ℓ1 is more favorable than ℓ1, at least in terms of sufficient conditions and error bounds.

The theoretical results presented above suggest that the weight ω should be set equal to zero when α ≥ 0.5 and to one when α < 0.5, as these values of ω give the best sufficient conditions and error bound constants. However, we conducted extensive numerical simulations in [15] which suggest that a choice of ω ≈ 0.5 results in the best recovery when there is little confidence in the support estimate accuracy. A heuristic explanation of this observation is given in [15].
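The α > 0.5 threshold in the comparison above is easy to check numerically. The helper below evaluates the sufficient RIP bound δ̂(ω) of (13) next to the standard bound (8); the parameter values are illustrative.

```python
import numpy as np

def delta_hat(omega, alpha, rho, a):
    """Sufficient RIP bound (13) for weighted l1 with one support estimate."""
    g = omega + (1.0 - omega) * np.sqrt(1.0 + rho - 2.0 * alpha * rho)
    return (a - g ** 2) / (a + g ** 2)

a = 3.0
print("standard bound (8):    ", (a - 1) / (a + 1))    # equals delta_hat with omega = 1
print("alpha=0.7, omega=0.3:  ", delta_hat(0.3, alpha=0.7, rho=1.0, a=a))  # larger: weaker condition
print("alpha=0.3, omega=0.3:  ", delta_hat(0.3, alpha=0.3, rho=1.0, a=a))  # smaller: stronger condition
```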

Definition 2. A matrix A ∈ R^{n×N}, n < N, is said to have the null space property of order k and constant c_0 if for every vector h ∈ N(A), i.e., Ah = 0, and for every index set T ⊆ {1,...,N} of cardinality |T| = k,

    ‖h‖_1 ≤ c_0 ‖h_{T^c}‖_1.

Among the various important conclusions of [17], the following (in a slightly more general form) will be instrumental for our results.

Lemma 3.1 ([17]). If A has the restricted isometry property with δ_{(a+1)k} < (a−1)/(a+1) for some a > 1, then it has the NSP of order k and constant c_0 given explicitly by

    c_0 = 1 + √(1 + δ_{ak}) / ( √a √(1 − δ_{(a+1)k}) ).

In what follows, let x* be the solution to (2) and define the sets S = supp(x_s), T_0 = supp(x_k), and T̃ = supp(x*_k) for some integers k > s > 0.

Proposition 3.2. Suppose that A has the null space property (NSP) of order k with constant c_0 < 2, and that

    min_{j ∈ S} |x(j)| > (η + 1) ‖x_{T_0^c}‖_1,    (14)

where η = 2c_0/(2 − c_0). Then S ⊆ T̃.

The proof is presented in Section A of the appendix.

Remark 4. Note that if A has RIP so that δ_{(a+1)k} < (a−1)/(a+1) for some a > 1, then η is given explicitly by

    η = 2( √a √(1 − δ_{(a+1)k}) + √(1 + δ_{ak}) ) / ( √a √(1 − δ_{(a+1)k}) − √(1 + δ_{ak}) ).    (15)

Proposition 3.2 states that if x belongs to the class of signals that satisfy (14), then the support S of x_s, i.e., the set of indices of the s largest-in-magnitude coefficients of x, is guaranteed to be contained in the set T̃ of indices of the k largest-in-magnitude coefficients of x*. Consequently, if we consider T̃ to be a support estimate for x_k, then it has accuracy α ≥ s/k. Note here that Proposition 3.2 specifies a class of signals, defined via (14), for which partial support information can be obtained by using the standard ℓ1 recovery method. Though this class is quite restrictive and does not include various signals of practical interest, experiments suggest that highly accurate support estimates can still be obtained via ℓ1 minimization for signals that satisfy significantly milder decay conditions than (14). A theoretical investigation of this observation is an open problem.

3.2 Multiple support estimates with varying accuracy: an idealized motivating example

Suppose that the entries of x decay according to a power law, such that |x(j)| = c j^{−p} for some scaling constant c, some p > 1, and j ∈ {1,...,N}. Consider the two support sets T_1 = supp(x_{k_1}) and T_2 = supp(x_{k_2}) for k_1 > k_2, so that T_2 ⊂ T_1. Suppose also that we can find entries

    |x(s_1)| = c s_1^{−p} ≈ c (η + 1) k_1^{1−p} / (p − 1)  and  |x(s_2)| = c s_2^{−p} ≈ c (η + 1) k_2^{1−p} / (p − 1)

that satisfy (14) for the sets T_1 and T_2, respectively, where s_1 ≤ k_1 and s_2 ≤ k_2. Then

    s_1 − s_2 = ((p − 1)/(η + 1))^{1/p} ( k_1^{1−1/p} − k_2^{1−1/p} ) ≤ ((p − 1)/(η + 1))^{1/p} (k_1 − k_2),

which follows because 0 < 1 − 1/p < 1 and hence k_1^{1−1/p} − k_2^{1−1/p} ≤ k_1 − k_2. Consequently, if we define the support estimate sets T̃_1 = supp(x*_{k_1}) and T̃_2 = supp(x*_{k_2}), the corresponding accuracies α_1 = s_1/k_1 and α_2 = s_2/k_2 are clearly not necessarily equal. Moreover, if

    ((p − 1)/(η + 1))^{1/p} < α_1,    (16)

then s_1 − s_2 < α_1 (k_1 − k_2), and thus α_1 < α_2. For example, if we have p = 1.3 and η = 5, we get ((p − 1)/(η + 1))^{1/p} ≈ 0.1. Therefore, in this particular case, if α_1 > 0.1, choosing some k_2 < k_1 results in α_2 > α_1, i.e., we identify two different support estimates with different accuracies.
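Both claims in this idealized example, the threshold value in (16) and the fact that a smaller estimate is more accurate, can be checked by direct evaluation. In the sketch below, s_j is taken at the boundary of condition (14) under the power-law tail approximation, so α_j = s_j/k_j = ((p−1)/(η+1))^{1/p} k_j^{−1/p}; the values of p, η, and k are illustrative.

```python
import numpy as np

def alpha_powerlaw(k, p, eta):
    """Accuracy alpha = s/k with s = ((p-1)/(eta+1))**(1/p) * k**(1 - 1/p)."""
    return ((p - 1) / (eta + 1)) ** (1 / p) * k ** (-1 / p)

# Threshold in (16) for the text's values p = 1.3, eta = 5: about 0.1.
print(((1.3 - 1) / (5 + 1)) ** (1 / 1.3))

# Monotonicity: for fixed (p, eta), a smaller estimate is more accurate.
for k in [400, 100, 25]:
    print(k, alpha_powerlaw(k, p=2.0, eta=5.0))
```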

The observation that support estimates of different sizes drawn from x* come with different accuracies raises the question: how should we deal with the recovery of signals from CS measurements when multiple support estimates with different accuracies are available? We propose an answer to this question in the next section.

3.3 Stability and robustness conditions

In this section, we present our main theorem for stable and robust recovery of an arbitrary vector x ∈ R^N from measurements y = Ax + e, y ∈ R^n and n ≪ N, with multiple support estimates having different accuracies. Figure 2 illustrates an example of the particular case when only two disjoint support estimate sets are available.

Figure 2. Example of a sparse vector x with support set T_0 and two support estimate sets T̃_1 and T̃_2. The weight vector is chosen so that weights ω_1 and ω_2 are applied to the sets T̃_1 and T̃_2, respectively, and a weight equal to one elsewhere.

Let T_0 be the support of the best k-term approximation x_k of the signal x. Suppose that we have a support estimate T̃ that can be written as the union of m disjoint subsets T̃_j, j ∈ {1,...,m}, each of which has cardinality |T̃_j| = ρ_j k, with ρ_j ≤ a for some a > 1, and accuracy

    α_j = |T̃_j ∩ T_0| / |T̃_j|.

Again, we wish to recover x from measurements y = Ax + e with ‖e‖_2 ≤ ε. To do this, we consider the general weighted ℓ1 minimization problem

    minimize_{u ∈ R^N} ‖u‖_{1,w}  subject to  ‖Au − y‖_2 ≤ ε,    (17)

where ‖u‖_{1,w} = Σ_{i=1}^N w_i |u_i| and

    w_i = { ω_1, i ∈ T̃_1;  ... ;  ω_m, i ∈ T̃_m;  1, i ∈ T̃^c },

with 0 ≤ ω_j ≤ 1 for all j ∈ {1,...,m} and T̃ = ∪_{j=1}^m T̃_j.
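A sketch of problem (17), generalizing the earlier single-estimate solver to m disjoint estimate sets with individual weights (CVXPY again standing in for SPGL1):

```python
import numpy as np
import cvxpy as cp

def multiset_weighted_l1(A, y, eps, T_sets, omegas):
    """Problem (17): weight omega_j on the disjoint set T~_j, weight 1 off their union."""
    N = A.shape[1]
    w = np.ones(N)
    for T_j, omega_j in zip(T_sets, omegas):
        w[T_j] = omega_j
    z = cp.Variable(N)
    prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, z))),
                      [cp.norm(A @ z - y, 2) <= eps])
    prob.solve()
    return z.value
```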

Theorem 3.3. Let x ∈ R^N and y = Ax + e, where A is an n × N matrix and e is additive noise with ‖e‖_2 ≤ ε for some known ε > 0. Denote by x_k the best k-term approximation of x, supported on T_0, and let T̃_1,...,T̃_m ⊆ {1,...,N} be as defined above, with cardinalities |T̃_j| = ρ_j k and accuracies α_j = |T̃_j ∩ T_0| / |T̃_j|, j ∈ {1,...,m}. For some given ω_1,...,ω_m ∈ [0,1], define

    γ := Σ_{j=1}^m ω_j − (m − 1) + Σ_{j=1}^m (1 − ω_j) √(1 + ρ_j − 2α_j ρ_j).

If the RIP constants of A are such that there exists an a ∈ (1/k)Z, with a > 1, and

    δ_{ak} + (a/γ^2) δ_{(a+1)k} < a/γ^2 − 1,    (18)

then the solution x# to (17) obeys

    ‖x# − x‖_2 ≤ C_0(γ) ε + C_1(γ) k^{−1/2} ( Σ_{j=1}^m ω_j ‖x_{T̃_j ∩ T_0^c}‖_1 + ‖x_{T̃^c ∩ T_0^c}‖_1 ).    (19)

The proof is presented in Section B of the appendix.

Remark 5. The constants C_0(γ) and C_1(γ) are well-behaved and given explicitly by the expressions

    C_0(γ) = 2(1 + γ/√a) / ( √(1 − δ_{(a+1)k}) − (γ/√a) √(1 + δ_{ak}) ),
    C_1(γ) = 2 a^{−1/2} ( √(1 − δ_{(a+1)k}) + √(1 + δ_{ak}) ) / ( √(1 − δ_{(a+1)k}) − (γ/√a) √(1 + δ_{ak}) ).    (20)

Remark 6. Theorem 3.3 is a generalization of Theorem 2.2 to m ≥ 1 support estimates. It is easy to see that when the number of support estimates m = 1, Theorem 3.3 reduces to the recovery conditions of Theorem 2.2. Moreover, setting ω_j = 1 for all j ∈ {1,...,m} reduces the result to that of Theorem 2.1.

Remark 7. The sufficient recovery condition (13) becomes, in the case of multiple support estimates,

    δ_{(a+1)k} < δ̂(γ) := (a − γ^2)/(a + γ^2),    (21)

where γ is as defined in Theorem 3.3. When m = 1, γ reduces to the expression γ_ω = ω + (1 − ω)√(1 + ρ − 2αρ) appearing in (10) and (13).

Remark 8. The value of γ controls the recovery guarantees of the multiple-set weighted ℓ1 minimization problem. For instance, as γ approaches 0, condition (21) becomes weaker and the error bound constants C_0(γ) and C_1(γ) become smaller. Therefore, given a set of support estimate accuracies α_j, j ∈ {1,...,m}, it is useful to find the corresponding weights ω_j that minimize γ. Notice that γ is a sum of terms that are linear in each ω_j, with α_j controlling the slope: the term in ω_j has slope 1 − √(1 + ρ_j − 2α_j ρ_j). When α_j > 0.5, the slope is positive and the optimal value is ω_j = 0. Otherwise, when α_j ≤ 0.5, the slope is negative and the optimal value is ω_j = 1. Hence, as in the single support estimate case, the theoretical conditions indicate that when the α_j are known, a choice of ω_j equal to zero or one should be optimal. However, when the knowledge of the α_j is not reliable, experimental results indicate that intermediate values of ω_j produce the best recovery results.
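Remark 8's weight selection is easy to make concrete. The helpers below (with illustrative parameter values) evaluate γ and the sufficient bound δ̂(γ) of (21); scanning the weights over [0, 1] reproduces the zero-one rule stated above.

```python
import numpy as np

def gamma(omegas, alphas, rhos):
    """gamma from Theorem 3.3 for m support estimates."""
    omegas, alphas, rhos = map(np.asarray, (omegas, alphas, rhos))
    m = len(omegas)
    return (omegas.sum() - (m - 1)
            + ((1 - omegas) * np.sqrt(1 + rhos - 2 * alphas * rhos)).sum())

def delta_hat_multi(omegas, alphas, rhos, a):
    g = gamma(omegas, alphas, rhos)
    return (a - g ** 2) / (a + g ** 2)

# Two estimates: T~1 accurate (alpha_1 = 0.8), T~2 poor (alpha_2 = 0.3).
alphas, rhos, a = [0.8, 0.3], [0.5, 0.5], 3.0
for om1 in [0.0, 0.5, 1.0]:
    for om2 in [0.0, 0.5, 1.0]:
        print(om1, om2, delta_hat_multi([om1, om2], alphas, rhos, a))
# The bound is largest at omega_1 = 0 (alpha_1 > 0.5) and omega_2 = 1 (alpha_2 <= 0.5).
```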

Figure 3. Comparison between the recovered SNR (averaged over 100 experiments) using two-set weighted ℓ1 minimization with support estimate T̃ and accuracy α, three-set weighted ℓ1 minimization with support estimates T̃_1 ∪ T̃_2 = T̃ and accuracies α_1 = 0.8 and α_2 < α, and non-weighted ℓ1 minimization. (The original figure shows one panel of SNR in dB per value of α, with legend entries for ℓ1 min, two-set weighted ℓ1, and three-set weighted ℓ1 with α_1 = 0.8, ω_1 = 0.1, ω_2 = ω.)

4. NUMERICAL EXPERIMENTS

In what follows, we consider the particular case of m = 2, i.e., where there exists prior information on two disjoint support estimates T̃_1 and T̃_2 with respective accuracies α_1 and α_2. We present numerical experiments that illustrate the benefits of using three-set weighted ℓ1 minimization over two-set weighted ℓ1 and non-weighted ℓ1 minimization when additional prior support information is available. To that end, we compare the recovery capabilities of these algorithms for a suite of synthetically generated sparse signals. We also present recovery results for a practical application: recovering audio signals using the proposed weighting. In all of our experiments, we use SPGL1 [18,19] to solve the standard and weighted ℓ1 minimization problems.

4.1 Recovery of synthetic signals

We generate signals x with an ambient dimension N = 500 and fixed sparsity k = 35. We compute the (noisy) compressed measurements of x using a Gaussian random measurement matrix A of dimensions n × N with n = 100. To quantify the reconstruction quality, we use the reconstruction signal-to-noise ratio (SNR), averaged over 100 realizations of the same experimental conditions. The SNR is measured in dB and is given by

    SNR(x, x̂) = 10 log_10( ‖x‖_2^2 / ‖x − x̂‖_2^2 ),    (22)

where x is the true signal and x̂ is the recovered signal.

Recovery via two-set weighted ℓ1 minimization uses a support estimate T̃ of size |T̃| = 40 (i.e., ρ ≈ 1), where the accuracy α of the support estimate takes on the values {0.3, 0.5, 0.7} and the weight ω is chosen from {0.1, 0.3, 0.5}. Recovery via three-set weighted ℓ1 minimization assumes the existence of two support estimates T̃_1 and T̃_2, which are disjoint subsets of the T̃ described above. The set T̃_1 is chosen such that it always has an accuracy α_1 = 0.8, while T̃_2 = T̃ \ T̃_1. In all experiments, we fix ω_1 = 0.1 and set ω_2 = ω.

Figure 3 illustrates the recovery performance of three-set weighted ℓ1 minimization compared with two-set weighted ℓ1 minimization, using the setup described above, and non-weighted ℓ1 minimization. The figure shows that utilizing the extra accuracy of T̃_1 by setting a smaller weight ω_1 results in better signal recovery from the same measurements.
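A condensed, single-realization version of this experiment, reusing multiset_weighted_l1 from Section 3 and the variables x, A, y, eps, support, N from the Section 1 sketch (so the printed numbers are only indicative, not the paper's averages):

```python
import numpy as np

def snr_db(x, x_hat):
    """Reconstruction SNR (22) in dB."""
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))

def make_estimate(support, N, size, accuracy, rng):
    """Support estimate of the given size; its first entries are the correct ones."""
    n_true = int(round(accuracy * size))
    wrong = rng.permutation(np.setdiff1d(np.arange(N), support))
    return np.concatenate([rng.permutation(support)[:n_true], wrong[: size - n_true]])

rng2 = np.random.default_rng(1)
T_est = make_estimate(support, N, size=40, accuracy=0.5, rng=rng2)
T1, T2 = T_est[:20], T_est[20:]   # here T~1 is fully accurate by construction;
                                  # the paper instead builds T~1 with alpha_1 = 0.8

x2 = multiset_weighted_l1(A, y, eps, [T_est], [0.3])        # two-set weighted l1
x3 = multiset_weighted_l1(A, y, eps, [T1, T2], [0.1, 0.3])  # three-set weighted l1
print("2-set SNR:", snr_db(x, x2), " 3-set SNR:", snr_db(x, x3))
```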

4.2 Recovery of audio signals

Next, we examine the performance of three-set weighted ℓ1 minimization for the recovery of speech signals from compressed sensing measurements. In particular, the original signals are sampled at 44.1 kHz, but only one quarter of the samples are retained (with their indices chosen randomly from the uniform distribution). This yields the measurements y = Rs, where s is the speech signal and R is a restriction (of the identity) operator. Consequently, by dividing the measurements into blocks of size N, we can write y = [y_1^T, y_2^T, ...]^T. Here each y_j = R_j s_j is the measurement vector corresponding to the jth block of the signal, and R_j ∈ R^{n_j × N} is the associated restriction matrix. The signals we use in our experiments consist of 21 such blocks.

We make the following assumptions about speech signals:
1. The signal blocks are compressible in the DCT domain (for example, the MP3 compression standard uses a version of the DCT to compress audio signals).
2. The support set corresponding to the largest coefficients does not change much from block to block.
3. Speech signals have large low-frequency coefficients.

Thus, for the reconstruction of the jth block, we identify two support estimates: T̃_1 is the set corresponding to the largest n_j/16 recovered coefficients of the previous block (for the first block, T̃_1 is empty), and T̃_2 is the set corresponding to frequencies up to 4 kHz. For recovery using two-set weighted ℓ1 minimization, we define T̃ = T̃_1 ∪ T̃_2 and assign it a weight ω. In the three-set weighted ℓ1 case, we assign weight ω_1 = ω/2 on the set T̃_1 and ω_2 = ω on the set T̃ \ T̃_1.

The results of experiments on an example speech signal with N = 2048 and ω ∈ {0, 1/6, 2/6, ..., 1} are illustrated in Figure 4. It is clear from the figure that three-set weighted ℓ1 minimization has better recovery performance over all tested values of ω spanning the interval [0, 1].

Figure 4. SNRs of the two reconstruction algorithms, two-set and three-set weighted ℓ1 minimization, for a speech signal recovered from compressed sensing measurements, plotted against ω.
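A sketch of how one block of this pipeline can be set up. The helper names are hypothetical, the DCT convention (samples s_j = D c_j, with D the inverse-DCT matrix) is an assumption about the implementation, and the solve itself would reuse multiset_weighted_l1 from Section 3 (the paper's own experiments use SPGL1).

```python
import numpy as np
from scipy.fftpack import idct

fs, N = 44100, 2048
rng = np.random.default_rng(2)

def block_operator(keep_idx, N):
    """Measurement matrix for one block: restriction rows of the inverse-DCT matrix."""
    D = idct(np.eye(N), norm='ortho', axis=0)   # s = D @ c maps DCT coefficients to samples
    return D[keep_idx, :]

def support_estimates(c_prev, n_j, N, fs):
    """T~1: largest n_j/16 coefficients of the previous block; T~2: bins below 4 kHz."""
    if c_prev is None:                          # first block: T~1 is empty
        T1 = np.array([], dtype=int)
    else:
        T1 = np.argsort(np.abs(c_prev))[::-1][: n_j // 16]
    low = np.arange(int(4000 * 2 * N / fs))     # DCT bin i sits near frequency i*fs/(2N)
    T2 = np.setdiff1d(low, T1)                  # keep the two estimate sets disjoint
    return T1, T2

n_j = N // 4                                    # one quarter of the samples retained
keep = np.sort(rng.choice(N, size=n_j, replace=False))
A_j = block_operator(keep, N)                   # then y_j = A_j @ c_j in DCT coefficients
T1, T2 = support_estimates(None, n_j, N, fs)
# c_j = multiset_weighted_l1(A_j, y_j, eps, [T1, T2], [omega / 2, omega])
```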

5. CONCLUSION

In conclusion, we derived stability and robustness guarantees for the weighted ℓ1 minimization problem with multiple support estimates of varying accuracy. We showed that incorporating additional support information by applying a smaller weight to the estimated subsets of the support with higher accuracy improves the recovery conditions compared with the case of a single support estimate and the case of (non-weighted) ℓ1 minimization. We also showed that for a certain class of signals, the coefficients of which decay in a particular way, it is possible to draw a support estimate from the solution of the ℓ1 minimization problem. These results raise the question of whether it is possible to improve on the support estimate by solving a subsequent weighted ℓ1 minimization problem. Moreover, they motivate the definition of a new iterative weighted ℓ1 algorithm that depends on the support accuracy instead of the coefficient magnitude, as is the case in the Candès, Wakin, and Boyd [20] (IRL1) algorithm. We shall consider these problems elsewhere.

APPENDIX A. PROOF OF PROPOSITION 3.2

We want to find conditions on the signal x and the matrix A which guarantee that the solution x* to the ℓ1 minimization problem (2) has the property

    min_{j ∈ S} |x*(j)| > max_{j ∈ T̃^c} |x*(j)| = |x*(k+1)|,

where |x*(k+1)| denotes the (k+1)st largest-in-magnitude entry of x*; this property immediately implies S ⊆ T̃. Suppose that the matrix A has the null space property (NSP) [17] of order k, i.e., for any h ∈ N(A) (so that Ah = 0),

    ‖h‖_1 ≤ c_0 ‖h_{T^c}‖_1,

where T ⊆ {1,2,...,N} with |T| = k, and N(A) denotes the null space of A. If A has RIP with δ_{(a+1)k} < (a−1)/(a+1) for some constant a > 1, then it has the NSP of order k with a constant c_0 which can be written explicitly in terms of the RIP constants of A as follows:

    c_0 = 1 + √(1 + δ_{ak}) / ( √a √(1 − δ_{(a+1)k}) ).

Define h = x* − x; then h ∈ N(A), and we can write the ℓ1-ℓ1 instance optimality as follows:

    ‖h‖_1 ≤ (2c_0 / (2 − c_0)) ‖x_{T_0^c}‖_1,  with c_0 < 2.

Let η = 2c_0/(2 − c_0). Since ‖h‖_1 = ‖h_{T_0}‖_1 + ‖h_{T_0^c}‖_1 and ‖h_{T_0^c}‖_1 ≥ ‖x*_{T_0^c}‖_1 − ‖x_{T_0^c}‖_1, the bound on ‖h_{T_0}‖_1 is then given by

    ‖h_{T_0}‖_1 ≤ (η + 1) ‖x_{T_0^c}‖_1 − ‖x*_{T_0^c}‖_1.    (23)

The next step is to bound ‖x*_{T_0^c}‖_1 from below. Noting that T̃ = supp(x*_k) indexes the k largest-in-magnitude entries of x*, we have ‖x*_{T̃}‖_1 ≥ ‖x*_{T_0}‖_1, and therefore

    ‖x*_{T_0^c}‖_1 ≥ ‖x*_{T̃^c}‖_1 ≥ |x*(k+1)|.

Using the reverse triangle inequality, we have |x*(j)| ≥ |x(j)| − |x(j) − x*(j)| for every j, which leads to

    min_{j ∈ S} |x*(j)| ≥ min_{j ∈ S} |x(j)| − max_{j ∈ S} |x(j) − x*(j)|.

But max_{j ∈ S} |x(j) − x*(j)| = ‖h_S‖_∞ ≤ ‖h_S‖_1 ≤ ‖h_{T_0}‖_1 (since S ⊆ T_0), so combining the above three bounds we get

    min_{j ∈ S} |x*(j)| ≥ |x*(k+1)| + min_{j ∈ S} |x(j)| − (η + 1) ‖x_{T_0^c}‖_1.    (24)

Equation (24) says that if the matrix A has RIP with δ_{(a+1)k} < (a−1)/(a+1) and the signal x obeys min_{j ∈ S} |x(j)| > (η + 1) ‖x_{T_0^c}‖_1, then the support T̃ of the largest k entries of the solution x* to (2) contains the support S of the largest s entries of the signal x.

APPENDIX B. PROOF OF THEOREM 3.3

The proof of Theorem 3.3 follows along the same lines as our previous work in [15], with some modifications. Recall that the sets T̃_j are disjoint with T̃ = ∪_{j=1}^m T̃_j, and define the sets T̃_{jα} = T_0 ∩ T̃_j for all j ∈ {1,...,m}, where |T̃_{jα}| = α_j ρ_j k. Let x# = x + h be the minimizer of the weighted ℓ1 problem (17). Then

    ‖x + h‖_{1,w} ≤ ‖x‖_{1,w}.

Moreover, by the choice of weights in (17), we have

    ω_1 ‖x_{T̃_1} + h_{T̃_1}‖_1 + ... + ω_m ‖x_{T̃_m} + h_{T̃_m}‖_1 + ‖x_{T̃^c} + h_{T̃^c}‖_1 ≤ ω_1 ‖x_{T̃_1}‖_1 + ... + ω_m ‖x_{T̃_m}‖_1 + ‖x_{T̃^c}‖_1.

Consequently, splitting each set over T_0 and T_0^c,

    ‖x_{T̃^c ∩ T_0} + h_{T̃^c ∩ T_0}‖_1 + ‖x_{T̃^c ∩ T_0^c} + h_{T̃^c ∩ T_0^c}‖_1 + Σ_{j=1}^m ω_j ( ‖x_{T̃_j ∩ T_0} + h_{T̃_j ∩ T_0}‖_1 + ‖x_{T̃_j ∩ T_0^c} + h_{T̃_j ∩ T_0^c}‖_1 )
    ≤ ‖x_{T̃^c ∩ T_0}‖_1 + ‖x_{T̃^c ∩ T_0^c}‖_1 + Σ_{j=1}^m ω_j ( ‖x_{T̃_j ∩ T_0}‖_1 + ‖x_{T̃_j ∩ T_0^c}‖_1 ).

Next, we use the forward and reverse triangle inequalities to get

    Σ_{j=1}^m ω_j ‖h_{T̃_j ∩ T_0^c}‖_1 + ‖h_{T̃^c ∩ T_0^c}‖_1 ≤ ‖h_{T̃^c ∩ T_0}‖_1 + Σ_{j=1}^m ω_j ‖h_{T̃_j ∩ T_0}‖_1 + 2 ( ‖x_{T̃^c ∩ T_0^c}‖_1 + Σ_{j=1}^m ω_j ‖x_{T̃_j ∩ T_0^c}‖_1 ).

Adding Σ_{j=1}^m (1 − ω_j) ‖h_{T̃_j ∩ T_0^c}‖_1 to both sides of the inequality above, we obtain

    Σ_{j=1}^m ‖h_{T̃_j ∩ T_0^c}‖_1 + ‖h_{T̃^c ∩ T_0^c}‖_1 ≤ Σ_{j=1}^m ω_j ‖h_{T̃_j ∩ T_0}‖_1 + Σ_{j=1}^m (1 − ω_j) ‖h_{T̃_j ∩ T_0^c}‖_1 + ‖h_{T̃^c ∩ T_0}‖_1 + 2 ( ‖x_{T̃^c ∩ T_0^c}‖_1 + Σ_{j=1}^m ω_j ‖x_{T̃_j ∩ T_0^c}‖_1 ).

Since ‖h_{T_0^c}‖_1 = ‖h_{T̃ ∩ T_0^c}‖_1 + ‖h_{T̃^c ∩ T_0^c}‖_1 and ‖h_{T̃ ∩ T_0^c}‖_1 = Σ_{j=1}^m ‖h_{T̃_j ∩ T_0^c}‖_1, this easily reduces to

    ‖h_{T_0^c}‖_1 ≤ Σ_{j=1}^m ω_j ‖h_{T̃_j ∩ T_0}‖_1 + Σ_{j=1}^m (1 − ω_j) ‖h_{T̃_j ∩ T_0^c}‖_1 + ‖h_{T̃^c ∩ T_0}‖_1 + 2 ( ‖x_{T̃^c ∩ T_0^c}‖_1 + Σ_{j=1}^m ω_j ‖x_{T̃_j ∩ T_0^c}‖_1 ).    (25)

Now consider the following term from the right-hand side of (25):

    Σ_{j=1}^m ω_j ‖h_{T̃_j ∩ T_0}‖_1 + Σ_{j=1}^m (1 − ω_j) ‖h_{T̃_j ∩ T_0^c}‖_1 + ‖h_{T̃^c ∩ T_0}‖_1.

Add and subtract Σ_{j=1}^m (1 − ω_j) ‖h_{T̃_j^c ∩ T_0}‖_1, and, since T̃_{jα} = T_0 ∩ T̃_j, write ‖h_{T̃_j^c ∩ T_0}‖_1 + ‖h_{T̃_j ∩ T_0^c}‖_1 = ‖h_{(T_0 ∪ T̃_j) \ T̃_{jα}}‖_1 to get

    = Σ_{j=1}^m ω_j ( ‖h_{T̃_j ∩ T_0}‖_1 + ‖h_{T̃_j^c ∩ T_0}‖_1 ) + Σ_{j=1}^m (1 − ω_j) ‖h_{(T_0 ∪ T̃_j) \ T̃_{jα}}‖_1 + ‖h_{T̃^c ∩ T_0}‖_1 − Σ_{j=1}^m ‖h_{T̃_j^c ∩ T_0}‖_1
    = Σ_{j=1}^m ω_j ‖h_{T_0}‖_1 + Σ_{j=1}^m (1 − ω_j) ‖h_{(T_0 ∪ T̃_j) \ T̃_{jα}}‖_1 + ‖h_{T̃^c ∩ T_0}‖_1 − Σ_{j=1}^m ‖h_{T̃_j^c ∩ T_0}‖_1
    = ( Σ_{j=1}^m ω_j − m + 1 ) ‖h_{T_0}‖_1 + Σ_{j=1}^m (1 − ω_j) ‖h_{(T_0 ∪ T̃_j) \ T̃_{jα}}‖_1.

The last equality comes from ‖h_{T̃_j^c ∩ T_0}‖_1 = ‖h_{T̃^c ∩ T_0}‖_1 + ‖h_{T_0 ∩ (T̃ \ T̃_j)}‖_1 and Σ_{j=1}^m ‖h_{T_0 ∩ (T̃ \ T̃_j)}‖_1 = (m − 1) ‖h_{T_0 ∩ T̃}‖_1. Consequently, we can reduce the bound on ‖h_{T_0^c}‖_1 to the following expression:

    ‖h_{T_0^c}‖_1 ≤ ( Σ_{j=1}^m ω_j − m + 1 ) ‖h_{T_0}‖_1 + Σ_{j=1}^m (1 − ω_j) ‖h_{(T_0 ∪ T̃_j) \ T̃_{jα}}‖_1 + 2 ( ‖x_{T̃^c ∩ T_0^c}‖_1 + Σ_{j=1}^m ω_j ‖x_{T̃_j ∩ T_0^c}‖_1 ).    (26)

Next, we follow the technique of Candès et al. [2] and sort the coefficients of h_{T_0^c}, partitioning T_0^c into disjoint sets T_j, j ∈ {1,2,...}, each of size ak, where a > 1. That is, T_1 indexes the ak largest-in-magnitude coefficients of h_{T_0^c}, T_2 indexes the next ak largest-in-magnitude coefficients of h_{T_0^c}, and so on. Note that this gives h_{T_0^c} = Σ_{j≥1} h_{T_j}, with

    ‖h_{T_j}‖_2 ≤ √(ak) ‖h_{T_j}‖_∞ ≤ (ak)^{−1/2} ‖h_{T_{j−1}}‖_1.    (27)

Let T_{01} = T_0 ∪ T_1. Then, using (27) and the triangle inequality, we have

    ‖h_{T_{01}^c}‖_2 ≤ Σ_{j≥2} ‖h_{T_j}‖_2 ≤ (ak)^{−1/2} Σ_{j≥1} ‖h_{T_j}‖_1 = (ak)^{−1/2} ‖h_{T_0^c}‖_1.

Next, consider the feasibility of x# and x. Both vectors are feasible, so we have ‖Ah‖_2 ≤ 2ε and

    ‖Ah_{T_{01}}‖_2 ≤ ‖Ah‖_2 + Σ_{j≥2} ‖Ah_{T_j}‖_2 ≤ 2ε + √(1 + δ_{ak}) Σ_{j≥2} ‖h_{T_j}‖_2.

From (26) and the bound above, we get

    ‖Ah_{T_{01}}‖_2 ≤ 2ε + √((1 + δ_{ak})/(ak)) [ 2 ( ‖x_{T̃^c ∩ T_0^c}‖_1 + Σ_{j=1}^m ω_j ‖x_{T̃_j ∩ T_0^c}‖_1 ) + ( Σ_{j=1}^m ω_j − m + 1 ) ‖h_{T_0}‖_1 + Σ_{j=1}^m (1 − ω_j) ‖h_{(T_0 ∪ T̃_j) \ T̃_{jα}}‖_1 ].    (28)

Noting that |(T_0 ∪ T̃_j) \ T̃_{jα}| = (1 + ρ_j − 2α_j ρ_j)k, and using ‖Ah_{T_{01}}‖_2 ≥ √(1 − δ_{(a+1)k}) ‖h_{T_{01}}‖_2 together with the Cauchy-Schwarz bounds ‖h_{T_0}‖_1 ≤ √k ‖h_{T_0}‖_2 and ‖h_{(T_0 ∪ T̃_j) \ T̃_{jα}}‖_1 ≤ √((1 + ρ_j − 2α_j ρ_j)k) ‖h_{(T_0 ∪ T̃_j) \ T̃_{jα}}‖_2, we obtain

    √(1 − δ_{(a+1)k}) ‖h_{T_{01}}‖_2 ≤ 2ε + 2 √((1 + δ_{ak})/(ak)) ( ‖x_{T̃^c ∩ T_0^c}‖_1 + Σ_{j=1}^m ω_j ‖x_{T̃_j ∩ T_0^c}‖_1 ) + √((1 + δ_{ak})/a) [ ( Σ_{j=1}^m ω_j − m + 1 ) ‖h_{T_0}‖_2 + Σ_{j=1}^m (1 − ω_j) √(1 + ρ_j − 2α_j ρ_j) ‖h_{(T_0 ∪ T̃_j) \ T̃_{jα}}‖_2 ].

Since for every j we have ‖h_{(T_0 ∪ T̃_j) \ T̃_{jα}}‖_2 ≤ ‖h_{T_{01}}‖_2 and ‖h_{T_0}‖_2 ≤ ‖h_{T_{01}}‖_2, it follows that

    ‖h_{T_{01}}‖_2 ≤ [ 2ε + 2 √((1 + δ_{ak})/(ak)) ( ‖x_{T̃^c ∩ T_0^c}‖_1 + Σ_{j=1}^m ω_j ‖x_{T̃_j ∩ T_0^c}‖_1 ) ] / [ √(1 − δ_{(a+1)k}) − √((1 + δ_{ak})/a) ( Σ_{j=1}^m ω_j − m + 1 + Σ_{j=1}^m (1 − ω_j) √(1 + ρ_j − 2α_j ρ_j) ) ].    (29)

Finally, using ‖h‖_2 ≤ ‖h_{T_{01}}‖_2 + ‖h_{T_{01}^c}‖_2 and letting γ = Σ_{j=1}^m ω_j − m + 1 + Σ_{j=1}^m (1 − ω_j) √(1 + ρ_j − 2α_j ρ_j), we combine (26), (28), and (29) to get

    ‖h‖_2 ≤ [ 2 (1 + γ/√a) ε + 2 ( √(1 − δ_{(a+1)k}) + √(1 + δ_{ak}) ) (ak)^{−1/2} ( ‖x_{T̃^c ∩ T_0^c}‖_1 + Σ_{j=1}^m ω_j ‖x_{T̃_j ∩ T_0^c}‖_1 ) ] / [ √(1 − δ_{(a+1)k}) − (γ/√a) √(1 + δ_{ak}) ],    (30)

with the condition that the denominator is positive, or equivalently that

    δ_{ak} + (a/γ^2) δ_{(a+1)k} < a/γ^2 − 1.

ACKNOWLEDGMENTS

The authors would like to thank Rayan Saab for helpful discussions and for sharing his code, which we used for conducting our audio experiments. Both authors were supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) Collaborative Research and Development Grant DNOISE II. Ö. Yılmaz was also supported in part by an NSERC Discovery Grant.

REFERENCES

[1] Donoho, D., "Compressed sensing," IEEE Transactions on Information Theory 52(4) (2006).
[2] Candès, E. J., Romberg, J., and Tao, T., "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics 59 (2006).
[3] Candès, E. J., Romberg, J., and Tao, T., "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory 52 (2006).
[4] Donoho, D. and Elad, M., "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization," Proceedings of the National Academy of Sciences of the United States of America 100(5) (2003).
[5] Gribonval, R. and Nielsen, M., "Highly sparse representations from dictionaries are unique and independent of the sparseness measure," Applied and Computational Harmonic Analysis 22 (May 2007).
[6] Foucart, S. and Lai, M., "Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1," Applied and Computational Harmonic Analysis 26(3) (2009).
[7] Saab, R., Chartrand, R., and Yilmaz, O., "Stable sparse approximations via nonconvex optimization," in [IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)] (2008).
[8] Chartrand, R. and Staneva, V., "Restricted isometry properties and nonconvex compressive sensing," Inverse Problems 24, 035020 (2008).
[9] Saab, R. and Yilmaz, O., "Sparse recovery by non-convex optimization - instance optimality," Applied and Computational Harmonic Analysis 29, 30-48 (July 2010).
[10] Chartrand, R., "Exact reconstruction of sparse signals via nonconvex minimization," IEEE Signal Processing Letters 14(10) (2007).
[11] von Borries, R., Miosso, C., and Potes, C., "Compressed sensing using prior information," in [2nd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2007] (2007).
[12] Vaswani, N. and Lu, W., "Modified-CS: Modifying compressive sensing for problems with partially known support," arXiv:0903.5066v4 (2009).
[13] Jacques, L., "A short note on compressed sensing with partially known signal support," Signal Processing 90 (December 2010).
[14] Amin Khajehnejad, M., Xu, W., Salman Avestimehr, A., and Hassibi, B., "Weighted ℓ1 minimization for sparse recovery with prior information," in [IEEE International Symposium on Information Theory, ISIT 2009] (June 2009).
[15] Friedlander, M. P., Mansour, H., Saab, R., and Yılmaz, Ö., "Recovering compressively sampled signals using partial support information," to appear in IEEE Transactions on Information Theory.
[16] Candès, E. J. and Tao, T., "Decoding by linear programming," IEEE Transactions on Information Theory 51(12) (2005).
[17] Cohen, A., Dahmen, W., and DeVore, R., "Compressed sensing and best k-term approximation," Journal of the American Mathematical Society 22(1) (2009).
[18] van den Berg, E. and Friedlander, M. P., "Probing the Pareto frontier for basis pursuit solutions," SIAM Journal on Scientific Computing 31(2) (2008).
[19] van den Berg, E. and Friedlander, M. P., "SPGL1: A solver for large-scale sparse reconstruction" (June 2007).
[20] Candès, E. J., Wakin, M. B., and Boyd, S. P., "Enhancing sparsity by reweighted ℓ1 minimization," The Journal of Fourier Analysis and Applications 14(5) (2008).


Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition Upper bound on false alar rate for landine detection and classification using syntactic pattern recognition Ahed O. Nasif, Brian L. Mark, Kenneth J. Hintz, and Nathalia Peixoto Dept. of Electrical and

More information

Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence

Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence Best Ar Identification: A Unified Approach to Fixed Budget and Fixed Confidence Victor Gabillon Mohaad Ghavazadeh Alessandro Lazaric INRIA Lille - Nord Europe, Tea SequeL {victor.gabillon,ohaad.ghavazadeh,alessandro.lazaric}@inria.fr

More information

ORIE 6340: Mathematics of Data Science

ORIE 6340: Mathematics of Data Science ORIE 6340: Matheatics of Data Science Daek Davis Contents 1 Estiation in High Diensions 1 1.1 Tools for understanding high-diensional sets................. 3 1.1.1 Concentration of volue in high-diensions...............

More information

Approximation in Stochastic Scheduling: The Power of LP-Based Priority Policies

Approximation in Stochastic Scheduling: The Power of LP-Based Priority Policies Approxiation in Stochastic Scheduling: The Power of -Based Priority Policies Rolf Möhring, Andreas Schulz, Marc Uetz Setting (A P p stoch, r E( w and (B P p stoch E( w We will assue that the processing

More information

Sparse Signal Reconstruction via Iterative Support Detection

Sparse Signal Reconstruction via Iterative Support Detection Sparse Signal Reconstruction via Iterative Support Detection Yilun Wang and Wotao Yin Abstract. We present a novel sparse signal reconstruction ethod, iterative support detection (), aiing to achieve fast

More information

Learnability and Stability in the General Learning Setting

Learnability and Stability in the General Learning Setting Learnability and Stability in the General Learning Setting Shai Shalev-Shwartz TTI-Chicago shai@tti-c.org Ohad Shair The Hebrew University ohadsh@cs.huji.ac.il Nathan Srebro TTI-Chicago nati@uchicago.edu

More information

Lecture 20 November 7, 2013

Lecture 20 November 7, 2013 CS 229r: Algoriths for Big Data Fall 2013 Prof. Jelani Nelson Lecture 20 Noveber 7, 2013 Scribe: Yun Willia Yu 1 Introduction Today we re going to go through the analysis of atrix copletion. First though,

More information

A Theoretical Analysis of a Warm Start Technique

A Theoretical Analysis of a Warm Start Technique A Theoretical Analysis of a War Start Technique Martin A. Zinkevich Yahoo! Labs 701 First Avenue Sunnyvale, CA Abstract Batch gradient descent looks at every data point for every step, which is wasteful

More information

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY Physical and Matheatical Sciences 04,, p. 7 5 ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD M a t h e a t i c s Yu. A. HAKOPIAN, R. Z. HOVHANNISYAN

More information

Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization

Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization Structured signal recovery fro quadratic easureents: Breaking saple coplexity barriers via nonconvex optiization Mahdi Soltanolkotabi Ming Hsieh Departent of Electrical Engineering University of Southern

More information

3.8 Three Types of Convergence

3.8 Three Types of Convergence 3.8 Three Types of Convergence 3.8 Three Types of Convergence 93 Suppose that we are given a sequence functions {f k } k N on a set X and another function f on X. What does it ean for f k to converge to

More information

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

An Algorithm for Quantization of Discrete Probability Distributions

An Algorithm for Quantization of Discrete Probability Distributions An Algorith for Quantization of Discrete Probability Distributions Yuriy A. Reznik Qualco Inc., San Diego, CA Eail: yreznik@ieee.org Abstract We study the proble of quantization of discrete probability

More information

Ştefan ŞTEFĂNESCU * is the minimum global value for the function h (x)

Ştefan ŞTEFĂNESCU * is the minimum global value for the function h (x) 7Applying Nelder Mead s Optiization Algorith APPLYING NELDER MEAD S OPTIMIZATION ALGORITHM FOR MULTIPLE GLOBAL MINIMA Abstract Ştefan ŞTEFĂNESCU * The iterative deterinistic optiization ethod could not

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016/2017 Lessons 9 11 Jan 2017 Outline Artificial Neural networks Notation...2 Convolutional Neural Networks...3

More information

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE

RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE Proceedings of ICIPE rd International Conference on Inverse Probles in Engineering: Theory and Practice June -8, 999, Port Ludlow, Washington, USA : RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS

More information

arxiv: v3 [quant-ph] 18 Oct 2017

arxiv: v3 [quant-ph] 18 Oct 2017 Self-guaranteed easureent-based quantu coputation Masahito Hayashi 1,, and Michal Hajdušek, 1 Graduate School of Matheatics, Nagoya University, Furocho, Chikusa-ku, Nagoya 464-860, Japan Centre for Quantu

More information

arxiv: v1 [cs.ds] 17 Mar 2016

arxiv: v1 [cs.ds] 17 Mar 2016 Tight Bounds for Single-Pass Streaing Coplexity of the Set Cover Proble Sepehr Assadi Sanjeev Khanna Yang Li Abstract arxiv:1603.05715v1 [cs.ds] 17 Mar 2016 We resolve the space coplexity of single-pass

More information

Randomized Accuracy-Aware Program Transformations For Efficient Approximate Computations

Randomized Accuracy-Aware Program Transformations For Efficient Approximate Computations Randoized Accuracy-Aware Progra Transforations For Efficient Approxiate Coputations Zeyuan Allen Zhu Sasa Misailovic Jonathan A. Kelner Martin Rinard MIT CSAIL zeyuan@csail.it.edu isailo@it.edu kelner@it.edu

More information

1 Proof of learning bounds

1 Proof of learning bounds COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #4 Scribe: Akshay Mittal February 13, 2013 1 Proof of learning bounds For intuition of the following theore, suppose there exists a

More information

Meta-Analytic Interval Estimation for Bivariate Correlations

Meta-Analytic Interval Estimation for Bivariate Correlations Psychological Methods 2008, Vol. 13, No. 3, 173 181 Copyright 2008 by the Aerican Psychological Association 1082-989X/08/$12.00 DOI: 10.1037/a0012868 Meta-Analytic Interval Estiation for Bivariate Correlations

More information

On Poset Merging. 1 Introduction. Peter Chen Guoli Ding Steve Seiden. Keywords: Merging, Partial Order, Lower Bounds. AMS Classification: 68W40

On Poset Merging. 1 Introduction. Peter Chen Guoli Ding Steve Seiden. Keywords: Merging, Partial Order, Lower Bounds. AMS Classification: 68W40 On Poset Merging Peter Chen Guoli Ding Steve Seiden Abstract We consider the follow poset erging proble: Let X and Y be two subsets of a partially ordered set S. Given coplete inforation about the ordering

More information