
Sudocodes: Low Complexity Algorithms for Compressive Sampling and Reconstruction

Shriram Sarvotham, Dror Baron and Richard G. Baraniuk
Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA

Abstract

Sudocodes are new compressive sampling schemes for measurement and reconstruction of sparse signals using algorithms on graphs. Consider a sparse signal x ∈ R^N containing K ≪ N large coefficients, with the remaining coefficients small (or zero). Sudo-encoding computes the codeword y ∈ R^M via the linear matrix-vector multiplication y = Φx, with K < M ≪ N. We propose a non-adaptive construction of a sparse Φ whose entries are drawn from {0, +1, −1}; hence the computation of y involves only sums and differences of subsets of the elements of x. The accompanying sudo-decoding strategies efficiently recover x given y. Our first reconstruction algorithm is based on graph reduction and is tailor-made for strictly sparse signals, in which x contains exactly K non-zero coefficients. This scheme requires M = O(K log(N)) measurements for perfect reconstruction, with worst-case decoding complexity O(K log(K) log(N)). The second reconstruction algorithm is applicable to signals that have approximately K large-magnitude coefficients while the remaining coefficients are small (such as compressible signals or strictly sparse signals in noise). We model the signal coefficients as mixture Gaussian and employ message passing over a factor graph for signal recovery. We develop an O(N polylog(N)) complexity reconstruction algorithm using belief propagation. *** Above complexity needs to be refined *** Sudocodes have numerous potential applications, including erasure codes for real-valued data, coding in peer-to-peer networks, and distributed data storage systems. *** Need to mention few applications of BP: eg. hardware / parallel processing etc. ***

1 Introduction

In many signal processing applications the focus is often on identifying and estimating a few significant coefficients of a high-dimensional vector. The ground-breaking work in compressive sampling (CS) pioneered by Candès et al. [1] and Donoho [2] demonstrates that the information in the few significant coefficients can be captured (encoded) by a small number of random linear projections. The significant coefficients can then be reconstructed (decoded) from these random projections using optimization strategies. Effectively, CS lowers the complexity of data acquisition devices (such as sensors or A/D converters) at the expense of a greater computational burden at the decoder.

In this paper, we demonstrate that it is possible to design low complexity CS decoders by using carefully crafted measurement operators that can be described by sparse bipartite graphs. The decoding schemes are inspired by low-complexity graph algorithms used in error control codes such as LDPC and fountain codes. We call our proposed CS coding scheme Sudocodes. The name Sudocode is inspired by Sudoku puzzles, where the challenge is to determine unknown values of cells in a grid [3];¹ our proposed sudocode decoding technique works in a similar manner.

¹ Thanks to Ingrid Daubechies for pointing out the connection.

1.1 Compressive sampling

Consider a discrete-time signal x ∈ R^N that has K large-magnitude coefficients, with the remaining coefficients small (or zero), for some K ≪ N. The core tenet of CS is that it is unnecessary to measure all N values of x; rather, we can recover x from a small number of projections onto an incoherent basis [1, 2]. To measure (encode) x, we compute the measurement vector y ∈ R^M as M linear projections of x via the matrix-vector multiplication y = Φx. Our goal is to reconstruct (decode) x either exactly or approximately given y and Φ using M ≪ N measurements and low computational complexity.

The practical revelation that supports the CS theory is that it is not necessary to resort to combinatorial search to recover the set of non-zero coefficients of x from the ill-posed inverse problem y = Φx. A much easier problem yields an equivalent solution. We need only solve for the ℓ1-sparsest coefficients that agree with the measurements y [1, 2]:

    x̂ = arg min ‖x‖₁  s.t.  y = Φx,    (1)

as long as Φ satisfies certain conditions. These conditions are shown to be satisfied by a measurement strategy using independent and identically distributed (iid) Gaussian entries for Φ. Furthermore, the ℓ1 optimization problem, also known as Basis Pursuit [4], is significantly more approachable and can be solved with linear programming techniques. The decoder based on linear programming requires cK projections for signal reconstruction, where c ≈ log₂(1 + N/K) [5], and its reconstruction complexity is Ω(N³) [6–8].

While linear programming techniques figure prominently in the design of tractable CS decoders, their Ω(N³) complexity renders them impractical for many applications. We often encounter sparse signals with large N; for example, current digital cameras acquire images with the number of pixels N of the order of 10^6 or more. For such applications, the need for faster decoding algorithms is critical. Furthermore, the encoding also has high complexity; encoding with a full Gaussian Φ requires Θ(MN) computations.

At the expense of slightly more measurements, iterative greedy algorithms have been developed to recover the signal x from the measurements y. Examples include the iterative Orthogonal Matching Pursuit (OMP) [9], matching pursuit (MP), and tree matching pursuit (TMP) [10] algorithms. In CS applications, OMP requires an oversampling factor c ≈ 2 ln(N) [9] to succeed with high probability; the decoding complexity is Θ(NK²) *** to be verified ***. Unfortunately, Θ(NK²) is cubic in N and K, and therefore OMP is also impractical.
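As an illustration, program (1) can be handed to an off-the-shelf linear programming solver by splitting x into its positive and negative parts. The sketch below is purely illustrative (it is not the solver used for any results reported here); the problem sizes and the choice of SciPy's linprog routine are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Solve min ||x||_1 s.t. Phi x = y as an LP with x = u - v, u, v >= 0."""
    M, N = Phi.shape
    c = np.ones(2 * N)                       # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])            # Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

# Toy example: a K-sparse signal measured with an iid Gaussian Phi.
rng = np.random.default_rng(0)
N, M, K = 128, 40, 5
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N))
x_hat = basis_pursuit(Phi, Phi @ x)
print(np.max(np.abs(x - x_hat)))             # typically near zero for these sizes
```

The split x = u − v with u, v ≥ 0 is what turns the non-linear objective ‖x‖₁ into the linear objective ∑u + ∑v, at the cost of doubling the number of variables.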

Donoho et al. recently proposed the Stage-wise Orthogonal Matching Pursuit (StOMP) [11]. StOMP is an enhanced version of OMP in which multiple coefficients are resolved at each stage of the greedy algorithm, as opposed to only one in the case of OMP. Moreover, StOMP takes a fixed number of stages, while OMP takes many stages to recover the large coefficients of x. The authors show that StOMP with fast operators for Φ (such as permuted FFTs) can recover the signal in O(N log N) complexity. Therefore StOMP runs much faster than OMP or ℓ1 minimization and can be used for solving large-scale problems.

While the CS algorithms discussed above use a full Φ (the entries of Φ are non-zero in general), there is a class of techniques that employs a sparse Φ and uses group testing to decode x. Cormode and Muthukrishnan recently proposed a fast algorithm based on group testing [12, 13]. Their scheme considers subsets of the signal coefficients in which we expect at most one large coefficient to lie. Within each such subset, they locate the position and value of the large coefficient using a Hamming code construction. Their decoding scheme has O(K log²(N)) complexity, but they require M = O(K log²(N)) measurements. Gilbert et al. [14] propose the Chaining Pursuit (CP) algorithm for reconstructing compressible signals. CP reconstructs with O(K log²(N)) non-adaptive linear measurements in O(K log²(N) log²(K)) time. Simulations reveal that CP works best for super-sparse signals, where the ratio K/N is very small (see Table 1). Define the sparsity rate as the fraction of non-zero coefficients in x: S ≜ K/N. As we increase S, CP requires an enormous number of measurements; for some ranges of S the number of measurements M > N, which is undesirable.

1.2 Low complexity decoding of error-control codes via graphs

In his seminal work [15], Shannon showed that for any channel there exist families of block codes that achieve arbitrarily small error probability at any encoding rate up to the capacity of the channel. Shannon proved this result using non-constructive random codes, for which there are no practical encoding and decoding algorithms. Since 1948, several approaches to building practical codes have been proposed. The initial methods relied on imposing structure on the codewords; for example, explicit algebraic constructions of very good codes were designed for some channels. Although the coding for such codes has polynomial complexity (and is thus practical), most of these codes fared poorly in the asymptotic regime: they achieved arbitrarily small probabilities of decoding error only by decreasing the information rate to zero. More recently, a class of linear error correcting codes was proposed that uses sparse parity check matrices that lend themselves to simple and practical decoding algorithms. In spite of their simple construction, these codes achieve information rates close to the Shannon limit. This endearing combination of high performance and low complexity was unprecedented in the coding literature; in fact, the conventional wisdom that prevailed during the early 1990s was that there are no high-performance codes that have low complexity [16–18].

In sparse graph codes, either the parity check matrix (which we shall denote by H) or the generator matrix (G) of a linear code can be represented as a sparse bipartite graph. The nodes in the graph represent the transmitted bits and the constraints they satisfy. While any linear code can be described by a bipartite graph, what makes a sparse bipartite graph special is that each constraint node is connected to a small number of variable nodes in the graph. Furthermore, the number of edges in the graph scales linearly with N, as opposed to quadratically in N for a general linear code.

Sparse graph codes can be decoded by running low-complexity algorithms over the bipartite graph. The algorithm can take the form of either graph reduction (for example, decoding digital fountain codes) or message passing (for example, decoding LDPC and turbo codes) over the edges of the graph.

Graph reduction to decode digital fountain codes

The class of digital fountain codes [47] (including LT codes and Raptor codes) employs graph reduction techniques for decoding. Fountain codes are a record-breaking class of sparse graph codes for erasure channels. The principal idea behind fountain codes is as follows. The encoder produces an endless supply of encoded packets from the original source file. We can think of the resulting code as an irregular low-density generator matrix code. Let the original source file be of length Kl bits, where each encoded packet has l encoded bits. When a decoder receives slightly more than K packets, it can recover the original file. The decoder's task is to recover the message bits s from the check bits t = Gs, where G is the sparse generator matrix. The key to the decoding process is to locate check bits that are connected to only one message bit: in this scenario we immediately resolve the corresponding message bit, and therefore we can also remove the edges that connect to that node. This operation reduces the size of the graph and increases the probability of locating other check bits with node degree one. We continue the process until we resolve all the message bits. The encoding and decoding complexity of LT codes is O(Kl log_e(Kl)), where Kl is the file size. Raptor codes, another instance of fountain codes, achieve linear-time encoding and decoding by concatenating an LT code with an outer code that patches the gaps in the LT code. *** Do we need more material? I could do the following if needed: (1) Illustrate the above with a picture. (2) Write about applications of fountain codes like storage and broadcast. ***

Message passing to decode LDPC and turbo codes

LDPC codes [19] and turbo codes [20] employ message passing over a graph for decoding and come closest to approaching the Shannon limit for a noisy channel. It was the advent of turbo codes and LDPC codes that sparked the realization that low-complexity codes with relatively simple structures could allow reliable transmission at rates approaching the Shannon limit. LDPC codes and turbo codes can be completely described by sparse graphs, and the decoding is performed by iterative message passing algorithms. We start with the input consisting of the prior probabilities of the bits. The message passing algorithm uses the parity check relationship among the bits to iteratively pass messages between the nodes in the graph and extracts the posterior probabilities for the codeword bits. Although the message passing algorithm yields accurate posterior probabilities only when the bipartite graph is cycle-free, it turns out that the algorithm computes astonishingly accurate estimates for LDPC and turbo decoding in spite of the presence of cycles in the graph. The complexity of each iteration of LDPC and turbo decoding scales linearly with the codeword length (the exact complexity depends on the details of the implementation).

1.3 Sudocodes: connecting CS decoding to graph algorithms

The world of error control coding theory and the world of CS share intriguing connections, and one can draw parallels between the two. For example, a fully Gaussian/Bernoulli CS matrix [1, 2] is reminiscent of Shannon's fully random code construction. Although fully random CS matrices can efficiently capture the information content of sparse signals with very few projections, they are not amenable to fast encoding and decoding. On the other end of the spectrum, a sparse compressed sensing matrix is reminiscent of the sparse graph codes of error control coding, including LDPC, turbo and fountain codes. This insight raises the following question: can sparse CS matrices reduce the encoding and decoding complexity while maintaining coding efficiency? In particular, can we derive low complexity CS decoders inspired by the graph algorithms for sparse graph codes?

In this paper, we demonstrate that this is indeed possible. We develop strategies called sudocoding to measure the signal using sparse {0, 1} or {0, +1, −1} matrices, together with accompanying low-complexity graph-based reconstruction schemes. The first decoding algorithm is based on graph reduction and is tailor-made for strictly sparse signals: signals in R^N that contain exactly K ≪ N non-zero coefficients. The key idea here is that decoding the coefficients of x leads to a series of graph reduction steps. This algorithm is super-fast, with decoding times comparable to other low-complexity schemes proposed in the literature [12–14]. Furthermore, sudocodes for strictly sparse signals require a very small number of measurements for perfect reconstruction, compared to the other schemes. The second sudo-decoding algorithm is based on message passing over a graph and is tailor-made for approximately sparse signals: signals in R^N that have K ≪ N large coefficients while the remaining coefficients are small but non-zero. Examples of such signals are strictly sparse signals in noise and compressible signals.²

² A signal is compressible if its coefficients sorted by magnitude obey a power law; it can be approximated well by a sparse signal.

Sudo-encoding using graphs

Sudo-encoding computes the measurements y = Φx using a sparse CS matrix Φ, with the entries of Φ restricted to {0, +1, −1}. Hence the measurements are just sums and differences of small subsets of the coefficients of x. A sparse Φ can be represented as a sparse bipartite graph (Figure 1); the graph shown in the figure corresponds to one particular such Φ.
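As a concrete illustration (an arbitrary example, not the particular matrix depicted in Figure 1), the following sketch constructs a sparse Φ with entries in {0, +1, −1} and a fixed row weight L, records the neighbor set of each measurement node, and computes y = Φx; all dimensions and parameter values are made up.

```python
import numpy as np

def make_sudo_matrix(M, N, L, rng):
    """Sparse measurement matrix: each row has exactly L non-zeros drawn from {+1, -1}."""
    Phi = np.zeros((M, N))
    neighbors = []                                  # neighbor set of each measurement node
    for j in range(M):
        cols = rng.choice(N, size=L, replace=False) # random support of row j
        Phi[j, cols] = rng.choice([-1.0, 1.0], size=L)
        neighbors.append(set(cols.tolist()))
    return Phi, neighbors

rng = np.random.default_rng(1)
N, K = 1000, 10
L = N // K                                          # row weight on the order of N/K
M = 150
Phi, neighbors = make_sudo_matrix(M, N, L, rng)

x = np.zeros(N)                                     # a strictly sparse test signal
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x                                         # each y(j): sums/differences of L coefficients
```

Because each row touches only L coefficients, computing y costs O(ML) additions and subtractions, in contrast with the Θ(MN) cost of a dense Gaussian Φ.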

Figure 1: The CS measurement operator Φ can be represented as a bipartite graph. The nodes on the left correspond to the signal coefficients; the nodes on the right correspond to the measurements.

The design of Φ (column weight, row weight, etc.) is based on the properties of the signal as well as the accompanying sudo-decoding algorithm. The goal is to construct a Φ for which the decoder consumes the minimum number of measurements and operates at low complexity.

Signal models

We study two signal models in this paper, and provide sudocoding schemes for each model. We consider a signal x ∈ R^N to be strictly sparse if it contains exactly K ≪ N non-zero coefficients. We make a further assumption involving the magnitudes of the non-zero coefficients: we assume that distinct subsets of the non-zero coefficients have distinct sums; this assumption is made precise in Section 3.1. We consider a signal x ∈ R^N to be approximately sparse if it contains K large-magnitude coefficients while the remaining coefficients are small. We use the two-state Gaussian mixture distribution to model the coefficients of approximately sparse signals [21–23].

Sudo-decoding strictly sparse signals

Sudo-decoding for strictly sparse signals is based on graph reduction, powered by the insight that certain measurement values allow the decoder to resolve a set of coefficients of x. As an example, if a measurement node is connected to only one coefficient node, then we immediately infer that coefficient value (similar to the operation of the decoder for LT codes). We identify two other cases where such coefficient revelations are possible, based on ideas from group testing [24]. Once a subset of coefficients is resolved, we can remove the corresponding nodes and their neighboring edges from the graph. Thus the decoding process is one of graph reduction, reminiscent of decoding in LT codes and tornado codes [25, 26].

Sudocodes for strictly sparse signals have O(ML) = O(N log(N)) encoding complexity and O(K log(K) log(N)) worst-case decoding complexity. We show that sudocodes require M = O(K log(N)) measurements for perfect reconstruction with high probability. Sudocodes thus substantially expand the horizon for CS applications.

We present simulation results for problem sizes of practical interest and show that the number of measurements and decoding times are very competitive and outperform recently proposed schemes for a wide range of sparsity (Table 1). One of the highlights of sudocoding is that it employs an avalanche strategy to accelerate the decoding process. The inference of a coefficient value triggers an avalanche of coefficient revelations and thus enables the signal to be recovered using far fewer measurements.

Sudo-decoding approximately sparse signals

Sudo-decoding for approximately sparse signals is based on message passing over graphs to solve a Bayesian inference problem. Using the two-state mixture Gaussian distribution as a prior model for the signal coefficients, we compute an estimate of x that explains the measurements and best matches the prior. We use belief propagation, similar to the decoding of LDPC codes and turbo codes [18–20]. Sudocodes for approximately sparse signals have O(ML) = O(N log(N)) encoding complexity and O(N log(N)) worst-case decoding complexity *** Need to get BP theory in place and refine this statement ***. We show that sudocodes require M = O(K log(N)) measurements for perfect reconstruction with high probability *** again, needs refinement ***.

*** I'll introduce mean-median here, in a subsection ***

*** I'll summarize a list of applications here, in a subsection ***

2 Sudo-encoding

A CS matrix Φ can be represented as a bipartite graph G (Figure 1). The edges of G connect a coefficient node x(i) to a measurement node y(j). The neighbors of a given measurement node are the coefficient nodes that are used in generating that measurement. Sudocodes rely on the sparse nature of G, and so we choose Φ to be a sparse matrix. Thus, a measurement y(j) is computed using a small subset of the coefficients of x. We impose an additional constraint on Φ to enable even simpler encoding: the entries of Φ are restricted to {0, +1, −1}. Hence the computation of y involves only sums and differences of small subsets of the coefficients of x. The advantage of representing Φ as a graph is that the accompanying decoding procedure can employ graph algorithms over G; these algorithms can be either graph reduction or a message passing scheme over the edges of G.

In addition to the core structure of the sudo-encoding matrix described above, we may introduce other constraints to tailor the measurement process to the signal model that generated the input signal. Some of the additional constraints that we use in this paper are listed below.

1. Each row of Φ contains exactly L non-zero entries (+1 or −1), placed randomly. The row weight L is chosen based on the properties of the signal that we are trying to encode (such as sparsity), and also on the decoding process. In Sections 3.3 and 4.3 we discuss how to choose an optimal value for L.

2. In some situations, we fix the column weight of each column of Φ to be a constant R.

3. While a sparse Φ is well suited for signals that are sparse in the canonical spike basis, the encoding can be generalized to the case where x is sparse in an arbitrary basis Ψ. In this case, we multiply the sudo-encoding matrix Φ with the sparsifying matrix Ψ and use ΦΨ to encode x as y = (ΦΨ)x.

Encoding complexity: Consider the case where the row weight of Φ is a constant given by L. Each measurement requires O(L) additions/subtractions. Therefore, sudo-encoding requires O(ML) = O(N log(N)) computations to generate the M measurements. The above computation assumes that the sparsifying basis Ψ = I. For the general case Ψ ≠ I, the encoding matrix ΦΨ may not be sparse. Therefore, the encoder must perform extra computations; this extra cost is O(N²) in general. Fortunately, in many practical situations Ψ is structured (e.g., Fourier or wavelet bases) and thus amenable to fast computation. Therefore, extending sudocodes to such bases is feasible.

3 Sudo-decoding and strictly sparse signals

3.1 Signal model: strictly sparse signals

The strictly sparse signal model generates signals in R^N that contain exactly K ≪ N non-zero coefficients. We select the K indices of the non-zero coefficients at random. The non-zero coefficients are assumed to be drawn from a continuous random distribution. The reason for choosing a continuous random distribution is to enable us to make the following assumption. Let Ω ≜ {i : i ∈ [1, ..., N], x(i) ≠ 0} be the set of non-zero coefficients of x. The cardinality of Ω is given by #(Ω) = K. We assume that for any two subsets Ω₁, Ω₂ ⊆ Ω with Ω₁ ≠ Ω₂, the following condition holds:

    | ∑_{i ∈ Ω₁} x(i) − ∑_{i ∈ Ω₂} x(i) | > ε.    (2)

That is, we assume that the sum of the x(i)'s corresponding to each subset of Ω is unique up to a given precision ε. This assumption is valid with high probability in our setting, where the non-zero signal coefficients are drawn from a continuous random distribution.

To accommodate signal classes where the non-zero coefficients of the signal do not conform to the above condition, we employ dithering. We add small pseudo-random perturbations to the non-zero coefficients in order to satisfy Equation (2). The seed of the pseudo-random number generator is known to both the encoder and the decoder. The pseudo-random noise can be subtracted out at the decoder after signal reconstruction.

3.2 Decoding via graph reduction

Sudo-decoding for strictly sparse signals is a graph reduction algorithm. In this regard it operates very similarly to the decoding of LT codes [25]. The key idea is that as we resolve coefficients of x, we remove the corresponding nodes and the associated edges from the bipartite graph, thereby decreasing the size of the problem at each step. We describe the main steps below.

Figure 2: Graph reduction when a zero measurement is detected. The coefficient nodes indicated by the box are resolved as zeros. The edges that connect to the resolved coefficients can be removed, resulting in graph reduction.

Figure 3: Graph reduction when matching measurements are detected. In this scenario, a set of coefficient values can be resolved as zeros.

Pseudocode for Sudocode decoding. Algorithm 1 illustrates the sudo-decoding process. The decoding scheme is sequential; we process each measurement y(j) in succession, for j = 1, 2, ..., M. At the j-th step, we obtain the unexplained portion of the measurement y(j), which we shall denote as y′(j). *** I'll explain what unexplained measurement is. *** We examine y′(j) and check whether we can resolve some coefficient value(s) of x. We study three cases in which coefficients of x can be resolved.

Case 1: y′(j) = 0. We infer that the coefficients of x that neighbor y(j) are all zero. This follows from our uniqueness assumption on the non-zero coefficients of x. Thus with just one measurement, we have resolved a group of coefficient values. This situation is illustrated in Figure 2.
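Pending the explanation promised in the note above, here is a minimal sketch of the intended meaning, consistent with Algorithm 1 below: the unexplained portion y′(j) is the measurement minus the contribution of coefficients that have already been resolved. The neighbor set and values in the example are hypothetical.

```python
def unexplained(y_j, omega_j, resolved):
    """y'(j): measurement y(j) minus the contribution of already-resolved coefficients.
    omega_j is the neighbor set of measurement j; resolved maps resolved indices to values."""
    return y_j - sum(resolved[i] for i in omega_j if i in resolved)

# Hypothetical example: y(j) measured x(3) + x(7) + x(12), and x(3) = 1.5 is already known.
print(unexplained(4.0, {3, 7, 12}, {3: 1.5}))       # 2.5, the portion still to be explained
```

Each of the three cases operates on y′(j) rather than on the raw measurement y(j).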

Figure 4: Graph reduction when a measurement node has only one neighboring coefficient. The coefficient node is trivially resolved.

Case 2: y′(j) = y′(k) ≠ 0. We compare y′(j) to previously examined measurements (i.e., y′(k), k < j). If there is a perfect match (within the precision ε) between y′(j) and a past measurement y′(k), then we infer that the two measurements are generated from the same set of non-zero coefficients of x. To be precise, define n(j) as the set of neighboring nodes of y′(j). Then, the following statements are true:

    y′(j) = y′(k) = ∑_{i ∈ n(j) ∩ n(k)} x(i),  and
    x(i) = 0 for all i ∈ (n(j) \ n(k)) ∪ (n(k) \ n(j)),

where \ is the set-difference operator. This inference follows from our assumption (Equation (2)) on the signal model. Hence we resolve the coefficients in (n(j) \ n(k)) ∪ (n(k) \ n(j)) as zero. This situation is illustrated in Figure 3. Furthermore, if the cardinality #(n(j) ∩ n(k)) = 1 (this occurs with overwhelmingly high probability as we increase N), we can use Case 3 to determine the value of the non-zero coefficient x(i*), where i* ∈ n(j) ∩ n(k).

Case 3: #(n(j)) = 1. In this case, there exists only one coefficient node neighboring y′(j). Therefore, the value of the neighboring node x(i), where i ∈ n(j), is trivially given by x(i) = y′(j). This situation is illustrated in Figure 4.

3.2.1 Accelerated decoding

In order to facilitate efficient search for matching measurements, we store the past measurements (measurements already processed) in a Binary Search Tree (BST) data structure. For a BST containing n elements, the search, insert, delete, and update operations require O(log(n)) time on average. We also introduce the history matrix H of size R × N, where R = O(ML/N). The i-th column of H stores the list of measurement nodes that neighbor x(i). The history matrix makes it easier to update the measurements as the coefficients of x are recovered. When x(i) ≠ 0 is resolved, we subtract off its value from its neighboring measurements. We also remove x(i) and its incident edges from the bipartite graph.
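A minimal sketch of this bookkeeping, with two simplifications relative to the description above: a dictionary of lists stands in for the history matrix H, and a dictionary keyed on residuals quantized to the precision ε stands in for the BST.

```python
from collections import defaultdict

EPS = 1e-9

def key(value, eps=EPS):
    """Quantize a residual so that values equal to within eps collide on the same key."""
    return round(value / eps)

history = defaultdict(list)   # history[i]: measurement indices whose support contains i
past = defaultdict(list)      # past[key(r)]: processed measurements with residual r

def record(j, residual, neighbor_set):
    """Store a processed measurement for later matching and coefficient updates."""
    past[key(residual)].append(j)
    for i in neighbor_set:
        history[i].append(j)

def resolve_nonzero(i, value, residuals, neighbor_sets):
    """When x(i) = value is resolved, subtract it from every measurement that used it."""
    for j in history[i]:
        if i in neighbor_sets[j]:
            residuals[j] -= value
            neighbor_sets[j].discard(i)

# Tiny demo with hypothetical data: measurement 0 saw coefficients {2, 5}, residual 3.0.
residuals = {0: 3.0}
neighbor_sets = {0: {2, 5}}
record(0, residuals[0], neighbor_sets[0])
resolve_nonzero(5, 3.0, residuals, neighbor_sets)   # residual of measurement 0 drops to 0.0
```

A dictionary keyed on quantized values can miss matches that straddle a bin boundary, which is one reason the BST, searched within an ε window, is the more careful choice; its lookup cost is the same O(log n).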

3.2.2 Avalanche

An avalanche strategy further accelerates the decoding process and also reduces the number of measurements. Whenever a coefficient is resolved, we search the past measurements that measured that coefficient (using the history matrix) to induce further coefficient revelations. This triggers an avalanche of coefficient revelations (similar to the decoding of LT codes [25] and tornado codes [26]), and thus the signal can be recovered using fewer measurements. An illustration of the avalanche is shown in Figure ??. *** I didn't have time to generate the figure; will do it in the next draft. ***

3.2.3 Two phase coding

While the graph reduction strategy (which we call Phase 1) described above quickly resolves most of the coefficients of x using a small number of measurements, it could get stuck when there are only a few coefficients left to decode. This can arise in one of the following two situations. First, there could be coefficient nodes that are not connected to the graph (in other words, the coefficient was not measured). Second, consider the case where most of the coefficients are decoded. We would see frequent occurrences of measurements that provide no new information; this occurs when all of the coefficients of x used by the measurement are already resolved. At this stage it is useful to introduce a second phase of coding, which resolves the small number of remaining coefficients at once. The two-phase coding is reminiscent of Raptor codes [28], which work by concatenating an LT code (that resolves most of the bits) with an outer LDPC code that patches the gaps in the LT code.

Phase 2 measurements are encoded differently: we compute K̃ measurements of x using a non-sparse K̃ × N matrix Φ̃. The only requirement on Φ̃ is that every sub-matrix formed by taking K̃ columns from Φ̃ is invertible (the value of K̃ will be described soon). Now consider decoding. In Phase 1 decoding using graph reduction, we proceed as above until we have reconstructed most of the coefficients in x. When at least N − K̃ coefficient values have been determined, we transition to Phase 2 decoding. In Phase 2 decoding, we have at most K̃ remaining unknown coefficients. We extract the columns of Φ̃ corresponding to the unresolved coefficients to form a K̃ × K̃ matrix Φ̃_K̃. The decoder solves for the unknown coefficients by computing the matrix inverse of Φ̃_K̃ and multiplying it with the Phase 2 measurements. Since matrix inversion has cubic complexity, we select K̃ proportional to K^{1/3}, thereby requiring O(K) complexity for Phase 2 decoding.

3.3 Number of measurements

The number of measurements required for perfect reconstruction of strictly sparse signals with sudo-decoding is O(K log N), using the optimal value of L. We first present the strategy to pick the optimal L and then derive the number of measurements needed. The performance of sudo-decoding (the number of measurements M as well as the encoding and decoding time) depends on the choice of L. The optimal L depends on the probability that we capture all zero coefficients in a set of size L. For large N, this probability depends only on the sparsity rate S = K/N. Figure 5(a) shows the number of measurements needed (M) as a function of L. This plot is based on simulations for N = 16000, 64000, and larger N, at a fixed sparsity rate S = K/N. Figure 5(b) plots the optimal L (which requires the least number of measurements) for a variety of sparsity rates.

Algorithm 1: Sudocode pseudocode for strictly sparse signals.

Input: number of Phase 1 measurements M; measurement vector y; sets Ω_j ⊆ [1, N] of coefficient nodes that are neighbors of y(j), for j ∈ [1, M].
Output: reconstructed signal x with at least N − K̃ coefficients decoded; the remaining unknown coefficients are in Ω_unknown.

begin
  initialize Ω_unknown ← {1, 2, ..., N}
  initialize history matrix H (of size R × N)
  initialize BST B
  initialize j ← 0
  while #{Ω_unknown} > K̃ do
    j ← j + 1
    if j > M then
      not enough measurements to complete Phase 1; terminate Phase 1
    load the next measurement y(j) and its neighbor set Ω_j
    Ω_explained ← Ω_j \ Ω_unknown                     /* set difference */
    y′(j) ← y(j) − ∑_{k ∈ Ω_explained} x(k)
    Ω_j ← Ω_j \ Ω_explained
    if y′(j) = 0 then                                 /* Case 1 */
      Ω_unknown ← Ω_unknown \ Ω_j
      x(Ω_j) ← 0
    else
      update history matrix H
      insert y′(j) into B
      search B for a past measurement that matches y′(j)
      if a match is found (y′(j) = y′(k)) then        /* Case 2 */
        Ω_k ← unknown portion of Ω_k
        Ω_xor ← (Ω_j \ Ω_k) ∪ (Ω_k \ Ω_j)
        Ω_unknown ← Ω_unknown \ Ω_xor
        x(Ω_xor) ← 0
        Ω_intersect ← Ω_j ∩ Ω_k
        if #{Ω_intersect} = 1 then                    /* Case 3 */
          x(Ω_intersect) ← y′(j)
          Ω_unknown ← Ω_unknown \ Ω_intersect
          update past y′'s that used Ω_intersect      /* use history matrix */
end
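The following runnable sketch follows the spirit of Algorithm 1 but simplifies it in several ways: it assumes the {0, 1} variant of Φ (so that each measurement is a plain subset sum), replaces the BST with a linear scan over stored residuals, omits the history-matrix update of past measurements, and also applies Case 3 directly whenever a measurement has a single unknown neighbor. It is meant only to illustrate the three cases, not to reproduce the timings in Table 1.

```python
import numpy as np

EPS = 1e-8

def sudo_decode_phase1(y, neighbor_sets, N, K_tilde=0):
    """Phase 1 graph-reduction decoding for a strictly sparse x measured with a {0,1} Phi.
    neighbor_sets[j] is the support of row j of Phi. Returns the estimate and the set of
    coefficients still unknown (to be handed to Phase 2 when it is non-empty)."""
    x = np.zeros(N)
    unknown = set(range(N))
    past = {}                                   # j -> (residual, unknown neighbors at time j)

    for j, (y_j, omega_full) in enumerate(zip(y, neighbor_sets)):
        if len(unknown) <= K_tilde:
            break
        omega_full = set(omega_full)
        omega = omega_full & unknown            # drop already-resolved coefficients
        residual = y_j - sum(x[i] for i in omega_full - unknown)
        if abs(residual) < EPS:                 # Case 1: every unknown neighbor is zero
            unknown -= omega
            continue
        match = next((k for k, (r, _) in past.items() if abs(r - residual) < EPS), None)
        if match is not None:                   # Case 2: matching residuals
            omega_k = past[match][1] & unknown
            unknown -= omega ^ omega_k          # symmetric difference resolved as zero
            common = omega & omega_k
            if len(common) == 1:                # Case 3 via the intersection
                (i,) = common
                x[i] = residual
                unknown.discard(i)
        elif len(omega) == 1:                   # Case 3: single unknown neighbor
            (i,) = omega
            x[i] = residual
            unknown.discard(i)
        past[j] = (residual, omega)
    return x, unknown

# Usage with a {0,1} encoder (hypothetical sizes): neighbor_sets[j] is the support of row j.
rng = np.random.default_rng(2)
N, K, M, L = 1000, 10, 300, 100
supports = [set(rng.choice(N, size=L, replace=False).tolist()) for _ in range(M)]
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = np.array([sum(x_true[i] for i in s) for s in supports])
x_hat, unresolved = sudo_decode_phase1(y, supports, N)
print(len(unresolved), np.max(np.abs(x_true - x_hat)))
```

With the row weight L on the order of N/K, Case 1 quickly eliminates most zero coefficients and Cases 2 and 3 pick off the non-zero ones; the full scheme described above additionally relies on the BST, the history matrix, the avalanche strategy, and the Phase 2 dense matrix to finish off the last few coefficients.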

Number of measurements M

The following theorem characterizes the trade-off between M and the fraction of coefficients recovered.

Theorem 1 Using L = O(N/K), Phase 1 sudo-decoding requires M = O(K log(1/ε)) measurements for exact reconstruction of N(1 − ε) coefficients.

Proof: Let L = c₁/S = c₁N/K. Consider a single measurement. We have Pr(zero measurement) = f₁(c₁) + o(1), where the o(1) term vanishes as N increases and the binomial distribution is approximated as Poisson. Similarly, the probability that a single coefficient was measured satisfies Pr(single coefficient) = f₂(c₁) + o(1). Let Phase 1 take M = c₂K measurements in total. In the remainder of the proof we ignore the o(1) terms, because they are easily accounted for by modifying M slightly.

Phase 1 encounters a zero measurement f₁(c₁)c₂K times; each time, L = c₁N/K zero coefficients are determined, for a total of c₁c₂f₁(c₁)N zero coefficients. However, the same zero coefficient may have been determined multiple times. Because there are N − K zero coefficients in total, each was determined c₁c₂f₁(c₁)N/(N − K) > c₁c₂f₁(c₁) times. Therefore, the probability that any specific zero coefficient was determined is greater than f₃(c₁c₂f₁(c₁)), where the Poisson approximation gives f₃(θ) = 1 − e^{−θ} + o(1).

We now analyze the total number of non-zero coefficients recovered in Phase 1. The total number of times that a single coefficient was measured is c₂f₂(c₁)K. If the same non-zero coefficient is measured multiple times without other coefficients interfering, then that coefficient is recovered by Phase 1. This happens with probability f₄(c₂f₂(c₁)), where the Poisson approximation gives f₄(θ) = 1 − e^{−θ}(1 + θ) + o(1). We complete the proof by noting that c₁, f₁(c₁), and f₂(c₁) are constants, and so the proportion of coefficients that are not recovered decays as e^{−O(c₂)}.

Phase 1 must cover all but the last O(K^{1/3}) coefficients, which corresponds to ε = O(K^{1/3}/N). Applying Theorem 1, we have M = O(K log(N/K^{1/3})) = O(K log(N)). We express this in the following.

Corollary 1 Using L = O(N/K), Phase 1 sudo-decoding requires M = O(K log(N)) measurements for exact reconstruction of N − K̃ coefficients, where K̃ = O(K^{1/3}).

Finally, Phase 2 requires O(K^{1/3}) measurements, which is much less than the O(K log(N)) measurements required for Phase 1. Therefore, the total number of measurements required for exact reconstruction by sudo-decoding is M = O(K log(N)).

3.4 Decoding complexity

The complexity of sudo-decoding for strictly sparse signals is O(K log(K) log(N)) (derived below). This complexity can be achieved using the following data structures. The measurement matrix Φ must be stored sparsely.

Figure 5: (a) Oversampling factor M/K vs. L, at a fixed sparsity rate S = K/N. (b) Optimal L for different sparsity rates.

By storing the indices of the L coefficients measured by each row, we can efficiently perform operations such as comparing the support sets of two rows when two non-zero measurements are found to be equal. Using the BST data structure to store past measurements, we can search for matching measurements with O(log(M)) complexity. Using the history matrix H makes it easier to update the measurements as the coefficients of x are recovered, as described in Section 3.2.

Every search for past identical measurements using the BST costs O(log(M)). Because the aggregate number of accesses to the BST is O(M), the aggregate decoding complexity is O(M log(M)) = O(K log(N) log(K log(N))) = O(K log(K) log(N)).³ Phase 2 inverts a matrix of size K̃ × K̃, and so its complexity is O(K̃³) = O((K^{1/3})³) = O(K), which is smaller than the computation in Phase 1. We summarize the discussion with the following theorem.

Theorem 2 The computational complexity of the sudo-decoder for strictly sparse signals is O(K log(K) log(N)).

3.5 Storage requirements

The bulk of the storage is consumed by the representation of Φ and the history matrix H. The storage required for Φ is O(ML), since we exploit the sparse representation of Φ. The storage required for H is of the same order as that for Φ, because H has dimensions O(ML/N) × N. Therefore, the overall storage requirement is O(ML). We assume that the columns of the Phase 2 encoding matrix can be generated using the pseudo-random seed shared by the encoder and the decoder. This obviates the need to store the Phase 2 matrix (which otherwise would consume O(NK̃) storage).

³ We assume that K grows faster than log(N). Otherwise, the decoding complexity is O(K log(N) log log(N)).

Table 1: Comparison of Sudocodes with Chaining Pursuit.⁵ *** I will update with new sim results ***

    N            K         Chaining Pursuit           Sudocodes
    1,000        10        M = 1,645,   T =           M = 204,     T = 0.19
    1,000        100       M = 52,636,  T = 1.02      M = 460,     T = 0.21
    10,000       10        M = 2,858,   T =           M = 456,     T = 0.14
    10,000       100       M = ,        T = 4.60      M = 798,     T = 0.30
    10,000       1,000     M > 10^6,    T > 30        M = 4,346,   T = 1.76
    100,000      10        M = 45,785,  T = 1.36      M = 929,     T = 1.85
    100,000      100       M > 10^6,    T > 30        M = 1,399,   T = 1.92
    100,000      1,000     M > 10^6,    T > 30        M = 5,089,   T = 3.38
    1,000,000    10        M = 82,889,  T = 6.7       M = 2,049,   T = 15.9
    1,000,000    100       M > 10^7,    T > 30        M = 2,506,   T = 18.5
    1,000,000    1,000     M > 10^7,    T > 30        M = 6,941,   T = 19.8
    1,000,000    10,000    M > 10^7,    T > 30        M = 49,905,  T =

⁵ The decoding time T is in seconds, based on simulations in Matlab running on a Linux platform with a 2.2 GHz AMD Athlon 64 X2 Dual Core Processor and 2 GB of memory. The value of M shown for Sudocodes includes measurements in both decoding phases. *** Two times are shown for Sudo-decoding; T_BST is the decoding time using the binary search tree, and T_history is the decoding time using the old technique (history matrix). ***

3.6 Properties

*** I'll write a list of properties like progressivity, etc. ***

3.7 Numerical results

The decoding performance of sudocodes (number of measurements M and decoding time T) is presented in Table 1 for a range of N and K. For comparison, the corresponding performance of Chaining Pursuit (CP) [14] is also shown. Sudocodes perform very well in all sparsity rate regimes shown in the table. CP is well suited for super-sparse (very small S = K/N) signals.

For moderately sparse signals, CP requires a very large number of measurements; in particular, M > N in some simulations.

4 Sudo-decoding and approximately sparse signals

4.1 Signal model: approximately sparse signals

We use the two-state Gaussian mixture distribution to model the coefficients of approximately sparse signals [21–23]. This stochastic model for the signal succinctly captures our prior knowledge about its sparsity. We describe the signal model below.

Let X = [X(1), X(2), ..., X(N)] be a random vector in R^N, and let us consider the signal x = [x(1), x(2), ..., x(N)] as an outcome of X. The key observation we exploit is that a sparse signal consists of a small number of large coefficients and a large number of small coefficients. Thus each coefficient of the signal can be associated with a state variable that can take on two values: high, corresponding to a coefficient of large magnitude, and low, representing a coefficient of small magnitude. Let the state associated with the j-th coefficient be denoted by q(j), where either q(j) = 1 (high) or q(j) = 0 (low). We view q(j) as an outcome of the state random variable Q(j) that takes values in {0, 1}. Let Q = [Q(1), Q(2), ..., Q(N)] be the state random vector associated with the signal. The actual state configuration q = [q(1), q(2), ..., q(N)] ∈ {0, 1}^N is one of 2^N possible outcomes of Q.

We associate with each possible state of coefficient j a probability density for X(j), resulting in a two-state mixture distribution for that coefficient. For the high state, we choose a high-variance zero-mean Gaussian distribution, and for the low state, we choose a low-variance zero-mean Gaussian distribution to model the coefficients. Thus the distribution of X(j) conditioned on Q(j) is given by

    X(j) | Q(j) = 1 ∼ N(0, σ₁²)  and  X(j) | Q(j) = 0 ∼ N(0, σ₀²),

where σ₁² > σ₀². For simplicity, we assume independence between coefficients: the outcome of the state or value of a coefficient does not influence the state or value of another coefficient.⁶ Finally, to ensure that we have about K large-magnitude coefficients, we choose the probability mass function (pmf) of the state variable Q(j) to be Bernoulli with P[Q(j) = 1] = S and P[Q(j) = 0] = 1 − S, where S = K/N is the sparsity rate (we assume that the sparsity rate is known).

⁶ The model can be extended to capture the dependencies between coefficients, if any. We leave this for future work.

The two-state Gaussian mixture model is completely characterized by three parameters: the distribution of the state variable (parametrized by the sparsity rate S), and the variances σ₁² and σ₀² of the Gaussian pdfs corresponding to each state. Figure 6 provides a pictorial illustration of the two-state mixture Gaussian model. Mixture Gaussian models have been successfully employed in image processing and inference problems because they are simple yet effective in modeling real-world signals [21–23]. Theoretical connections have also been made between wavelet coefficient mixture models and the fundamental parameters of Besov spaces, function spaces that have proved invaluable for characterizing real-world images [29].

Figure 6: Illustration of the two-state mixture Gaussian model for the random variable X. The distributions of X conditioned on the state variable Q are depicted, along with the overall non-Gaussian distribution of X. We use this mixture distribution to model the prior for the coefficients of a sparse signal.

Furthermore, by increasing the number of states and allowing non-zero means, we can make the fit arbitrarily close for densities with a finite number of discontinuities [30]. Although we study the use of two-state mixture Gaussian distributions for modeling the coefficients of the signal, the sudo-decoding proposed in the sequel can be used with other prior distributions as well.

4.2 Decoding via statistical inference

Sudo-decoding for approximately sparse signals is a Bayesian inference problem that can be solved efficiently using a message passing algorithm over factor graphs. The key idea in Bayesian inference is to determine the signal that is consistent with the observed measurements and best matches our signal model. As before, we observe the linear projections y ∈ R^M of a sparse signal x ∈ R^N given by y = Φx, where Φ is the compressed sensing matrix of dimension M × N, with M ≪ N. The goal is to estimate x given Φ and y. Solving for x satisfying y = Φx is an under-determined problem with infinitely many solutions. The solutions lie along a hyperplane of minimum dimension N − M [31]. We use Bayesian inference to locate the solution in this hyperplane that best matches our prior signal model.

We pose CS reconstruction as the following inference problems (we consider MAP and MMSE estimates):

    x̂_MMSE = arg min_{x′} E‖X − x′‖₂²  s.t.  y = Φx′,  and
    x̂_MAP  = arg max_{x′} P[X = x′]  s.t.  y = Φx′,

where the expectation is taken over the prior distribution P of X. The MMSE estimate can also be expressed as a conditional mean, given by x̂_MMSE = E[X | Y = y], where Y is the random vector in R^M corresponding to the measurements. In the sequel, we first present a scheme to determine the exact MMSE estimate of the signal (Section 4.2.1). This scheme has exponential complexity, and is hence impractical for large N. However, the sparse structure of Φ enables us to use low-complexity message-passing schemes to estimate x approximately; this procedure is described in Section 4.2.2.

*** The following 2 sub-sections break the parallelism with the strictly sparse decoding section. I see no elegant way around it. ***

4.2.1 Exact solution to CS statistical inference

To compute the MAP and MMSE estimates of the signal, let us first consider the simple case in which we know the state configuration q of the signal. In this setting, the MMSE estimate of x can be computed using the pseudo-inverse of Φ after applying the covariance matrix of the associated state configuration (as described below). The covariance matrix Σ_q associated with a state configuration q is given by Σ_q = E[XXᵀ | Q = q]. The covariance matrix is diagonal (because the coefficients are independent), with the j-th diagonal entry equal to σ₀² if the state q(j) = 0, or σ₁² if q(j) = 1. We have the following Theorem:

Theorem 3 Let y = Φx be observed. The MMSE and the MAP estimates of x conditioned on knowing the state configuration q are given by

    x̂_MAP,q = x̂_MMSE,q = Σ_q Φᵀ (Φ Σ_q Φᵀ)⁻¹ y,

where Σ_q is the covariance of X conditioned on knowing the state configuration q.

Proof: Conditioned on knowing the state configuration q, the distribution of X is multivariate Gaussian with covariance Σ_q. Therefore,

    P[X = x | Q = q] = (2π)^{−N/2} det(Σ_q)^{−1/2} exp(−½ xᵀ Σ_q⁻¹ x).

This probability is maximized when xᵀ Σ_q⁻¹ x is minimized. Consider the MAP estimate x̂_MAP,q subject to y = Φx: this is given by

    x̂_MAP,q = arg min_x xᵀ Σ_q⁻¹ x  s.t.  y = Φx.

Now consider a change of variables, setting θ = Σ_q^{−1/2} x. In this setting, we have θ̂_MAP = arg min_θ θᵀθ such that y = Φ Σ_q^{1/2} θ. This is a simple least squares problem and the solution can be computed using the pseudo-inverse:

    θ̂_MAP = (Φ Σ_q^{1/2})⁺ y = Σ_q^{1/2} Φᵀ (Φ Σ_q Φᵀ)⁻¹ y.

Because x̂_MAP,q = Σ_q^{1/2} θ̂_MAP, we have proved the result for x̂_MAP,q. Finally, the MMSE and MAP estimates are identical for multivariate Gaussian random variables, and this proves the Theorem.

We present a scheme with exponential complexity that computes x̂_MMSE when the states are unknown. We use this solution as a yardstick against which to compare the approximate solutions we propose later. To determine x̂_MMSE accurately, we compute the conditional MMSE estimate for each of the 2^N state configurations using Theorem 3. The following Theorem relates the overall MMSE estimate to the conditional MMSE estimates.

Theorem 4 The MMSE estimate of x when the state configuration is unknown is given by

    x̂_MMSE = ∑_{q ∈ {0,1}^N} P[Y = y | Q = q] P[Q = q] x̂_MMSE,q.

Figure 7: Equi-probable surfaces using the mixture Gaussian model for the coefficients. The line represents the set of x's that satisfy Φx = y. As we expand the equi-probable surface, it touches the line at the MAP estimate. *** I will add descriptive notes on the figure ***

Proof: *** I'll fill in few words to outline the steps ***

    x̂_MMSE = E[X | Y = y]
           = ∑_{q ∈ {0,1}^N} E[X, Q = q | Y = y]
           = ∑_{q ∈ {0,1}^N} P[Q = q | Y = y] E[X | Q = q, Y = y]
           = ∑_{q ∈ {0,1}^N} P[Q = q | Y = y] x̂_MMSE,q
           = ∑_{q ∈ {0,1}^N} P[Y = y | Q = q] P[Q = q] x̂_MMSE,q,

where the last step follows from Bayes' rule (the common normalization factor 1/P[Y = y] is omitted).

The signal estimate can be interpreted geometrically. As a simple example, consider a 2D signal (N = 2) where each coefficient is modeled by a mixture Gaussian distribution. Figure 7 shows a sample sketch of contours of equi-probable values of x. *** I will expand on this and talk about how this relates to MAP and MMSE estimates of x. ***

It is well known that exact inference in arbitrary graphical models is an NP-hard problem [32]. However, sparse matrices Φ lend themselves to easy approximate inference using message passing algorithms such as BP. In the sequel, we present a message-passing approach for CS reconstruction, in which we compute the approximate marginals of X(j) given Y and thus evaluate the MMSE or MAP estimate for each coefficient.
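As a quick sanity check on Theorem 3, the sketch below (arbitrary parameter values, purely illustrative) draws a signal from the two-state mixture Gaussian prior, measures it with a sparse {0, +1, −1} matrix, and evaluates the conditional estimate Σ_q Φᵀ(ΦΣ_qΦᵀ)⁻¹y for the true state configuration q.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, L = 200, 80, 20
S, sigma1, sigma0 = 0.05, 10.0, 0.1          # sparsity rate and the two state std deviations

# Draw states and coefficients from the two-state mixture Gaussian prior.
q = rng.random(N) < S
x = np.where(q, sigma1, sigma0) * rng.standard_normal(N)

# Sparse {0, +1, -1} measurement matrix with fixed row weight L.
Phi = np.zeros((M, N))
for j in range(M):
    cols = rng.choice(N, size=L, replace=False)
    Phi[j, cols] = rng.choice([-1.0, 1.0], size=L)
y = Phi @ x

# Conditional MMSE/MAP estimate of Theorem 3, given the true state configuration q.
Sigma_q = np.diag(np.where(q, sigma1**2, sigma0**2))
x_hat = Sigma_q @ Phi.T @ np.linalg.solve(Phi @ Sigma_q @ Phi.T, y)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))   # relative reconstruction error
```

Knowing q is exactly what the decoder does not have; Theorem 4 averages these conditional estimates over all 2^N state configurations, which is what motivates the approximate message-passing solution of the next subsection.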

Figure 8: Factor graph depicting the relationship between the variables involved in our CS problem: the states Q, the coefficients X, and the measurements Y. The variable nodes are depicted in black and the constraint nodes in white. *** For some reason latex renames the state variable from Q to O. I'll fix this in my next draft. ***

4.2.2 Approximate solution to CS inference via message passing

We use belief propagation (BP), an efficient scheme to solve inference problems by message passing over graphical models [18, 33–38]. We employ BP by message passing over factor graphs. Factor graphs enable fast algorithms to compute global functions of many variables by exploiting the way in which the global function factors into a product of simpler local functions, each of which depends on a subset of the variables [39].

The factor graph shown in Figure 8 captures the relationship between the signal coefficients, their states, and the observed CS measurements. The factor graph contains two types of vertices: variable nodes (black) and constraint nodes (white). The factor graph is bipartite: all edges connect a variable node to a constraint node. A constraint node encapsulates the dependencies (constraints) that its neighbors (variable nodes) are subjected to. The factor graph has three types of variable nodes: states Q(j), coefficients X(j) and measurements Y(i). Furthermore, it has three types of constraint nodes. The first type, prior constraint nodes, imposes the Bernoulli prior on the state variables. The second type of constraint nodes connects the state variables and the coefficient variables; these constraints impose the conditional distribution of the coefficient value given the state. The third type of constraint nodes connects one measurement variable to the set of coefficient variables that are involved in computing that measurement.

In the CS inference problem, the observed variables are the outcomes of the measurement variables Y. We employ BP to (approximately) infer the probability distributions (beliefs) of the coefficient and state variables conditioned on the measurements. In other words, BP provides the mechanism to infer the marginal distributions P[X(j) | Y = y] and P[Q(j) | Y = y]. Knowing these distributions, one can extract either the MAP estimate (the value of x(j) that maximizes P[X(j) | Y = y]) or the MMSE estimate (the mean of P[X(j) | Y = y]) for each coefficient. An overview of BP is provided in Appendix A.

*** To expand the following 2 points into a paragraph each. ***

1) There are only two types of message processing in CS reconstruction using BP: multiplication of beliefs (at the variable nodes) and convolution (at the constraint nodes).

2) We need a strategy to encode the beliefs. We propose two strategies: (a) approximating the continuous distribution by a mixture Gaussian with a given number of components, and using the parameters of the mixture Gaussian as the messages; (b) sampling the pdfs uniformly and using the samples as the messages. We present results for both schemes and discuss the pros and cons of each. We could use alternative schemes to encode continuous distributions, such as particle filtering, importance sampling and Monte Carlo methods [40], but we leave those directions for future work.

Mixture Gaussian parameters as messages. In this method, we approximate a distribution by a mixture Gaussian with a maximum number of components. We then use the mixture Gaussian parameters as the messages. For both multiplication and convolution, the resulting number of components in the mixture is multiplicative. To keep the message representation tractable, we employ model order reduction on the mixture Gaussian distributions. We use the Iterative Pairwise Replacement Algorithm (IPRA) scheme for model order reduction [41].

The advantage of using mixture Gaussian parameters to encode continuous distributions is that the messages are short and hence do not consume large amounts of memory. However, this encoding scheme could be too limiting. The scheme is tailor-made to work for mixture Gaussian priors; in particular, it cannot be extended to priors such as ℓp-compressible signals. Also, the model order reduction (a necessary step to keep the representation tractable) introduces errors in the messages that can affect the quality of the solution as well as impact the convergence of BP [42]. Furthermore, the model reduction algorithms (such as IPRA) used to limit the number of components in the mixture can be computationally expensive.

Samples of the pdf as messages. An alternative approach to encoding the beliefs is to sample the pdf and send the samples as the messages. In this scheme, multiplication of pdfs corresponds to point-wise multiplication of messages, and convolution of the pdfs can be computed efficiently in the frequency domain.⁷ The primary advantage of using pdf samples as messages is that this representation can be applied to a larger class of prior distributions for the coefficients. Also, both multiplication and convolution of the pdfs can be computed efficiently. However, the drawback of this scheme is the large memory requirement: we require finer sampling of the pdf for more accurate reconstruction. As a rule of thumb, we sample the pdfs with a spacing less than σ₀, the standard deviation of the noise level in the signal. Furthermore, the mere process of sampling the pdf introduces errors in the messages that can impact the accuracy and convergence of BP [42].

Stabilizing BP using damping. We employ damping using message damped belief propagation (MDBP) [43] to stabilize BP in the face of loopy graphs and inaccurate belief representations. We refer the reader to Appendix A for the reasons to use damping.

⁷ This approach of computing the FFT of the message to enable fast convolution has been used in LDPC decoding over GF(2^q) using BP [18].
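A minimal sketch of the two message operations when beliefs are carried as uniformly sampled pdfs; the grid spacing, renormalization and clipping details below are illustrative choices, not the settings used for the results in this paper.

```python
import numpy as np

def multiply_beliefs(p, q, dx):
    """Variable-node update: pointwise product of two sampled pdfs, renormalized."""
    r = p * q
    return r / (r.sum() * dx)

def convolve_beliefs(p, q, dx):
    """Constraint-node update: convolution of two sampled pdfs via the FFT."""
    n = len(p) + len(q) - 1
    r = np.fft.irfft(np.fft.rfft(p, n) * np.fft.rfft(q, n), n) * dx
    r = np.clip(r, 0.0, None)                 # clip tiny negative values from round-off
    return r / (r.sum() * dx)

# Example: convolving two sampled zero-mean Gaussians gives (approximately) a Gaussian
# whose variance is the sum of the two variances, on a correspondingly wider grid.
dx = 0.01
grid = np.arange(-5.0, 5.0, dx)

def sampled_gaussian(std):
    return np.exp(-grid**2 / (2 * std**2)) / (std * np.sqrt(2 * np.pi))

conv = convolve_beliefs(sampled_gaussian(0.5), sampled_gaussian(1.0), dx)  # length 2*len(grid) - 1
```

The pointwise product is the fusion a coefficient node performs on incoming beliefs, and the convolution is what a measurement constraint performs when combining the pdfs of the coefficients it sums; the FFT keeps the constraint-node update quasi-linear in the number of samples, as in LDPC decoding over GF(2^q).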


More information

Sparse analysis Lecture VII: Combining geometry and combinatorics, sparse matrices for sparse signal recovery

Sparse analysis Lecture VII: Combining geometry and combinatorics, sparse matrices for sparse signal recovery Sparse analysis Lecture VII: Combining geometry and combinatorics, sparse matrices for sparse signal recovery Anna C. Gilbert Department of Mathematics University of Michigan Sparse signal recovery measurements:

More information

ECEN 655: Advanced Channel Coding

ECEN 655: Advanced Channel Coding ECEN 655: Advanced Channel Coding Course Introduction Henry D. Pfister Department of Electrical and Computer Engineering Texas A&M University ECEN 655: Advanced Channel Coding 1 / 19 Outline 1 History

More information

Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images

Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images Sparse Solutions of Systems of Equations and Sparse Modelling of Signals and Images Alfredo Nava-Tudela ant@umd.edu John J. Benedetto Department of Mathematics jjb@umd.edu Abstract In this project we are

More information

Lecture 3: Error Correcting Codes

Lecture 3: Error Correcting Codes CS 880: Pseudorandomness and Derandomization 1/30/2013 Lecture 3: Error Correcting Codes Instructors: Holger Dell and Dieter van Melkebeek Scribe: Xi Wu In this lecture we review some background on error

More information

Lecture Notes 9: Constrained Optimization

Lecture Notes 9: Constrained Optimization Optimization-based data analysis Fall 017 Lecture Notes 9: Constrained Optimization 1 Compressed sensing 1.1 Underdetermined linear inverse problems Linear inverse problems model measurements of the form

More information

Layered Synthesis of Latent Gaussian Trees

Layered Synthesis of Latent Gaussian Trees Layered Synthesis of Latent Gaussian Trees Ali Moharrer, Shuangqing Wei, George T. Amariucai, and Jing Deng arxiv:1608.04484v2 [cs.it] 7 May 2017 Abstract A new synthesis scheme is proposed to generate

More information

Introduction to Compressed Sensing

Introduction to Compressed Sensing Introduction to Compressed Sensing Alejandro Parada, Gonzalo Arce University of Delaware August 25, 2016 Motivation: Classical Sampling 1 Motivation: Classical Sampling Issues Some applications Radar Spectral

More information

The Pros and Cons of Compressive Sensing

The Pros and Cons of Compressive Sensing The Pros and Cons of Compressive Sensing Mark A. Davenport Stanford University Department of Statistics Compressive Sensing Replace samples with general linear measurements measurements sampled signal

More information

Part III Advanced Coding Techniques

Part III Advanced Coding Techniques Part III Advanced Coding Techniques José Vieira SPL Signal Processing Laboratory Departamento de Electrónica, Telecomunicações e Informática / IEETA Universidade de Aveiro, Portugal 2010 José Vieira (IEETA,

More information

Compressed Sensing Using Reed- Solomon and Q-Ary LDPC Codes

Compressed Sensing Using Reed- Solomon and Q-Ary LDPC Codes Compressed Sensing Using Reed- Solomon and Q-Ary LDPC Codes Item Type text; Proceedings Authors Jagiello, Kristin M. Publisher International Foundation for Telemetering Journal International Telemetering

More information

Model-Based Compressive Sensing for Signal Ensembles. Marco F. Duarte Volkan Cevher Richard G. Baraniuk

Model-Based Compressive Sensing for Signal Ensembles. Marco F. Duarte Volkan Cevher Richard G. Baraniuk Model-Based Compressive Sensing for Signal Ensembles Marco F. Duarte Volkan Cevher Richard G. Baraniuk Concise Signal Structure Sparse signal: only K out of N coordinates nonzero model: union of K-dimensional

More information

Guess & Check Codes for Deletions, Insertions, and Synchronization

Guess & Check Codes for Deletions, Insertions, and Synchronization Guess & Check Codes for Deletions, Insertions, and Synchronization Serge Kas Hanna, Salim El Rouayheb ECE Department, Rutgers University sergekhanna@rutgersedu, salimelrouayheb@rutgersedu arxiv:759569v3

More information

Belief propagation decoding of quantum channels by passing quantum messages

Belief propagation decoding of quantum channels by passing quantum messages Belief propagation decoding of quantum channels by passing quantum messages arxiv:67.4833 QIP 27 Joseph M. Renes lempelziv@flickr To do research in quantum information theory, pick a favorite text on classical

More information

Convolutional Coding LECTURE Overview

Convolutional Coding LECTURE Overview MIT 6.02 DRAFT Lecture Notes Spring 2010 (Last update: March 6, 2010) Comments, questions or bug reports? Please contact 6.02-staff@mit.edu LECTURE 8 Convolutional Coding This lecture introduces a powerful

More information

Compressed sensing. Or: the equation Ax = b, revisited. Terence Tao. Mahler Lecture Series. University of California, Los Angeles

Compressed sensing. Or: the equation Ax = b, revisited. Terence Tao. Mahler Lecture Series. University of California, Los Angeles Or: the equation Ax = b, revisited University of California, Los Angeles Mahler Lecture Series Acquiring signals Many types of real-world signals (e.g. sound, images, video) can be viewed as an n-dimensional

More information

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011

6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student

More information

Lecture 4 Noisy Channel Coding

Lecture 4 Noisy Channel Coding Lecture 4 Noisy Channel Coding I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw October 9, 2015 1 / 56 I-Hsiang Wang IT Lecture 4 The Channel Coding Problem

More information

A Nonuniform Quantization Scheme for High Speed SAR ADC Architecture

A Nonuniform Quantization Scheme for High Speed SAR ADC Architecture A Nonuniform Quantization Scheme for High Speed SAR ADC Architecture Youngchun Kim Electrical and Computer Engineering The University of Texas Wenjuan Guo Intel Corporation Ahmed H Tewfik Electrical and

More information

Introduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011

Introduction How it works Theory behind Compressed Sensing. Compressed Sensing. Huichao Xue. CS3750 Fall 2011 Compressed Sensing Huichao Xue CS3750 Fall 2011 Table of Contents Introduction From News Reports Abstract Definition How it works A review of L 1 norm The Algorithm Backgrounds for underdetermined linear

More information

Turbo Codes are Low Density Parity Check Codes

Turbo Codes are Low Density Parity Check Codes Turbo Codes are Low Density Parity Check Codes David J. C. MacKay July 5, 00 Draft 0., not for distribution! (First draft written July 5, 998) Abstract Turbo codes and Gallager codes (also known as low

More information

A FFAST framework for computing a k-sparse DFT in O(k log k) time using sparse-graph alias codes

A FFAST framework for computing a k-sparse DFT in O(k log k) time using sparse-graph alias codes A FFAST framework for computing a k-sparse DFT in O(k log k) time using sparse-graph alias codes Sameer Pawar and Kannan Ramchandran Dept. of Electrical Engineering and Computer Sciences University of

More information

Sparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery

Sparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery Sparse analysis Lecture V: From Sparse Approximation to Sparse Signal Recovery Anna C. Gilbert Department of Mathematics University of Michigan Connection between... Sparse Approximation and Compressed

More information

Expectation propagation for symbol detection in large-scale MIMO communications

Expectation propagation for symbol detection in large-scale MIMO communications Expectation propagation for symbol detection in large-scale MIMO communications Pablo M. Olmos olmos@tsc.uc3m.es Joint work with Javier Céspedes (UC3M) Matilde Sánchez-Fernández (UC3M) and Fernando Pérez-Cruz

More information

The uniform uncertainty principle and compressed sensing Harmonic analysis and related topics, Seville December 5, 2008

The uniform uncertainty principle and compressed sensing Harmonic analysis and related topics, Seville December 5, 2008 The uniform uncertainty principle and compressed sensing Harmonic analysis and related topics, Seville December 5, 2008 Emmanuel Candés (Caltech), Terence Tao (UCLA) 1 Uncertainty principles A basic principle

More information

Variational Inference (11/04/13)

Variational Inference (11/04/13) STA561: Probabilistic machine learning Variational Inference (11/04/13) Lecturer: Barbara Engelhardt Scribes: Matt Dickenson, Alireza Samany, Tracy Schifeling 1 Introduction In this lecture we will further

More information

Compressive Sensing and Beyond

Compressive Sensing and Beyond Compressive Sensing and Beyond Sohail Bahmani Gerorgia Tech. Signal Processing Compressed Sensing Signal Models Classics: bandlimited The Sampling Theorem Any signal with bandwidth B can be recovered

More information

Loopy Belief Propagation for Bipartite Maximum Weight b-matching

Loopy Belief Propagation for Bipartite Maximum Weight b-matching Loopy Belief Propagation for Bipartite Maximum Weight b-matching Bert Huang and Tony Jebara Computer Science Department Columbia University New York, NY 10027 Outline 1. Bipartite Weighted b-matching 2.

More information

CS6304 / Analog and Digital Communication UNIT IV - SOURCE AND ERROR CONTROL CODING PART A 1. What is the use of error control coding? The main use of error control coding is to reduce the overall probability

More information

LDPC Codes. Intracom Telecom, Peania

LDPC Codes. Intracom Telecom, Peania LDPC Codes Alexios Balatsoukas-Stimming and Athanasios P. Liavas Technical University of Crete Dept. of Electronic and Computer Engineering Telecommunications Laboratory December 16, 2011 Intracom Telecom,

More information

An Introduction to (Network) Coding Theory

An Introduction to (Network) Coding Theory An to (Network) Anna-Lena Horlemann-Trautmann University of St. Gallen, Switzerland April 24th, 2018 Outline 1 Reed-Solomon Codes 2 Network Gabidulin Codes 3 Summary and Outlook A little bit of history

More information

Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes

Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes Adaptive Cut Generation for Improved Linear Programming Decoding of Binary Linear Codes Xiaojie Zhang and Paul H. Siegel University of California, San Diego, La Jolla, CA 9093, U Email:{ericzhang, psiegel}@ucsd.edu

More information

MATH3302. Coding and Cryptography. Coding Theory

MATH3302. Coding and Cryptography. Coding Theory MATH3302 Coding and Cryptography Coding Theory 2010 Contents 1 Introduction to coding theory 2 1.1 Introduction.......................................... 2 1.2 Basic definitions and assumptions..............................

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 3 Linear

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project

More information

Stochastic geometry and random matrix theory in CS

Stochastic geometry and random matrix theory in CS Stochastic geometry and random matrix theory in CS IPAM: numerical methods for continuous optimization University of Edinburgh Joint with Bah, Blanchard, Cartis, and Donoho Encoder Decoder pair - Encoder/Decoder

More information

Greedy Signal Recovery and Uniform Uncertainty Principles

Greedy Signal Recovery and Uniform Uncertainty Principles Greedy Signal Recovery and Uniform Uncertainty Principles SPIE - IE 2008 Deanna Needell Joint work with Roman Vershynin UC Davis, January 2008 Greedy Signal Recovery and Uniform Uncertainty Principles

More information

Lecture 6: Expander Codes

Lecture 6: Expander Codes CS369E: Expanders May 2 & 9, 2005 Lecturer: Prahladh Harsha Lecture 6: Expander Codes Scribe: Hovav Shacham In today s lecture, we will discuss the application of expander graphs to error-correcting codes.

More information

Lecture 4: Proof of Shannon s theorem and an explicit code

Lecture 4: Proof of Shannon s theorem and an explicit code CSE 533: Error-Correcting Codes (Autumn 006 Lecture 4: Proof of Shannon s theorem and an explicit code October 11, 006 Lecturer: Venkatesan Guruswami Scribe: Atri Rudra 1 Overview Last lecture we stated

More information

UC Berkeley Department of Electrical Engineering and Computer Science Department of Statistics. EECS 281A / STAT 241A Statistical Learning Theory

UC Berkeley Department of Electrical Engineering and Computer Science Department of Statistics. EECS 281A / STAT 241A Statistical Learning Theory UC Berkeley Department of Electrical Engineering and Computer Science Department of Statistics EECS 281A / STAT 241A Statistical Learning Theory Solutions to Problem Set 2 Fall 2011 Issued: Wednesday,

More information

Linear Regression and Its Applications

Linear Regression and Its Applications Linear Regression and Its Applications Predrag Radivojac October 13, 2014 Given a data set D = {(x i, y i )} n the objective is to learn the relationship between features and the target. We usually start

More information

Algorithms for sparse analysis Lecture I: Background on sparse approximation

Algorithms for sparse analysis Lecture I: Background on sparse approximation Algorithms for sparse analysis Lecture I: Background on sparse approximation Anna C. Gilbert Department of Mathematics University of Michigan Tutorial on sparse approximations and algorithms Compress data

More information

Compute the Fourier transform on the first register to get x {0,1} n x 0.

Compute the Fourier transform on the first register to get x {0,1} n x 0. CS 94 Recursive Fourier Sampling, Simon s Algorithm /5/009 Spring 009 Lecture 3 1 Review Recall that we can write any classical circuit x f(x) as a reversible circuit R f. We can view R f as a unitary

More information

Image Compression. 1. Introduction. Greg Ames Dec 07, 2002

Image Compression. 1. Introduction. Greg Ames Dec 07, 2002 Image Compression Greg Ames Dec 07, 2002 Abstract Digital images require large amounts of memory to store and, when retrieved from the internet, can take a considerable amount of time to download. The

More information

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding

CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding CS264: Beyond Worst-Case Analysis Lecture #11: LP Decoding Tim Roughgarden October 29, 2014 1 Preamble This lecture covers our final subtopic within the exact and approximate recovery part of the course.

More information

Exploiting Structure in Wavelet-Based Bayesian Compressive Sensing

Exploiting Structure in Wavelet-Based Bayesian Compressive Sensing Exploiting Structure in Wavelet-Based Bayesian Compressive Sensing 1 Lihan He and Lawrence Carin Department of Electrical and Computer Engineering Duke University, Durham, NC 2778-291 USA {lihan,lcarin}@ece.duke.edu

More information

Clustering with k-means and Gaussian mixture distributions

Clustering with k-means and Gaussian mixture distributions Clustering with k-means and Gaussian mixture distributions Machine Learning and Object Recognition 2017-2018 Jakob Verbeek Clustering Finding a group structure in the data Data in one cluster similar to

More information

Optimization for Compressed Sensing

Optimization for Compressed Sensing Optimization for Compressed Sensing Robert J. Vanderbei 2014 March 21 Dept. of Industrial & Systems Engineering University of Florida http://www.princeton.edu/ rvdb Lasso Regression The problem is to solve

More information

Compressed Sensing: Lecture I. Ronald DeVore

Compressed Sensing: Lecture I. Ronald DeVore Compressed Sensing: Lecture I Ronald DeVore Motivation Compressed Sensing is a new paradigm for signal/image/function acquisition Motivation Compressed Sensing is a new paradigm for signal/image/function

More information

Introducing Low-Density Parity-Check Codes

Introducing Low-Density Parity-Check Codes Introducing Low-Density Parity-Check Codes Sarah J. Johnson School of Electrical Engineering and Computer Science The University of Newcastle Australia email: sarah.johnson@newcastle.edu.au Topic 1: Low-Density

More information

5. Density evolution. Density evolution 5-1

5. Density evolution. Density evolution 5-1 5. Density evolution Density evolution 5-1 Probabilistic analysis of message passing algorithms variable nodes factor nodes x1 a x i x2 a(x i ; x j ; x k ) x3 b x4 consider factor graph model G = (V ;

More information

Recent Developments in Compressed Sensing

Recent Developments in Compressed Sensing Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline

More information

Fountain Uncorrectable Sets and Finite-Length Analysis

Fountain Uncorrectable Sets and Finite-Length Analysis Fountain Uncorrectable Sets and Finite-Length Analysis Wen Ji 1, Bo-Wei Chen 2, and Yiqiang Chen 1 1 Beijing Key Laboratory of Mobile Computing and Pervasive Device Institute of Computing Technology, Chinese

More information

AN INTRODUCTION TO COMPRESSIVE SENSING

AN INTRODUCTION TO COMPRESSIVE SENSING AN INTRODUCTION TO COMPRESSIVE SENSING Rodrigo B. Platte School of Mathematical and Statistical Sciences APM/EEE598 Reverse Engineering of Complex Dynamical Networks OUTLINE 1 INTRODUCTION 2 INCOHERENCE

More information

A Polynomial-Time Algorithm for Pliable Index Coding

A Polynomial-Time Algorithm for Pliable Index Coding 1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n

More information

Quasi-cyclic Low Density Parity Check codes with high girth

Quasi-cyclic Low Density Parity Check codes with high girth Quasi-cyclic Low Density Parity Check codes with high girth, a work with Marta Rossi, Richard Bresnan, Massimilliano Sala Summer Doctoral School 2009 Groebner bases, Geometric codes and Order Domains Dept

More information

Sparse signal recovery using sparse random projections

Sparse signal recovery using sparse random projections Sparse signal recovery using sparse random projections Wei Wang Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2009-69 http://www.eecs.berkeley.edu/pubs/techrpts/2009/eecs-2009-69.html

More information

Network Coding and Schubert Varieties over Finite Fields

Network Coding and Schubert Varieties over Finite Fields Network Coding and Schubert Varieties over Finite Fields Anna-Lena Horlemann-Trautmann Algorithmics Laboratory, EPFL, Schweiz October 12th, 2016 University of Kentucky What is this talk about? 1 / 31 Overview

More information

9 Forward-backward algorithm, sum-product on factor graphs

9 Forward-backward algorithm, sum-product on factor graphs Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.438 Algorithms For Inference Fall 2014 9 Forward-backward algorithm, sum-product on factor graphs The previous

More information

Low-density parity-check codes

Low-density parity-check codes Low-density parity-check codes From principles to practice Dr. Steve Weller steven.weller@newcastle.edu.au School of Electrical Engineering and Computer Science The University of Newcastle, Callaghan,

More information

MCMC Sampling for Bayesian Inference using L1-type Priors

MCMC Sampling for Bayesian Inference using L1-type Priors MÜNSTER MCMC Sampling for Bayesian Inference using L1-type Priors (what I do whenever the ill-posedness of EEG/MEG is just not frustrating enough!) AG Imaging Seminar Felix Lucka 26.06.2012 , MÜNSTER Sampling

More information

COMPRESSIVE SAMPLING USING EM ALGORITHM. Technical Report No: ASU/2014/4

COMPRESSIVE SAMPLING USING EM ALGORITHM. Technical Report No: ASU/2014/4 COMPRESSIVE SAMPLING USING EM ALGORITHM ATANU KUMAR GHOSH, ARNAB CHAKRABORTY Technical Report No: ASU/2014/4 Date: 29 th April, 2014 Applied Statistics Unit Indian Statistical Institute Kolkata- 700018

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 12 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted for noncommercial,

More information

Inverse problems and sparse models (6/6) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France.

Inverse problems and sparse models (6/6) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France. Inverse problems and sparse models (6/6) Rémi Gribonval INRIA Rennes - Bretagne Atlantique, France remi.gribonval@inria.fr Overview of the course Introduction sparsity & data compression inverse problems

More information

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi

Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 41 Pulse Code Modulation (PCM) So, if you remember we have been talking

More information

(17) (18)

(17) (18) Module 4 : Solving Linear Algebraic Equations Section 3 : Direct Solution Techniques 3 Direct Solution Techniques Methods for solving linear algebraic equations can be categorized as direct and iterative

More information

CS on CS: Computer Science insights into Compresive Sensing (and vice versa) Piotr Indyk MIT

CS on CS: Computer Science insights into Compresive Sensing (and vice versa) Piotr Indyk MIT CS on CS: Computer Science insights into Compresive Sensing (and vice versa) Piotr Indyk MIT Sparse Approximations Goal: approximate a highdimensional vector x by x that is sparse, i.e., has few nonzero

More information

Codes on graphs. Chapter Elementary realizations of linear block codes

Codes on graphs. Chapter Elementary realizations of linear block codes Chapter 11 Codes on graphs In this chapter we will introduce the subject of codes on graphs. This subject forms an intellectual foundation for all known classes of capacity-approaching codes, including

More information

GREEDY SIGNAL RECOVERY REVIEW

GREEDY SIGNAL RECOVERY REVIEW GREEDY SIGNAL RECOVERY REVIEW DEANNA NEEDELL, JOEL A. TROPP, ROMAN VERSHYNIN Abstract. The two major approaches to sparse recovery are L 1-minimization and greedy methods. Recently, Needell and Vershynin

More information

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery

Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Compressibility of Infinite Sequences and its Interplay with Compressed Sensing Recovery Jorge F. Silva and Eduardo Pavez Department of Electrical Engineering Information and Decision Systems Group Universidad

More information

Compressed Sensing and Sparse Recovery

Compressed Sensing and Sparse Recovery ELE 538B: Sparsity, Structure and Inference Compressed Sensing and Sparse Recovery Yuxin Chen Princeton University, Spring 217 Outline Restricted isometry property (RIP) A RIPless theory Compressed sensing

More information

High-dimensional graphical model selection: Practical and information-theoretic limits

High-dimensional graphical model selection: Practical and information-theoretic limits 1 High-dimensional graphical model selection: Practical and information-theoretic limits Martin Wainwright Departments of Statistics, and EECS UC Berkeley, California, USA Based on joint work with: John

More information

27 : Distributed Monte Carlo Markov Chain. 1 Recap of MCMC and Naive Parallel Gibbs Sampling

27 : Distributed Monte Carlo Markov Chain. 1 Recap of MCMC and Naive Parallel Gibbs Sampling 10-708: Probabilistic Graphical Models 10-708, Spring 2014 27 : Distributed Monte Carlo Markov Chain Lecturer: Eric P. Xing Scribes: Pengtao Xie, Khoa Luu In this scribe, we are going to review the Parallel

More information

Iterative Encoding of Low-Density Parity-Check Codes

Iterative Encoding of Low-Density Parity-Check Codes Iterative Encoding of Low-Density Parity-Check Codes David Haley, Alex Grant and John Buetefuer Institute for Telecommunications Research University of South Australia Mawson Lakes Blvd Mawson Lakes SA

More information

Recovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm

Recovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm Recovery of Sparse Signals from Noisy Measurements Using an l p -Regularized Least-Squares Algorithm J. K. Pant, W.-S. Lu, and A. Antoniou University of Victoria August 25, 2011 Compressive Sensing 1 University

More information

Signal Recovery from Permuted Observations

Signal Recovery from Permuted Observations EE381V Course Project Signal Recovery from Permuted Observations 1 Problem Shanshan Wu (sw33323) May 8th, 2015 We start with the following problem: let s R n be an unknown n-dimensional real-valued signal,

More information

K User Interference Channel with Backhaul

K User Interference Channel with Backhaul 1 K User Interference Channel with Backhaul Cooperation: DoF vs. Backhaul Load Trade Off Borna Kananian,, Mohammad A. Maddah-Ali,, Babak H. Khalaj, Department of Electrical Engineering, Sharif University

More information

6 The SVD Applied to Signal and Image Deblurring

6 The SVD Applied to Signal and Image Deblurring 6 The SVD Applied to Signal and Image Deblurring We will discuss the restoration of one-dimensional signals and two-dimensional gray-scale images that have been contaminated by blur and noise. After an

More information