Multidimensional Sparse Fourier Transform Based on the Fourier Projection-Slice Theorem
Shaogang Wang, Student Member, IEEE, Vishal M. Patel, Senior Member, IEEE, and Athina Petropulu, Fellow, IEEE

Abstract: We propose MARS-SFT, a sparse Fourier transform for multidimensional, frequency-domain sparse signals, inspired by the idea of the Fourier projection-slice theorem. MARS-SFT identifies frequencies by operating on one-dimensional slices of the discrete-time domain data, taken along specially designed lines; those lines are parametrized by slopes that are randomly generated from a set at runtime. The DFTs of the data slices represent DFT projections onto the lines along which the slices were taken. By designing the line lengths and slopes so that they allow for orthogonal and uniform frequency projections, the multidimensional frequencies can be recovered from their projections with low sample and computational complexity. We show theoretically that the large number of degrees of freedom of frequency projections allows for the recovery of less sparse signals. Although the theoretical results are obtained for uniformly distributed frequencies, empirical evidence suggests that MARS-SFT is also effective in recovering clustered frequencies. We also propose an extension of MARS-SFT to address noisy signals that contain off-grid frequencies, and demonstrate its performance in digital beamforming automotive radar signal processing. In that context, the robust MARS-SFT is used to identify the range, velocity and angular parameters of targets with low sample and computational complexity.

Index Terms: Multidimensional signal processing, sparse Fourier transform, Fourier projection-slice theorem, automotive radar signal processing.

I. INTRODUCTION

Conventional signal processing methods in radar, sonar, and medical imaging systems usually involve multidimensional discrete Fourier transforms (DFT), which are typically implemented by the fast Fourier transform (FFT).
The sample complexity of the FFT is O(N), where N is the number of samples in the multidimensional sample space. For N equal to a power of 2, the computational complexity of the FFT is O(N log N). Recently, by leveraging the sparsity of signals in the frequency domain, the sparse Fourier transform (SFT), a family of low-complexity DFT algorithms, has been proposed [1]-[8]. The state-of-the-art SFT algorithms [5], [7] achieve sample complexity of O(K) and computational complexity of O(K log K) for exactly K-sparse (in the frequency domain) signals. When K << N, those SFT algorithms achieve significant savings in both samples and computation compared to the FFT. SFT algorithms have been investigated for several applications, including a fast Global Positioning System (GPS) receiver, wide-band spectrum sensing, and multidimensional radar signal processing [9]-[13].

Shaogang Wang, Vishal M. Patel and Athina Petropulu are with the Department of Electrical and Computer Engineering at Rutgers University, Piscataway, NJ, USA. {shaogang.wang, vishal.m.patel, athinap}@rutgers.edu. The work was supported by ARO grant W9NF-6--26, NSF Grant ECCS-48437, the China Scholarship Council and the Shanghai Institute of Spaceflight Electronics Technology.

In all SFT algorithms, the reduction of sample and computational complexity is achieved by reducing the input data samples. This is implemented via a well designed, randomized subsampling procedure, which leverages the resulting frequency-domain aliasing. The significant frequencies contained in the original signal are then localized and the corresponding DFT coefficients are estimated with low-complexity operations.
This subsampling-localization-estimation procedure is carried out in an iterative manner in several SFT algorithms [1], [2], [5], [7], while in other SFT algorithms [3], [10]-[13], localization and estimation are implemented in one shot after gathering sufficient copies of subsampled signals corresponding to different subsampling parameters, e.g., subsample rate, offset and number of samples. Generally, iterative SFT algorithms exhibit lower complexity than one-shot SFT algorithms, since in the former, in each iteration, the contribution of the recovered frequencies is removed from the signal, which yields a sparser signal (an easier problem) in the next iteration.

Multidimensional signal processing requires multidimensional SFT algorithms. Most of the existing SFT algorithms, however, are designed for one-dimensional (1-D) signals and their extension to multidimensional signals is typically not straightforward. This is because the SFT algorithms are not separable in each dimension, due to the fact that detection of significant frequencies in the subsampled signal must be considered jointly for all the dimensions [11]. Multidimensional SFT algorithms are investigated in [5], [12], [13]. The sample-optimal SFT (SO-SFT) of [5] follows the subsampling-localization-estimation iteration, while the SFT algorithms of [12], [13] are one-shot approaches. SO-SFT achieves the sample and computational complexity lower bounds of all known SFT algorithms by reducing a 2-dimensional (2-D) DFT into 1-D DFTs along a few columns and rows of a data matrix; in the frequency domain, this results in projections of 2-D frequencies onto the corresponding columns and rows of the matrix.
Under the assumption that the frequencies are sparse and uniformly distributed, the 2-D frequencies are not likely to be projected to the same bin (we will refer to this as a collision), and thus the 2-D frequencies can be effectively recovered from their 1-D projections (see Section III-A for details). The DFT along columns and rows provides two degrees of freedom; a frequency collision has low probability of occurring in both the column and row directions. However, when frequencies are less sparse, or when they are clustered, there
is a high probability that a set of frequencies will collide in both the row and column directions; this is referred to as a deadlock situation [5] and results in unrecoverable frequencies (see Fig. 1). To reduce the probability of a deadlock, the SFT of [12], [13] introduces more degrees of freedom in projections by applying 1-D DFTs to data samples extracted along some lines of predefined and deterministic slopes, as well as along the axes of the data cube. However, the limited choice of line slopes in [12], [13] is still an obstacle in addressing less sparse signals. Moreover, the sample and computational complexity of [12], [13] are higher than those of SO-SFT, as the former applies the one-shot approach for frequency recovery, while the latter recovers the frequencies iteratively. In addition to the iterative approach, the low-cost frequency localization technique adopted in SO-SFT further contributes to the low complexity of the algorithm. Specifically, SO-SFT applies the OFDM-trick (phase encoding) [2], [4], which effectively encodes the significant frequencies into the phase difference of a pair of DFTs applied on two datasets, obtained by subsampling the data with the same subsample rate but different offsets. In the exactly sparse case, the encoded frequencies can be decoded trivially with a low-complexity (O(1)) operation (see Section III-A for details).

In this work we propose MARS-SFT (MultidimensionAl Random Slice based Sparse Fourier Transform), which enjoys low complexity while avoiding the limitations of the aforementioned methods, i.e., it can handle less sparse data in the frequency domain and accommodate frequencies that are nonuniformly distributed. MARS-SFT uses the low-complexity frequency localization framework of SO-SFT and extends the multiple-slopes idea of [12], [13] by using lines parameterized by slopes that are randomly generated at runtime from a set of sufficiently large support.
This is not trivial, since the line length and slope set must be carefully designed to enable an orthogonal and uniform frequency projection. MARS-SFT can be viewed as a low-complexity, Fourier projection-slice approach for signals that are sparse in the frequency domain. In MARS-SFT, the DFT of a 1-D slice of the data is the projection of the D-D DFT of the data onto the same line along which the time-domain slice was taken. The classical Fourier projection-slice based method either reconstructs the frequency-domain signal via interpolation of frequency-domain slices, or reconstructs the time-domain samples by solving a system of linear equations relating the DFT along projections to the time-domain samples. Different from the Fourier projection-slice based methods, the proposed MARS-SFT aims to reconstruct the signal directly based on frequency-domain projections with low-complexity operations; this is achieved by leveraging the sparsity of the signal in the frequency domain.

Another body of related work, referred to as SFT based on rank-1 lattice sampling [16], [14], [15], also considers the problem of fast reconstruction of the underlying multidimensional signal based on samples along rank-1 lattices, i.e., lines. In [14], [15], the coefficients of the multidimensional DFT of the data can be efficiently calculated by applying a DFT on samples along suitable lines, provided that the frequencies are known. In particular, in [14], the exact evaluation of the DFT coefficients can be accomplished by calculating the DFT along a single line; such a line is called the reconstructing rank-1 lattice and can be found for any given sparse frequency set [14]. However, finding a reconstructing rank-1 lattice is non-trivial when the frequency set is unknown. The unknown-frequency case is addressed in [16], at the expense of high complexity due to the complication of finding a reconstructing rank-1 lattice.
Compared with the algorithms of [14], [15], the proposed MARS-SFT does not assume that the underlying frequency set is known. The frequency set as well as the corresponding DFT coefficients are estimated progressively via DFTs along lines. Compared with the SFT of [16], MARS-SFT is based on multiple lines with randomized parameters and does not attempt to reconstruct the signal using a single line, which avoids the complication of locating a reconstructing rank-1 lattice and thus achieves low complexity. In addition, the rank-1 lattice-based SFT algorithms assume that samples of the signal can be obtained at any arbitrary location, which is rather difficult to achieve in hardware [16]. In contrast, MARS-SFT assumes that the samples are extracted from a predefined uniform sampling grid. Hence, MARS-SFT is less restrictive in sampling and more applicable to existing systems, which employ uniform sampling in each dimension.

While MARS-SFT considers the ideal scenario, i.e., frequency-domain sparse data containing frequencies on the DFT grid, in realistic applications the data is usually noisy and contains off-grid frequencies. One example of such data is the received signal of a digital beamforming automotive radar, which usually employs a frequency modulated continuous waveform (FMCW). After demodulation of the returned signal, each radar target can be expressed as a D-D complex sinusoid [17], whose frequency in each dimension is related to target parameters, e.g., range, Doppler and direction of arrival. When the number of targets is much smaller than the number of samples, such a signal is sparse in the D-D frequency domain. Due to the continuous nature of those parameters, the frequencies are also continuous and thus are typically off-grid. Meanwhile, the radar signal contains noise, which makes the signal only approximately sparse.
MARS-SFT suffers from the frequency leakage caused by the off-grid frequencies; also, the frequency localization procedure of MARS-SFT is prone to errors, since the OFDM-trick is sensitive to noise [2]. We address these issues by extending MARS-SFT to a robust version. Robust MARS-SFT employs a windowing technique to reduce the frequency leakage caused by the off-grid frequencies, and a voting-based frequency decoding procedure to significantly reduce the localization error stemming from noise.

The off-grid frequencies are also addressed in [11], where a robust multidimensional SFT algorithm, termed RSFT, is proposed. In RSFT, the computational savings are achieved by folding the input D-D data cube into a much smaller data cube, on which a reduced-size D-D FFT is applied. Although RSFT is more computationally efficient than the FFT-based methods, its sample complexity is the same as that of the FFT-based algorithms. Essentially, the high sample complexity of RSFT is due to its two stages of windowing procedures, which are applied to the entire data cube to suppress the
frequency leakage. In the proposed robust MARS-SFT, instead of applying the multidimensional window to the entire data, the window, while still designed for the full-sized data, is applied only to samples along lines, which does not cause overhead in sample complexity.

This paper makes the following contributions. We propose MARS-SFT, a multidimensional, low-complexity SFT algorithm that is based on the Fourier projection-slice theorem. Compared to the SFT algorithms of [5], [12], [13], which project the multidimensional DFT of the data onto deterministic lines, the frequency-domain projections in MARS-SFT are randomized. We show theoretically that the large number of degrees of freedom of frequency projections allows for the recovery of less sparse signals. Although the theoretical results are obtained for uniformly distributed frequencies, empirical evidence suggests that MARS-SFT is also effective for the recovery of clustered frequencies. Also, while the SFT of [5], [12], [13] requires the data to be equal-sized in each dimension, MARS-SFT applies to arbitrary-sized data, which is less restrictive. We further extend MARS-SFT to a robust version, to address noisy data containing off-grid frequencies, arising in applications such as digital beamforming automotive radar signal processing. The feasibility of applying robust MARS-SFT in such an application is demonstrated via simulations.

Preliminary results on MARS-SFT and robust MARS-SFT appeared in [18] and [19], respectively. Here we provide theoretical proofs for the claims of [18], [19] and a convergence analysis, and establish connections between MARS-SFT and the one-projection theorem [20], i.e., when the two dimensions of the signal are co-prime numbers, the signal can be perfectly reconstructed by MARS-SFT using only one projection. The MATLAB code of the proposed methods is made publicly available.

Notation: We use lower-case (upper-case) bold letters to denote vectors (matrices). [ ]^T denotes transpose.
The modulo-N operation is denoted by [ ]_N. [S] refers to the integer set {0, ..., S-1}. The cardinality of a set S is denoted by |S|. The N-point DFT of a signal x, normalized by N, is denoted by x̂; all DFTs refer to this normalized version. ||W||_1, ||W||_2 are the l_1 and l_2 norms of the matrix W, respectively. We denote the least common multiple of N_0, N_1 by LCM(N_0, N_1).

The paper is organized as follows. The signal model is provided in Section II. The proposed MARS-SFT is described in Section III and its robust extension in Section IV. Validation of the theoretical results via simulations is provided in Section V, and the application of robust MARS-SFT to automotive digital beamforming radar signal processing is presented in Section VI. Concluding remarks are made in Section VII.

II. SIGNAL MODEL AND PROBLEM FORMULATION

For simplicity, in this section we present the ideas for 2-D signals. The generalization to higher dimensions, i.e., D-D cases with D > 2, is straightforward. Let us consider the following 2-D signal model, which is a superposition of K 2-D complex sinusoids, i.e.,

x(n) = Σ_{(a,ω)∈S} a e^{j n^T ω}, n = [n_0, n_1]^T ∈ X = [N_0] × [N_1],    (1)

where N_0, N_1 denote the sample lengths of the two dimensions, respectively. (a, ω) represents a 2-D sinusoid whose amplitude is a, with a ∈ C, a ≠ 0, and whose frequency is ω = [ω_0, ω_1]^T = [2πm_0/N_0, 2πm_1/N_1]^T with [m_0, m_1]^T ∈ X. The set S with |S| = K includes all the 2-D sinusoids. We assume that the signal is sparse in the frequency domain, i.e., K << N = N_0 N_1, and that the frequencies are uniformly distributed. In the above model, the frequencies lie on a grid and there is no noise.
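The exactly sparse model (1) can be sketched numerically. The following minimal example (sizes and sparsity chosen here for illustration, not taken from the paper) builds a superposition of K on-grid sinusoids and checks that its normalized 2-D DFT has exactly K significant bins:

```python
import numpy as np

# Sketch of signal model (1): x(n) is a superposition of K on-grid complex
# sinusoids, so its normalized 2-D DFT is exactly K-sparse.
rng = np.random.default_rng(0)
N0, N1, K = 16, 8, 4

# Draw K distinct on-grid frequency index pairs [m0, m1] and complex amplitudes.
idx = rng.choice(N0 * N1, size=K, replace=False)
m0, m1 = idx // N1, idx % N1
amps = rng.standard_normal(K) + 1j * rng.standard_normal(K)

n0 = np.arange(N0)[:, None]
n1 = np.arange(N1)[None, :]
x = np.zeros((N0, N1), dtype=complex)
for a, p, q in zip(amps, m0, m1):
    x += a * np.exp(2j * np.pi * (n0 * p / N0 + n1 * q / N1))

# Normalized 2-D DFT: exactly K significant bins.
xhat = np.fft.fft2(x) / (N0 * N1)
print(int(np.sum(np.abs(xhat) > 1e-9)))  # K
```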
A more realistic signal model, addressing continuous-valued frequencies in [0, 2π)^2 and also noise, is the following:

r(n) = y(n) + n(n) = Σ_{(a,ω)∈S} a e^{j n^T ω} + n(n), n ∈ X,    (2)

where y(n) = Σ_{(a,ω)∈S} a e^{j n^T ω} is the superposition of K = |S| continuous-frequency sinusoids; (a, ω) denotes a significant 2-D sinusoid in S, whose complex amplitude and frequency are a and ω = [ω_0, ω_1]^T ∈ [0, 2π)^2, respectively, and it holds that 0 < a_min ≤ |a| ≤ a_max. The noise, n(n), is assumed to be i.i.d., circularly symmetric Gaussian, i.e., CN(0, σ_n^2). The SNR of a sinusoid with amplitude a is defined as SNR ≜ (|a|/σ_n)^2.

Conventionally, S can be estimated via a 2-D N_0 × N_1-point DFT applied to the signal of (2), after windowing the signal with a 2-D window w(n), used to suppress the frequency leakage generated by off-grid frequencies. Assuming that the peak-to-side-lobe ratio (PSR) of the window is large enough that the side-lobes of each frequency in S can be neglected in the DFT domain, the signal contribution in the DFT domain can be viewed as that of a set of on-grid frequencies, i.e., S̄ ≜ {(a, ω) : ω = [2πm_0/N_0, 2πm_1/N_1]^T, [m_0, m_1]^T ∈ X}, with K ≤ K̄ = |S̄| << N. Hence, the sample-domain signal component associated with the window w(n) and S̄ can be approximated by (1). Note that since the windowing degrades the frequency resolution, each continuous-valued frequency in S is related to a cluster of digital frequencies in S̄; S̄ can be estimated from the DFT of the signal, and then leads to the frequencies in S via quadratic interpolation [21].

III. THE PROPOSED MARS-SFT ALGORITHM

The proposed MARS-SFT is a generalization of SO-SFT. In this section, we first introduce the basics of SO-SFT, then provide the details of MARS-SFT and analyze its properties.

A. SO-SFT

In SO-SFT [5], in order to recover S of (1), 1-D DFTs are applied to a subset of columns and rows of the data matrix.
The N_0-point DFT of the i-th, i ∈ [N_1], column of the data equals

ĉ_i(m) = (1/N_0) Σ_{l∈[N_0]} x(l, i) e^{-j(2π/N_0)ml}
       = Σ_{(a,ω)∈S} a e^{j(2π/N_1) m_1 i} (1/N_0) Σ_{l∈[N_0]} e^{j(2π/N_0) l (m_0 - m)}
       = Σ_{(a,ω)∈S} a e^{j(2π/N_1) m_1 i} δ(m - m_0), m ∈ [N_0],    (3)

where ω = [2πm_0/N_0, 2πm_1/N_1]^T, [m_0, m_1]^T ∈ X, and δ( ) is the unit impulse function. Hence ĉ_i(m_0) can be viewed as the summation of modulated amplitudes of the 2-D sinusoids whose row frequency indices equal m_0. In other words, ĉ_i(m), m ∈ [N_0], is a projection of the 2-D spectrum, x̂(m_0, m_1), [m_0, m_1]^T ∈ X, along the column dimension. Similarly, the N_1-point DFT applied to a row of (1) is a projection of the 2-D spectrum along the row dimension.

Since the signal is sparse in the frequency domain, if ĉ_i(m) ≠ 0, with high probability there will be only one significant frequency projected to the frequency bin m; in other words, the frequency bin is 1-sparse, and ĉ_i(m) reduces to ĉ_i(m) = ĉ_i(m_0) = a e^{j(2π/N_1) m_1 i}. The amplitude, a, can be determined from the m_0-th entry of the DFT of the 0-th column, i.e., a = ĉ_0(m_0), and the other frequency component, m_1, is encoded in the phase difference between the m_0-th entries of the DFTs of the 0-th and the 1-st columns, and can be decoded as m_1 = [φ(ĉ_1(m_0)/ĉ_0(m_0)) N_1/(2π)]_{N_1}, where φ(x) denotes the phase of x. Whether a bin is 1-sparse can be effectively tested by comparing |ĉ_0(m)| and |ĉ_1(m)|; ĉ_i(m) is 1-sparse almost surely when |ĉ_0(m)| = |ĉ_1(m)| ≠ 0. This frequency decoding technique is referred to as the OFDM-trick [4]. The contribution of the recovered 2-D sinusoids is removed from the signal, so that the subsequent processing can be applied to a sparser signal, making the problem easier to solve.

A frequency bin that is not 1-sparse based on column processing may be 1-sparse based on row processing.
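The column-projection and OFDM-trick decoding just described can be sketched as follows (an illustrative example, not the authors' implementation; sizes and the frequency indices are chosen here):

```python
import numpy as np

# Sketch of SO-SFT's column projection and OFDM-trick: one on-grid sinusoid,
# normalized DFTs of columns 0 and 1, then decoding of m1 from the phase
# difference at the 1-sparse bin m0.
N0, N1 = 16, 8
m0_true, m1_true, a_true = 5, 3, 2.0 - 1.0j

n0 = np.arange(N0)[:, None]
n1 = np.arange(N1)[None, :]
x = a_true * np.exp(2j * np.pi * (n0 * m0_true / N0 + n1 * m1_true / N1))

# Normalized N0-point DFTs of columns i = 0 and i = 1, as in (3).
c0 = np.fft.fft(x[:, 0]) / N0
c1 = np.fft.fft(x[:, 1]) / N0

m0 = int(np.argmax(np.abs(c0)))            # the bin where the energy lands
phase = np.angle(c1[m0] / c0[m0])          # equals 2*pi*m1/N1 (mod 2*pi)
m1 = int(round(phase * N1 / (2 * np.pi))) % N1
a = c0[m0]
print(m0, m1, a)                           # recovers 5, 3 and the amplitude
```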
Because the removal of frequencies in the column (row) processing may cause bins in the row (column) processing to become 1-sparse, SO-SFT runs iteratively, alternating between columns and rows, and the algorithm stops after a finite number of iterations. SO-SFT succeeds with high probability only when the frequencies are very sparse, and requires that either a row or a column of the DFT contains a 1-sparse bin. However, in many applications, the signal frequency exhibits a block sparsity pattern [22], i.e., the significant frequencies are clustered. In those cases, even when the signal is very sparse, a 1-sparse bin may not exist; this is referred to as a deadlock case [5].

B. MARS-SFT

SO-SFT reduces a 2-D DFT into 1-D DFTs of the columns and rows of the input data matrix. The columns and the rows can be viewed as 1-D slices taken along discrete lines with slopes of 0 and ∞, respectively. In this section, by proposing MARS-SFT, we reduce the 2-D DFT into 1-D DFTs of data slices taken along discrete lines with random slopes.

Fig. 1. Demonstration of the projection of 2-D frequencies onto 1-D. The colored blocks mark significant frequencies. The projection onto the column or the row causes collisions, while the projection onto the diagonal creates 1-sparse bins.

The proposed MARS-SFT is an iterative algorithm; each iteration returns a subset of recovered 2-D sinusoids. After T iterations, MARS-SFT returns a set, Ŝ, which is an estimate of S of (1). As in SO-SFT, the sinusoids recovered in previous iterations are passed to the next iteration, and their contributions are removed from the signal in order to create a sparser signal.
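A deadlock in the spirit of Fig. 1 can be sketched at the index level (the grid size, cluster position and slope below are illustrative choices, not taken from the paper): four clustered frequencies collide in every row and column projection, while a projection induced by a line of a different slope separates them into 1-sparse bins.

```python
import numpy as np

# Illustrative deadlock example: a clustered 2x2 block of 2-D frequencies.
N0 = N1 = 8
freqs = [(2, 2), (2, 3), (3, 2), (3, 3)]

col_bins = [m0 for m0, m1 in freqs]    # projection along the column dimension
row_bins = [m1 for m0, m1 in freqs]    # projection along the row dimension

# Projection induced by a time-domain line with alpha = [1, 2]: frequencies
# sharing [m0*alpha0*L/N0 + m1*alpha1*L/N1]_L share a bin (here L = 8).
alpha0, alpha1, L = 1, 2, 8
line_bins = [(m0 * alpha0 + m1 * alpha1) % L for m0, m1 in freqs]

print(sorted(col_bins))    # [2, 2, 3, 3] -> every occupied bin collides
print(sorted(row_bins))    # [2, 2, 3, 3] -> every occupied bin collides
print(sorted(line_bins))   # four distinct bins -> all 1-sparse
```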
Within each iteration, the signal of (1) is sampled along an L-length discrete line, E(α, τ, l), l ∈ [L], with α = [α_0, α_1]^T, τ = [τ_0, τ_1]^T, which satisfies

[α_0 l + τ_0]_{N_0} = n_0, [α_1 l + τ_1]_{N_1} = n_1, l ∈ [L],    (4)

so that

n_1 = [(α_1/α_0)(n_0 - τ_0) + τ_1]_{N_1}.    (5)

Hence, E(α, τ, l), l ∈ [L], is a discrete line segment with slope α_1/α_0 and offset τ.

The sampled signal, representing a slice of the data along E(α, τ, l), l ∈ [L], can be expressed as

s(α, τ, l) ≜ x([α_0 l + τ_0]_{N_0}, [α_1 l + τ_1]_{N_1})
           = Σ_{(a,ω)∈S} a e^{j2π( m_0 [α_0 l + τ_0]_{N_0}/N_0 + m_1 [α_1 l + τ_1]_{N_1}/N_1 )}, l ∈ [L].    (6)

Note that a slice can wrap around within x(n), n ∈ X, due to the modulo operation, and the sampling points along the line are always on the grid X, since α, τ are on the grid.

Taking an L-point DFT of the data slice defined in (6), for m ∈ [L], we get

ŝ(α, τ, m) ≜ (1/L) Σ_{l∈[L]} s(α, τ, l) e^{-j2π lm/L}
           = Σ_{(a,ω)∈S} a e^{j2π( m_0 τ_0/N_0 + m_1 τ_1/N_1 )} (1/L) Σ_{l∈[L]} e^{j2π l ( m_0 α_0/N_0 + m_1 α_1/N_1 - m/L )}.    (7)

Let us assume that for all m ∈ [L] and α, [m_0, m_1]^T ∈ X,

f̂(m) ≜ (1/L) Σ_{l∈[L]} e^{j2π l ( m_0 α_0/N_0 + m_1 α_1/N_1 - m/L )} ∈ {0, 1}.    (8)
The above assumption holds when m_0 α_0/N_0 + m_1 α_1/N_1 is a multiple of 1/L, which can be expressed as

[L m_0 α_0/N_0 + L m_1 α_1/N_1]_L = m.    (9)

It is clear that L = LCM(N_0, N_1) satisfies (9), since in that case L/N_0, L/N_1 are integers. Moreover, LCM(N_0, N_1) is the minimum line length, L, that satisfies (9) for arbitrary α, [m_0, m_1]^T ∈ X; this can be proved by contradiction as follows. Assume that there is a length L such that L < LCM(N_0, N_1); then at least one of L/N_0, L/N_1 is not an integer. Without loss of generality, let us assume that L/N_0 ∉ Z. Then, for m_0 = 1, α_0 = 1, m_1 = 0, the left-hand side of (9) equals [L/N_0]_L ∉ [L], which contradicts the premise that (9) holds for any [m_0, m_1]^T, [α_0, α_1]^T ∈ X.

When f̂(m) = 1, i.e.,

[m_0 α_0/N_0 + m_1 α_1/N_1 - m/L]_1 = 0, [m_0, m_1]^T ∈ X,    (10)

(7) simplifies to

ŝ(α, τ, m) = Σ_{(a,ω)∈S: (10) holds} a e^{j2π( m_0 τ_0/N_0 + m_1 τ_1/N_1 )}, m ∈ [L],    (11)

which can be viewed as the 1-D projection of the 2-D frequencies satisfying (10). The solutions of (10) with respect to m_0, m_1 are equally spaced points lying on the line

E([α_1 L/N_1, -α_0 L/N_0]^T, [m_0, m_1]^T, l), l ∈ [L'],    (12)

where [m_0, m_1]^T ∈ X is one of the solutions of (10). The line of (12) is orthogonal to the line of (4). The orthogonality is necessary for the projected 2-D frequencies to be exactly recoverable. Moreover, for certain choices of α, such a projection is uniform, and L' = N/L. The uniformity of the projection means that the DFT coefficients of the N grid locations of the N_0 × N_1-point DFT are uniformly projected onto the L entries of the L-point DFT along a line. Compared with a non-uniform projection, the uniform projection creates more 1-sparse bins, which allows MARS-SFT to exactly reconstruct the signal in fewer iterations. The condition for an orthogonal and uniform projection is stated in the following lemma.

Lemma 1 (Condition for orthogonal and uniform projection): Consider the slice of the signal of (1), as defined in (6), with L = LCM(N_0, N_1), α ∈ A ⊂ X, τ ∈ X, where

A ≜ {α : α ∈ X; (α_0, α_1), (α_0, L/N_1), (α_1, L/N_0) are co-prime pairs}.
Then each entry of (11) is the projection of the samples of x̂(m), m ∈ P_m ⊂ X, where P_m, m ∈ [L], contains the sample locations satisfying (10). Moreover, |P_m| = N/L, and P_m ∩ P_{m'} = ∅ for m ≠ m', m, m' ∈ [L]. Thus, x̂(m), m ∈ X, is uniformly projected onto (11).

Proof: Please see Appendix A.1.

A slice satisfying Lemma 1 is the longest slice that does not contain any duplicated samples. Thus, the L-point DFT along such a slice captures the maximum information in the frequency domain with the least number of samples.

The set A defined in Lemma 1 contains a large number of elements, providing sufficient randomness for the frequency projection. For example, when N_0 = N_1 = 4, A = {[1, 1]^T, [1, 2]^T, [1, 3]^T, [2, 1]^T, [2, 3]^T, [3, 1]^T, [3, 2]^T}. This means that |A|/|X| ≈ 44% of all the possible values of α yield uniform projections. When N_0 = N_1 = 256, |A|/|X| ≈ 61%.

Fig. 2 shows an example of a time-domain line designed to allow an orthogonal and uniform projection. The corresponding frequency-domain line satisfying (10) for m = 0 is also shown; the two lines are orthogonal to each other and intersect at [0, 0]^T. The lengths of the time-domain and the frequency-domain lines are 16 and 8, respectively. Each line is composed of several line segments due to the modulo operation.

Fig. 2. An orthogonal pair of time- and frequency-domain lines. N_0 = 16, N_1 = 8, L = 16, α = [1, 3]^T, τ = [0, 0]^T.

Remark 1. In the L-point DFT of samples along a time-domain line with slope α_1/α_0, each entry represents a projection of the 2-D DFT along a line with slope -α_0 N_1/(α_1 N_0) in the N_0 × N_1-point DFT domain; the DFT-domain line is orthogonal to the time-domain line. This is closely related to the Fourier projection-slice theorem, which states that the Fourier transform of a projection is a slice of the Fourier transform of the projected object.
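The slope-set example above can be reproduced with a short sketch. The conditions follow Lemma 1; like the 7-element example in the text, slopes with a zero component are excluded here (an assumption made to match that example):

```python
from math import gcd

# Sketch reproducing the example of the slope set A for N0 = N1 = 4.
# Here L = LCM(4, 4) = 4, so L/N0 = L/N1 = 1 and the co-primality conditions
# with L/N0, L/N1 are trivially satisfied.
N0 = N1 = 4
L = 4
A = [
    (a0, a1)
    for a0 in range(1, N0)
    for a1 in range(1, N1)
    if gcd(a0, a1) == 1 and gcd(a0, L // N1) == 1 and gcd(a1, L // N0) == 1
]
print(A)                    # the 7 slope vectors listed in the text
print(len(A) / (N0 * N1))   # ~0.44, i.e., about 44% of all alpha values
```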
While the classical projection is in the time domain and the corresponding slice is in the frequency domain, in the MARS-SFT case the projection is in the DFT domain and the corresponding slice is in the sample (discrete-time) domain. The important difference between the Fourier projection-slice theorem and MARS-SFT is that while the former reconstructs the frequency domain of the signal via interpolation of frequency-domain slices, or reconstructs the time-domain samples by solving a system of linear equations relating the DFT along projections to the time-domain samples, the latter efficiently recovers the significant frequencies of the signal directly based on the DFTs of time-domain 1-D slices, i.e., samples along lines; this involves lower complexity.

As will be explained in the following, the efficiency of MARS-SFT is achieved by exploiting sparsity in the frequency domain. Let us assume that the signal is sparse in the frequency domain, i.e., |S| = O(L) << N. Then, with high probability,
|ŝ(α, τ, m)| = |ŝ(α, τ', m)| = |ŝ(α, τ'', m)| ≠ 0, where τ' ≜ [[τ_0 + 1]_{N_0}, τ_1]^T, τ'' ≜ [τ_0, [τ_1 + 1]_{N_1}]^T. Thus, the m-th bin is 1-sparse, and it holds that

ŝ(α, τ, m) = a e^{j2π( m_0 τ_0/N_0 + m_1 τ_1/N_1 )}, (a, ω) ∈ S.    (13)

In that case, the 2-D sinusoid, (a, ω), can be decoded as

m_0 = [ (N_0/2π) φ( ŝ(α, τ', m)/ŝ(α, τ, m) ) ]_{N_0},
m_1 = [ (N_1/2π) φ( ŝ(α, τ'', m)/ŝ(α, τ, m) ) ]_{N_1},    (14)
a = ŝ(α, τ, m) e^{-j2π( m_0 τ_0/N_0 + m_1 τ_1/N_1 )}.

This is the OFDM-trick adapted to MARS-SFT; this design requires sampling along three lines of the same slope but different offsets, allowing the frequency components to be decoded independently in each dimension.

In order to recover all the sinusoids in S efficiently, each iteration of MARS-SFT adopts a random choice of line slope from the set A defined in Lemma 1. Furthermore, the contribution of the sinusoids recovered in previous iterations is removed via a construction-subtraction approach, so that the signal becomes sparser in subsequent iterations. Specifically, assuming that for the current iteration the line slope and offset parameters are α, τ, respectively, the recovered 2-D frequencies are projected into L frequency bins to construct the DFT of the slice taken along the line E(α, τ, l), l ∈ [L], i.e.,

ŝ_r(α, τ, m) ≜ Σ_{(a,ω)∈I_m} a e^{j2π( m_0 τ_0/N_0 + m_1 τ_1/N_1 )}, m ∈ [L],

where I_m, m ∈ [L], represents the subset of the recovered frequencies, i.e., I_m ≜ {(a, ω) : ω satisfies (10)}, m ∈ [L]. Next, the L-point inverse DFT (IDFT), multiplied by L, is applied to ŝ_r(α, τ, m), m ∈ [L], from which the slice, s_r(α, τ, l), l ∈ [L], is constructed based on the previously recovered sinusoids. Subsequently, the constructed slice is subtracted from the slice of the current iteration. The pseudocode of the MARS-SFT algorithm can be found in Appendix B.

C. Convergence of MARS-SFT

In this section we investigate the convergence of MARS-SFT. First, let us look at a special case, where N_0, N_1 are co-prime.

Theorem 1 (One-projection theorem of MARS-SFT): Consider the signal model of (1), where N_0, N_1 are co-prime and K ≤ N.
The exact reconstruction of S via MARS-SFT takes only one iteration.

Proof: Please see Appendix A.2.

In the Fourier projection-slice theorem, a band-limited signal of size N_0 × N_1 can be exactly reconstructed from a single projection in the time domain. In the frequency domain, exact reconstruction is possible based on a single slice, provided that the slope parameters, α_0, α_1, of the line along which the slice is evaluated are co-prime, and the equality α_0 m_0 + α_1 m_1 = α_0 m_0' + α_1 m_1' holds only for m_0 = m_0', m_1 = m_1' when m_0, m_0' ∈ [N_0], m_1, m_1' ∈ [N_1]; this is referred to as the one-projection theorem [20]. Theorem 1 is the one-projection theorem of MARS-SFT and provides the conditions for exact recovery of a signal with arbitrary sparsity level using only one projection in the frequency domain.

Remark 2. The classic one-projection theorem and the one-projection theorem of MARS-SFT establish an unambiguous one-to-one mapping from a 2-D sequence to a 1-D sequence. Specifically, the former establishes the mapping of the 2-D time-domain samples to the projected 1-D time-domain samples of length N^2; each entry of the DFT of the projection can be represented by a weighted summation of the N^2 samples. Hence, exact recovery of the time-domain samples requires inverting a linear equation system containing at least N^2 equations. On the other hand, the latter establishes the one-to-one mapping from the coefficients of the N_0 × N_1-point DFT of the 2-D data to the coefficients of the N-point DFT of a slice of the 2-D data; such a slice can be viewed as an ordering of the 2-D data into a 1-D sequence. The exact recovery of the N_0 × N_1-point DFT of the data is achieved by the low-complexity OFDM-trick under the framework of MARS-SFT.
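The co-prime setting of Theorem 1, together with the three-offset decoding of (14), can be sketched as follows (an illustrative example with sizes, slope and frequencies chosen here, not taken from the paper): for co-prime N_0, N_1 the projection mapping is one-to-one, so every occupied bin is 1-sparse and all sinusoids are recovered from a single slope.

```python
import numpy as np

# Sketch of one-projection recovery for co-prime sizes via the three-offset
# decoding: slices of equal slope and offsets [0,0], [1,0], [0,1].
N0, N1 = 5, 4
L = N0 * N1                     # LCM of co-prime sizes equals N = N0*N1
alpha = (1, 1)

true = {(3, 1): 1.0 + 0.5j, (2, 3): -0.7j, (4, 2): 0.4 - 0.2j}
n0g, n1g = np.arange(N0)[:, None], np.arange(N1)[None, :]
x = sum(a * np.exp(2j * np.pi * (n0g * m0 / N0 + n1g * m1 / N1))
        for (m0, m1), a in true.items())

def slice_dft(tau0, tau1):
    l = np.arange(L)
    s = x[(alpha[0] * l + tau0) % N0, (alpha[1] * l + tau1) % N1]
    return np.fft.fft(s) / L    # normalized L-point DFT of the slice

s00, s10, s01 = slice_dft(0, 0), slice_dft(1, 0), slice_dft(0, 1)

recovered = {}
for m in range(L):
    if abs(s00[m]) > 1e-9:      # occupied bin; 1-sparse in the co-prime case
        m0 = int(round(np.angle(s10[m] / s00[m]) * N0 / (2 * np.pi))) % N0
        m1 = int(round(np.angle(s01[m] / s00[m]) * N1 / (2 * np.pi))) % N1
        recovered[(m0, m1)] = s00[m]   # tau = [0,0]: amplitude read directly
print(recovered)                # matches the sinusoids in `true`
```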
We should note that if N_0, N_1 are co-prime, the N_0 × N_1-point DFT can also be implemented via an N-point DFT based on the Good-Thomas mapping [23], where the unambiguous mapping is achieved via Chinese Remainder Theorem-based indexing.

Next, the following theorem establishes the convergence of MARS-SFT in the general case. Compared to the special case, i.e., Theorem 1, the general case assumes K/N ≪ 1 and applies to arbitrary N_0, N_1.

Theorem 2. (Convergence of MARS-SFT): Consider the application of MARS-SFT on the signal model of (1). Then the average number of iterations required to recover S, i.e., T, can be found by evaluating the inequality

Σ_{i∈[T]} M_i ≥ K, (5)

where M_i = Q_i K_i is the number of recovered frequencies in the i-th iteration; K_i = K Π_{k∈[i−1]} (1 − Q_k), with K_1 = K, is the number of remaining frequencies that have not been recovered by the i-th iteration; and Q_i = (1 − K_i/N)^{N/L−1} is the probability that a remaining significant frequency is projected into a 1-sparse bin, and is thus recovered in the i-th iteration.

Proof. Please see Appendix A.3.

Fig. 3 shows the relationship between T and K/L. When K/L is small, e.g., K/L = 3, T is small; this results in a low big-Oh overhead [5] of the algorithm. However, T grows super-linearly with K/L; the growth rate increases as K/N increases. In a non-sparse scenario, i.e., as K/N approaches 1, T is too large for MARS-SFT to be applicable. With the exception of the scenario in which N_0, N_1 are co-prime, MARS-SFT can fail in a non-sparse scenario, in which none of the projections creates a 1-sparse bin.

Although in order to prove Theorem 2 we assume that the frequency distribution in the signal model of (1) is uniform, i.e., the average case [5], as we will see in Section V-D,
numerical results show no significant difference in the convergence of MARS-SFT when the frequencies are clustered. This is because the multidimensional clustered frequencies are uniformly projected to one dimension, due to the randomly generated line slopes of MARS-SFT.

Fig. 3. Number of iterations of MARS-SFT versus K/L.

D. Complexity analysis of MARS-SFT

MARS-SFT executes T iterations; in the 2-D case, each iteration uses 3L samples, since three L-length slices, with L = lcm(N_0, N_1), are extracted in order to decode the two frequency components of a 2-D sinusoid (see (4)). Hence, the sample complexity is O(3TL) = O(TL). The core processing is the L-point 1-D DFT, which can be implemented via an FFT with computational complexity O(L log L). The L-point IDFT in the construction-subtraction procedure can also be implemented via an FFT. In addition to the FFT, each iteration needs to evaluate up to L frequencies. Hence, the computational complexity of MARS-SFT is O(T(L log L + L)) = O(T L log L). If we let T equal T_max, a sufficiently large constant that allows convergence for a given signal size and a range of K, then the sample and computational complexity become O(L) and O(L log L), respectively. For K = O(L), MARS-SFT achieves the lowest sample and computational complexity, i.e., O(K) and O(K log K), respectively, of all known SFT algorithms [5], [7]. In general, in the D-dimensional case, according to the multidimensional extension of [8], the sample and computational complexity of MARS-SFT are O(DK) and O(DK log(DK)), respectively, when K = O(L).

IV. A ROBUST EXTENSION OF MARS-SFT

MARS-SFT is developed for the signal model of (1), where the data is exactly sparse in the frequency domain and the frequencies are assumed to lie on the DFT grid.
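The exactly sparse pipeline just described — three slice DFTs per iteration, decoding of 1-sparse bins via the OFDM-trick, and construction-subtraction of already-recovered sinusoids — can be sketched end-to-end as follows. The toy signal, the co-prime sizes, and the fixed slope are all hypothetical choices for illustration; a faithful implementation would draw slopes at random from the set A of Lemma 1.

```python
import numpy as np

rng = np.random.default_rng(7)
N0, N1 = 8, 9                        # co-prime toy sizes; L = 72 = N
L = int(np.lcm(N0, N1))
N = N0 * N1
K = 5                                # sparsity of the toy signal

# Ground truth: K distinct on-grid 2-D sinusoids with unit-modulus amplitudes.
cells = rng.choice(N, size=K, replace=False)
truth = {(int(c) // N1, int(c) % N1): np.exp(2j * np.pi * rng.random())
         for c in cells}
n0, n1 = np.meshgrid(np.arange(N0), np.arange(N1), indexing="ij")
x = sum(a * np.exp(2j * np.pi * (m0 * n0 / N0 + m1 * n1 / N1))
        for (m0, m1), a in truth.items())

def slice_dft(alpha, tau, recovered):
    """DFT of a slice of x, with the contribution of already-recovered
    sinusoids subtracted directly in the frequency domain."""
    l = np.arange(L)
    s = x[(alpha[0] * l + tau[0]) % N0, (alpha[1] * l + tau[1]) % N1]
    S = np.fft.fft(s) / L
    for (m0, m1), a in recovered.items():
        b = (m0 * alpha[0] * (L // N0) + m1 * alpha[1] * (L // N1)) % L
        S[b] -= a * np.exp(2j * np.pi * (m0 * tau[0] / N0 + m1 * tau[1] / N1))
    return S

recovered = {}
while len(recovered) < K:            # one iteration suffices here (Theorem 1)
    alpha = (1, 1)                   # stand-in for a random slope drawn from A
    tau = (int(rng.integers(N0)), int(rng.integers(N1)))
    S0 = slice_dft(alpha, tau, recovered)
    S1 = slice_dft(alpha, ((tau[0] + 1) % N0, tau[1]), recovered)
    S2 = slice_dft(alpha, (tau[0], (tau[1] + 1) % N1), recovered)
    for m in np.flatnonzero(np.abs(S0) > 1e-8):   # 1-sparse bins
        m0 = round(N0 / (2 * np.pi) * np.angle(S1[m] / S0[m])) % N0
        m1 = round(N1 / (2 * np.pi) * np.angle(S2[m] / S0[m])) % N1
        a = S0[m] * np.exp(-2j * np.pi * (m0 * tau[0] / N0 + m1 * tau[1] / N1))
        recovered[(m0, m1)] = a
```

Each iteration touches 3L samples and runs three L-point FFTs, matching the O(TL) sample and O(T L log L) computational counts above.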
However, in real-world applications, the data usually contains noise and thus is only approximately sparse, i.e., dominated by a few significant frequencies. Also, the significant frequencies are typically off-grid. In what follows, we propose a robust extension of MARS-SFT for signals that follow the signal model of (2). The extension, robust MARS-SFT, works within the framework of MARS-SFT and employs a windowing technique to reduce the frequency leakage due to the off-grid frequencies, and a voting-based frequency localization to reduce the frequency decoding errors due to noise. The pseudocode of the robust MARS-SFT algorithm can be found in Appendix B.

A. Windowing

To address the issue of off-grid frequencies, we apply a window w(n), n ∈ X, to the signal of (2). The peak-to-side-lobe ratio (PSR) of the window, ρ_w, is designed such that the side-lobes of the strongest frequency are below the noise level; hence, the leakage of the significant frequencies can be neglected and the sparsity of the signal in the frequency domain is preserved to some extent. Lemma 2 reflects the relationship between ρ_w and the maximum SNR of the signal.

Lemma 2. (Window design for the robust MARS-SFT): Consider ˆr(m), which is the N_0 × N_1-point DFT of the windowed signal of (2). Let W ∈ R^{N_0 × N_1} be the matrix generated by the window function w(n), n ∈ X. In order to achieve sufficient suppression of the frequency leakage, the PSR of the window, ρ_w, should satisfy

ρ_w > 2‖W‖_1 SNR_max / (π‖W‖_2), (6)

where SNR_max ≜ a²_max/σ²_n, and a_max, σ_n are defined in (2).

Proof. Please see the proof in Appendix A.4.

Note that while the window is designed for the entire data cube, the windowing is applied only at the sampled locations, which does not increase the sample complexity of the robust MARS-SFT.

B. Voting-based frequency decoding

When the signal is approximately sparse, the frequencies decoded by (4) are not integers.
Since we aim to recover the gridded frequencies, i.e., S of (1), the recovered frequency indices are rounded to the nearest integers. When the SNR is low, the frequency decoding could produce false frequencies; those false frequencies enter future iterations and generate more false frequencies. To suppress the effect of false frequencies, motivated by the classical m-out-of-n radar signal detector [24], the robust MARS-SFT adopts an n_d-out-of-n_s voting procedure in each iteration. Specifically, within each iteration, n_s sub-iterations are applied; each sub-iteration adopts randomly generated line slope and offset parameters and recovers a subset of frequencies, S_i, i ∈ [n_s]. Within those frequency sets, a given frequency could be recovered by n out of the n_s sub-iterations. For a true significant frequency, n is typically larger than for a false frequency; thus, only those frequencies with n ≥ n_d are retained as the recovered frequencies of the current iteration. When (n_s, n_d) is properly designed, the number of false frequencies can be reduced significantly.

C. Lower bound of the probability of correct localization and convergence of robust MARS-SFT

The probability of decoding error is related to the SNR, the signal sparsity, and the parameters (n_s, n_d) of the robust MARS-SFT. In the following, we provide a lower bound for the probability of correct localization of the significant
frequencies for each iteration, from which one can study the convergence of robust MARS-SFT, i.e., the number of iterations needed in order to recover all the significant frequencies of sufficient SNR. From Section II, a 2-D continuous-valued sinusoid (a, ω) ∈ S̄ of (2) is associated with a cluster of 2-D on-grid sinusoids S̄′ ⊆ S of (1). Let us assume that the sinusoid (a_d, 2π[m_0/N_0, m_1/N_1]^T) ∈ S̄′, with [m_0, m_1]^T ∈ X, has the largest absolute amplitude among all members of S̄′. In addition, let us assume that the SNR of (a, ω) is sufficiently high. Then the probability of correctly localizing (a_d, 2π[m_0/N_0, m_1/N_1]^T) in each iteration is lower bounded by

P_d ≥ Σ_{n=n_d}^{n_s} (n_s choose n) (P_1 P_w)^n (1 − P_1 P_w)^{n_s − n}, (7)

where P_1 ≜ (1 − |S̄′|/N)^{N/L−1}, with L = lcm(N_0, N_1), is the probability of a frequency in S̄′ being projected to a 1-sparse bin, and S̄′, with S̄′ ⊆ S̄, represents the remaining frequencies to be recovered in subsequent iterations; P_w ≜ (1 − P_u)(1 − P_v) is a lower bound on the probability of correct localization for a 2-D frequency that is projected into a 1-sparse bin in one sub-iteration; and P_u, P_v are upper bounds on the probability of localization error for the two frequency components, m_0 and m_1, respectively, defined as P_u ≜ 1 − (1 − σ_p(1 − f_an(δ_u)))², P_v ≜ 1 − (1 − σ_p(1 − f_an(δ_v)))², where δ_u ≜ aπ‖W‖_1/(2N N_0), δ_v ≜ aπ‖W‖_1/(2N N_1), with W ∈ R^{N_0 × N_1} denoting the window applied to the data; σ_p, with 2 ≤ σ_p ≤ 2π, is a parameter determined by the phases of the error vectors contained in the 1-sparse bin; and f_an(x) is the cumulative distribution function of the Rayleigh distribution, defined as f_an(x) ≜ 1 − e^{−x²/(2σ²_{a_n})}, x > 0, where σ²_{a_n} ≜ σ²_n ‖W‖²_2/(2NL). The proof of (7) can be found in Appendix A.5.
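The bound in (7) is a binomial tail and is straightforward to evaluate numerically. The following sketch plugs in hypothetical values for P_1 and P_w (they are not values from the paper) to show how the voting parameters (n_s, n_d) shape the bound:

```python
from math import comb

def pd_lower_bound(ns, nd, p):
    """Complementary cumulative binomial probability of (7): at least nd
    of the ns sub-iterations succeed, each with success probability p."""
    return sum(comb(ns, n) * p**n * (1 - p)**(ns - n) for n in range(nd, ns + 1))

# Hypothetical per-sub-iteration success probability p = P1 * Pw.
p = 0.9 * 0.95
print(round(pd_lower_bound(ns=5, nd=3, p=p), 3))   # -> 0.976
```

Increasing n_d suppresses false frequencies but lowers this bound, which is the trade-off behind choosing (n_s, n_d).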
Essentially, (7) represents the complementary cumulative binomial probability resulting from the n_d-out-of-n_s voting procedure, where the success probability of each experiment, i.e., of localizing (a_d, 2π[m_0/N_0, m_1/N_1]^T) in one sub-iteration of robust MARS-SFT, is P_1 P_w. When K = |S̄| is known, (7) can be applied to estimate the largest number of iterations (the upper bound) that robust MARS-SFT needs in order to recover all the frequencies in S̄, since the least number of frequencies recovered in each iteration can be estimated by |S̄| P_d.

V. NUMERICAL RESULTS

In this section, we provide numerical results to verify the theoretical findings related to the proposed MARS-SFT and its robust extension. Unless stated otherwise, the size of the test data is set to N_0 = N_1 = 256. We simulate cases in which the frequencies are uniformly distributed and also cases in which the frequencies are clustered; for the clustered cases, we consider clusters of 9 and 25 frequencies. The experimental results are averaged over Monte Carlo simulations.

A. Comparison between MARS-SFT and SO-SFT

We compare the performance of SO-SFT and the proposed MARS-SFT for the 2-D case. For MARS-SFT, the line length is set to L = N_0, and each iteration uses 3N_0 samples. We limit the number of iterations to T_max = ⌊N/(3L)⌋ = 85, which corresponds to roughly 100% of the samples of the input data. Fig. 4(a) shows the probability of exact recovery versus the level of sparsity for the two methods. When the signal is very sparse, i.e., K < N_0/2, SO-SFT has a high probability of exact recovery, while for a moderate level of sparsity, i.e., K > N_0, SO-SFT fails with high probability. Moreover, SO-SFT only works in the scenario in which the frequencies are distributed uniformly, and it fails when there exists even a single frequency cluster. On the contrary, MARS-SFT applies to signals with a wide range of sparsity levels. For instance, its success rate is almost 97% when K = 5N_0. In all cases, the success rate drops to 0 when K = 6N_0, since then the exact recovery needs roughly 100 iterations, which exceeds T_max.

Fig. 4(b) shows the ratio of the samples used by MARS-SFT and SO-SFT for exact recovery over the total number of data samples, N. The figure shows that the sparser the signal, the fewer samples are required by MARS-SFT; for example, when K = N_0, only 5.9% of the signal samples are required. SO-SFT needs only 1.6% of the signal samples in very sparse scenarios, while it fails in less sparse, or non-uniformly distributed, frequency cases. The performance of MARS-SFT is similar for the uniformly-distributed and the clustered frequency cases at the same sparsity level; this is due to the randomized projections, which can effectively isolate the 2-D frequencies into 1-sparse bins even when the signal is less sparse (K is large) and the frequencies are clustered. Note that the super-linearity in the growth of the ratio of samples with K arises because of the super-linear growth of the number of iterations of MARS-SFT with K.

Fig. 4. Comparison between MARS-SFT and SO-SFT. (a) Probability of exact recovery versus sparsity level, K. (b) Ratio of samples needed for exact recovery versus K.

B. Comparison between MARS-SFT and the SFT of [2], [3]

We compare MARS-SFT and its robust extension with the SFT of [2], [3] in 2-D cases.
The main difference between the SFT of [2] and the SFT of [3] is that the former takes slices only along the borders and the diagonals of the input data matrix, while the latter also takes slices along many lines with predefined slopes; this increases the degrees of freedom in projecting 2-D frequencies onto 1-D lines.
Fig. 5(a) shows the frequency localization performance of the SFT of [2], [3] versus K in noiseless cases. Compared to MARS-SFT, the SFT of [2], [3] only succeeds in very sparse scenarios. For instance, when K = 5, the best success rate that the SFT of [3] can achieve is approximately 67%, while the success rate of MARS-SFT is 100%. One way to increase the success rate of the SFT of [3] is to use a larger T, at the expense of increased complexity. However, the success rate saturates when T grows beyond a certain value. For instance, the success rates corresponding to T = 2 and T = 3 are similar.

Aside from its sensitivity to the sparsity level, the SFT of [2], [3] is more robust to noise than the proposed robust MARS-SFT. For instance, in Fig. 5(b), one can see that when K < 3, the success rate of the SFT of [3] for an SNR equal to 5 dB is similar to that of robust MARS-SFT applied at an SNR of 9 dB. However, when the SNR is greater than 10 dB, the success rate of robust MARS-SFT approaches 100%. The computation of the SFT of [2], [3] is significantly slower than that of robust MARS-SFT, as the computational complexity of the former is O((N + K³) log N) [3] in the 2-D case; this is even greater than that of the FFT.

Fig. 5. Comparison between MARS-SFT and the SFT of [2], [3]. The robust MARS-SFT is adopted in the noisy cases. (a) Localization success rate versus K in noiseless cases. (b) Localization success rate versus K in noisy cases. Frequencies are on the grid.

C. Line slope of MARS-SFT

From Lemma 1, choosing the line slope parameters α randomly from the set A for each iteration of MARS-SFT, as opposed to choosing them at random from the set X, results in a reduced number of iterations for exact recovery. This is because, compared to the latter case, in the former case more 1-sparse bins are likely to be created in each iteration, due to the uniformity of the projections. In Fig. 6(a), we compare the number of iterations of MARS-SFT when α is chosen from A and from X. The figure confirms the expectation from Lemma 1.

The high probability of exact recovery of MARS-SFT in less sparse cases is due to the abundance of degrees of freedom in the frequency projection, which requires a sufficiently large A. When N_0 = N_1 = 256, the corresponding |A| is easy to verify. Fig. 6(b) shows the probability of exact recovery versus sparsity when we use subsets of A of different support sizes. The slope parameter set, A, in each experiment is created by randomly picking a subset of A with a specific size of support. The figure shows that the less sparse the signal (the larger the K), the larger the size of A that is needed to achieve a high probability of exact recovery. A should be large enough so that in each iteration of MARS-SFT, a distinct slope can be obtained from A with high probability. Compared to the uniformly distributed frequency cases, the clustered frequency cases require a larger A, since the latter require larger degrees of freedom than the former in order to isolate the clustered frequencies by randomly projecting those frequencies to distinct 1-sparse bins of the DFT along lines.

Fig. 6. Effect of the line slope on MARS-SFT. (a) Number of iterations for exact recovery versus sparsity. (b) Effect of the size of the slope parameter set, |A|, on the exact recovery probability.

D. Convergence of MARS-SFT

We verify the average number of iterations that MARS-SFT needs in order to exactly recover the signal (see Theorem 2). The relationship between the number of iterations, T, and the sparsity level, K, for different data dimensions with the same number of samples (i.e., different N_0, N_1 but the same N) is shown in Fig. 7(a); the figure shows the theoretical values based on Theorem 2 and also the results of simulations.
As expected, in all cases T increases as K increases; the rate of increase in the case of N_0 = N_1 = 256 is greater than in the cases of N_0 = 512, N_1 = 128 and N_0 = 1024, N_1 = 64. Also, the former case requires more iterations than the latter two cases. This is because for the three cases the line length, L, equals 256, 512 and 1024, respectively. A larger L leads to a higher probability of creating more 1-sparse bins in each iteration of MARS-SFT, which results in faster convergence of the algorithm. The clustered frequencies do not require a larger T compared to the uniformly distributed frequencies, which suggests that MARS-SFT is efficient in handling non-uniformly distributed frequencies. The number of samples used by MARS-SFT depends both on T and on L. Fig. 7(b) shows that when the signal is very sparse, i.e., K < 64, the equal-length case (N_0 = N_1) uses the least number of samples, while for less sparse cases, the number of samples required by MARS-SFT is smaller in the cases of N_0 = 512, N_1 = 128 and N_0 = 1024, N_1 = 64 than in the case of N_0 = N_1 = 256.

According to Theorem 1, when N_0, N_1 are co-prime, the set S of the signal model (1) can be reconstructed exactly based on only one iteration of MARS-SFT. Fig. 8 shows that
More informationRobust Sparse Recovery via Non-Convex Optimization
Robust Sparse Recovery via Non-Convex Optimization Laming Chen and Yuantao Gu Department of Electronic Engineering, Tsinghua University Homepage: http://gu.ee.tsinghua.edu.cn/ Email: gyt@tsinghua.edu.cn
More informationA Piggybacking Design Framework for Read-and Download-efficient Distributed Storage Codes
A Piggybacing Design Framewor for Read-and Download-efficient Distributed Storage Codes K V Rashmi, Nihar B Shah, Kannan Ramchandran, Fellow, IEEE Department of Electrical Engineering and Computer Sciences
More informationA proof of the existence of good nested lattices
A proof of the existence of good nested lattices Dinesh Krithivasan and S. Sandeep Pradhan July 24, 2007 1 Introduction We show the existence of a sequence of nested lattices (Λ (n) 1, Λ(n) ) with Λ (n)
More informationA fast randomized algorithm for approximating an SVD of a matrix
A fast randomized algorithm for approximating an SVD of a matrix Joint work with Franco Woolfe, Edo Liberty, and Vladimir Rokhlin Mark Tygert Program in Applied Mathematics Yale University Place July 17,
More informationReconstruction from Anisotropic Random Measurements
Reconstruction from Anisotropic Random Measurements Mark Rudelson and Shuheng Zhou The University of Michigan, Ann Arbor Coding, Complexity, and Sparsity Workshop, 013 Ann Arbor, Michigan August 7, 013
More informationCombining geometry and combinatorics
Combining geometry and combinatorics A unified approach to sparse signal recovery Anna C. Gilbert University of Michigan joint work with R. Berinde (MIT), P. Indyk (MIT), H. Karloff (AT&T), M. Strauss
More informationLarge Zero Autocorrelation Zone of Golay Sequences and 4 q -QAM Golay Complementary Sequences
Large Zero Autocorrelation Zone of Golay Sequences and 4 q -QAM Golay Complementary Sequences Guang Gong 1 Fei Huo 1 and Yang Yang 2,3 1 Department of Electrical and Computer Engineering University of
More informationDiscrete Fourier Transform
Discrete Fourier Transform Virtually all practical signals have finite length (e.g., sensor data, audio records, digital images, stock values, etc). Rather than considering such signals to be zero-padded
More informationCompressed Statistical Testing and Application to Radar
Compressed Statistical Testing and Application to Radar Hsieh-Chung Chen, H. T. Kung, and Michael C. Wicks Harvard University, Cambridge, MA, USA University of Dayton, Dayton, OH, USA Abstract We present
More informationSTAT 200C: High-dimensional Statistics
STAT 200C: High-dimensional Statistics Arash A. Amini May 30, 2018 1 / 57 Table of Contents 1 Sparse linear models Basis Pursuit and restricted null space property Sufficient conditions for RNS 2 / 57
More informationSparse Fourier Transform (lecture 1)
1 / 73 Sparse Fourier Transform (lecture 1) Michael Kapralov 1 1 IBM Watson EPFL St. Petersburg CS Club November 2015 2 / 73 Given x C n, compute the Discrete Fourier Transform (DFT) of x: x i = 1 x n
More informationA Polynomial-Time Algorithm for Pliable Index Coding
1 A Polynomial-Time Algorithm for Pliable Index Coding Linqi Song and Christina Fragouli arxiv:1610.06845v [cs.it] 9 Aug 017 Abstract In pliable index coding, we consider a server with m messages and n
More informationROBUST BLIND SPIKES DECONVOLUTION. Yuejie Chi. Department of ECE and Department of BMI The Ohio State University, Columbus, Ohio 43210
ROBUST BLIND SPIKES DECONVOLUTION Yuejie Chi Department of ECE and Department of BMI The Ohio State University, Columbus, Ohio 4 ABSTRACT Blind spikes deconvolution, or blind super-resolution, deals with
More informationA new method on deterministic construction of the measurement matrix in compressed sensing
A new method on deterministic construction of the measurement matrix in compressed sensing Qun Mo 1 arxiv:1503.01250v1 [cs.it] 4 Mar 2015 Abstract Construction on the measurement matrix A is a central
More informationIntroduction to Sparsity in Signal Processing
1 Introduction to Sparsity in Signal Processing Ivan Selesnick Polytechnic Institute of New York University Brooklyn, New York selesi@poly.edu 212 2 Under-determined linear equations Consider a system
More informationGuess & Check Codes for Deletions, Insertions, and Synchronization
Guess & Check Codes for Deletions, Insertions, and Synchronization Serge Kas Hanna, Salim El Rouayheb ECE Department, Rutgers University sergekhanna@rutgersedu, salimelrouayheb@rutgersedu arxiv:759569v3
More informationLattice Cryptography
CSE 06A: Lattice Algorithms and Applications Winter 01 Instructor: Daniele Micciancio Lattice Cryptography UCSD CSE Many problems on point lattices are computationally hard. One of the most important hard
More information1 Regression with High Dimensional Data
6.883 Learning with Combinatorial Structure ote for Lecture 11 Instructor: Prof. Stefanie Jegelka Scribe: Xuhong Zhang 1 Regression with High Dimensional Data Consider the following regression problem:
More informationMAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing
MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing Afonso S. Bandeira April 9, 2015 1 The Johnson-Lindenstrauss Lemma Suppose one has n points, X = {x 1,..., x n }, in R d with d very
More informationto be more efficient on enormous scale, in a stream, or in distributed settings.
16 Matrix Sketching The singular value decomposition (SVD) can be interpreted as finding the most dominant directions in an (n d) matrix A (or n points in R d ). Typically n > d. It is typically easy to
More informationANGLE OF ARRIVAL DETECTION USING COMPRESSIVE SENSING
18th European Signal Processing Conference (EUSIPCO-2010) Aalborg, Denmark, August 23-27, 2010 ANGLE OF ARRIVAL DETECTION USING COMPRESSIVE SENSING T. Justin Shaw and George C. Valley The Aerospace Corporation,
More informationA Nonuniform Quantization Scheme for High Speed SAR ADC Architecture
A Nonuniform Quantization Scheme for High Speed SAR ADC Architecture Youngchun Kim Electrical and Computer Engineering The University of Texas Wenjuan Guo Intel Corporation Ahmed H Tewfik Electrical and
More informationTransforms and Orthogonal Bases
Orthogonal Bases Transforms and Orthogonal Bases We now turn back to linear algebra to understand transforms, which map signals between different domains Recall that signals can be interpreted as vectors
More informationChirp Transform for FFT
Chirp Transform for FFT Since the FFT is an implementation of the DFT, it provides a frequency resolution of 2π/N, where N is the length of the input sequence. If this resolution is not sufficient in a
More information/16/$ IEEE 1728
Extension of the Semi-Algebraic Framework for Approximate CP Decompositions via Simultaneous Matrix Diagonalization to the Efficient Calculation of Coupled CP Decompositions Kristina Naskovska and Martin
More informationRecovery Based on Kolmogorov Complexity in Underdetermined Systems of Linear Equations
Recovery Based on Kolmogorov Complexity in Underdetermined Systems of Linear Equations David Donoho Department of Statistics Stanford University Email: donoho@stanfordedu Hossein Kakavand, James Mammen
More informationMultirate Digital Signal Processing
Multirate Digital Signal Processing Basic Sampling Rate Alteration Devices Up-sampler - Used to increase the sampling rate by an integer factor Down-sampler - Used to decrease the sampling rate by an integer
More informationof Orthogonal Matching Pursuit
A Sharp Restricted Isometry Constant Bound of Orthogonal Matching Pursuit Qun Mo arxiv:50.0708v [cs.it] 8 Jan 205 Abstract We shall show that if the restricted isometry constant (RIC) δ s+ (A) of the measurement
More information5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER /$ IEEE
5742 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 55, NO. 12, DECEMBER 2009 Uncertainty Relations for Shift-Invariant Analog Signals Yonina C. Eldar, Senior Member, IEEE Abstract The past several years
More informationInteger Least Squares: Sphere Decoding and the LLL Algorithm
Integer Least Squares: Sphere Decoding and the LLL Algorithm Sanzheng Qiao Department of Computing and Software McMaster University 28 Main St. West Hamilton Ontario L8S 4L7 Canada. ABSTRACT This paper
More informationMethods for sparse analysis of high-dimensional data, II
Methods for sparse analysis of high-dimensional data, II Rachel Ward May 26, 2011 High dimensional data with low-dimensional structure 300 by 300 pixel images = 90, 000 dimensions 2 / 55 High dimensional
More informationCommunication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi
Communication Engineering Prof. Surendra Prasad Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 41 Pulse Code Modulation (PCM) So, if you remember we have been talking
More informationDesign Criteria for the Quadratically Interpolated FFT Method (I): Bias due to Interpolation
CENTER FOR COMPUTER RESEARCH IN MUSIC AND ACOUSTICS DEPARTMENT OF MUSIC, STANFORD UNIVERSITY REPORT NO. STAN-M-4 Design Criteria for the Quadratically Interpolated FFT Method (I): Bias due to Interpolation
More informationTowards control over fading channels
Towards control over fading channels Paolo Minero, Massimo Franceschetti Advanced Network Science University of California San Diego, CA, USA mail: {minero,massimo}@ucsd.edu Invited Paper) Subhrakanti
More informationIntroduction to Sparsity in Signal Processing
1 Introduction to Sparsity in Signal Processing Ivan Selesnick Polytechnic Institute of New York University Brooklyn, New York selesi@poly.edu 212 2 Under-determined linear equations Consider a system
More informationOptimization-based sparse recovery: Compressed sensing vs. super-resolution
Optimization-based sparse recovery: Compressed sensing vs. super-resolution Carlos Fernandez-Granda, Google Computational Photography and Intelligent Cameras, IPAM 2/5/2014 This work was supported by a
More informationLecture 12: Randomized Least-squares Approximation in Practice, Cont. 12 Randomized Least-squares Approximation in Practice, Cont.
Stat60/CS94: Randomized Algorithms for Matrices and Data Lecture 1-10/14/013 Lecture 1: Randomized Least-squares Approximation in Practice, Cont. Lecturer: Michael Mahoney Scribe: Michael Mahoney Warning:
More informationLecture 6: Modeling of MIMO Channels Theoretical Foundations of Wireless Communications 1
Fading : Theoretical Foundations of Wireless Communications 1 Thursday, May 3, 2018 9:30-12:00, Conference Room SIP 1 Textbook: D. Tse and P. Viswanath, Fundamentals of Wireless Communication 1 / 23 Overview
More informationRecoverabilty Conditions for Rankings Under Partial Information
Recoverabilty Conditions for Rankings Under Partial Information Srikanth Jagabathula Devavrat Shah Abstract We consider the problem of exact recovery of a function, defined on the space of permutations
More informationSFT-BASED DOA DETECTION ON CO-PRIME ARRAY
SFT-BASED DOA DETECTION ON CO-PRIME ARRAY BY GUANJIE HUANG A thesis submitted to the Graduate School New Brunswick Rutgers, The State University of New Jersey in partial fulfillment of the requirements
More informationLeast squares regularized or constrained by L0: relationship between their global minimizers. Mila Nikolova
Least squares regularized or constrained by L0: relationship between their global minimizers Mila Nikolova CMLA, CNRS, ENS Cachan, Université Paris-Saclay, France nikolova@cmla.ens-cachan.fr SIAM Minisymposium
More informationDesigning Information Devices and Systems I Spring 2016 Elad Alon, Babak Ayazifar Homework 11
EECS 6A Designing Information Devices and Systems I Spring 206 Elad Alon, Babak Ayazifar Homework This homework is due April 9, 206, at Noon.. Homework process and study group Who else did you work with
More informationApproximately achieving the feedback interference channel capacity with point-to-point codes
Approximately achieving the feedback interference channel capacity with point-to-point codes Joyson Sebastian*, Can Karakus*, Suhas Diggavi* Abstract Superposition codes with rate-splitting have been used
More information