On the theoretical analysis of cross validation in compressive sensing

MITSUBISHI ELECTRIC RESEARCH LABORATORIES

On the theoretical analysis of cross validation in compressive sensing
Zhang, J.; Chen, L.; Boufounos, P.T.; Gu, Y.
TR May 2014

Abstract: Compressive sensing (CS) is a data acquisition technique that measures sparse or compressible signals at a sampling rate lower than their Nyquist rate. Results show that sparse signals can be reconstructed using greedy algorithms, often requiring prior knowledge such as the signal sparsity or the noise level. As a substitute for prior knowledge, cross validation (CV), a statistical method that examines whether a model overfits its data, has been proposed to determine the stopping condition of greedy algorithms. This paper analyses cross validation in a general compressive sensing framework. Furthermore, we provide both theoretical analysis and numerical simulations for a cross-validation modification of orthogonal matching pursuit, referred to as OMP-CV, which has good performance in sparse recovery.

ICASSP 2014

This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved.

Copyright © Mitsubishi Electric Research Laboratories, Inc., Broadway, Cambridge, Massachusetts 02139


ON THE THEORETICAL ANALYSIS OF CROSS VALIDATION IN COMPRESSIVE SENSING

Jinye Zhang, Laming Chen, Petros T. Boufounos, Yuantao Gu

State Key Laboratory on Microwave and Digital Communications, Tsinghua National Laboratory for Information Science and Technology, Department of Electronic Engineering, Tsinghua University, Beijing, China
Mitsubishi Electric Research Laboratories, Cambridge, MA 02139, USA

ABSTRACT

Compressive sensing (CS) is a data acquisition technique that measures sparse or compressible signals at a sampling rate lower than their Nyquist rate. Results show that sparse signals can be reconstructed using greedy algorithms, often requiring prior knowledge such as the signal sparsity or the noise level. As a substitute for prior knowledge, cross validation (CV), a statistical method that examines whether a model overfits its data, has been proposed to determine the stopping condition of greedy algorithms. This paper analyses cross validation in a general compressive sensing framework. Furthermore, we provide both theoretical analysis and numerical simulations for a cross-validation modification of orthogonal matching pursuit, referred to as OMP-CV, which has good performance in sparse recovery.

Index Terms: Compressed sensing, signal reconstruction, cross validation, orthogonal matching pursuit

More details, including proofs for the lemmas and theorems in this paper, can be found in [1]. This work was partially supported by the National Program on Key Basic Research Project (973 Program 2013CB329201) and the National Natural Science Foundation of China (NSFC). Petros T. Boufounos is exclusively supported by Mitsubishi Electric Research Laboratories. The corresponding author of this paper is Yuantao Gu (gyt@tsinghua.edu.cn).

1. INTRODUCTION

Compressive sensing (CS) is a data acquisition technique that measures sparse or compressible signals at a sampling rate close to their intrinsic information rate rather than the Nyquist rate [2, 3]. Broadly speaking, it consists of two main building blocks: encoding an N-dimensional k-sparse signal x by computing its M ≪ N linear projections, and decoding the signal using various sparse recovery methods, such as linear programming [4-6] or greedy algorithms [7-11]. While greedy algorithms are often preferred for their low computational complexity, most of them require prior information such as the signal sparsity or the noise level for accurate reconstruction, without which they may fail to completely reconstruct the signal or may overfit the noise [17]. Since such information is not available in practice, cross validation (CV) [12-16], a statistical method that examines whether a model overfits the data, has been proposed as an alternative solution [17]. In general, the measurements are separated into reconstruction measurements and CV measurements; the former are used to reconstruct the signal via a greedy algorithm, while the latter are used to compute the stopping criterion. It has also been shown that CV can be used to estimate the recovery error. Specifically, using the Johnson-Lindenstrauss lemma, one can derive theoretical bounds on the estimation error, quantifying how the number of CV measurements m_cv influences the estimation quality [18].

In this paper we provide an analysis of CV performance in general CS problems. Furthermore, we analyse a variation of the orthogonal matching pursuit (OMP) algorithm, called OMP-CV, which exploits CV to determine the recovered signal sparsity. To analyse CV in general CS problems, we need to address the following two problems.
First, given a recovered signal, we are interested in determining how close it is to the original input signal. Second, given two candidate recovered signals, we are interested in determining which of the two has the lower recovery error. Both questions can be answered using CV. Given a signal, the recovery error ε_x can be estimated by its CV residual ϵ_cv; we want to determine with what accuracy and with what probability the CV residual provides bounds on the recovery error. Furthermore, given two signals, the comparison of their recovery errors can be estimated from the comparison of their respective CV residuals; we are interested in the probability that the comparison of the recovery errors and the comparison of the CV residuals are consistent with each other. In other words, these problems can be formulated as:

1. (Recovery error estimation) Given a recovered signal, with what accuracy and what probability could its CV residual provide bounds on its recovery error?
2. (Recovery error comparison) Given two recovered signals, with what probability could the comparison of their CV residuals correctly reflect their recovery errors?

These problems are referred to as general CV problems in the remainder of the article. To analyse both problems, we first calculate the probability distribution of CV residuals. By transforming this distribution into inequalities that hold with a certain probability, we directly answer the questions above. Our results for both problems are given in Section 3.

We also analyse the OMP-CV algorithm, described in Table 1. In this analysis we refer to the algorithm output, which is the recovered signal with the smallest CV residual, as the OMP-CV output. The recovered signal with the smallest recovery error is referred to as the oracle optimum. Our main result shows that the recovery error of the OMP-CV output is very close to that of the oracle optimum with high probability, given that the oracle optimum recovers all indices in the support set of the signal. Note that OMP-CV includes two almost separate procedures: one is reconstructing the signal using OMP, the other is estimating the recovery error by CV. To achieve the main result, we first analyse the internal structure of two recovered signals acquired in different iterations of OMP. We then study how their CV residuals correspond to their recovery errors using the techniques we developed for the general CV problem. Finally, we generalize the recovery error comparison between two recovered signals, as generated by the OMP iterations, to the comparison of all recovered signals. Thus we can estimate how close the OMP-CV output is to the oracle optimum.
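Throughout, the quantity CV works with is the residual of a candidate reconstruction on held-out measurements. The minimal NumPy sketch below (an illustration only; the splitting convention and all names are chosen here, not taken from the paper) shows the measurement split and the CV residual that the two problems above are built on.

```python
import numpy as np

def split_measurements(Phi, y, m_cv):
    """Split an M x N sampling matrix and its measurements into
    reconstruction rows (A, y_r) and held-out CV rows (A_cv, y_cv)."""
    A, A_cv = Phi[:-m_cv], Phi[-m_cv:]
    y_r, y_cv = y[:-m_cv], y[-m_cv:]
    return A, y_r, A_cv, y_cv

def cv_residual(A_cv, y_cv, x_hat):
    """CV residual of a candidate reconstruction x_hat:
    squared l2 misfit on the held-out measurements."""
    return np.sum((y_cv - A_cv @ x_hat) ** 2)
```

In this sketch any greedy solver can be run on (A, y_r), and cv_residual plays the role of the stopping and model-selection criterion discussed in the analysis that follows.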

The remainder of the paper is organized as follows. Section 2 formulates the problem and describes the OMP-CV algorithm. Section 3 gives the analysis of CV, while Section 4 gives the analysis of OMP-CV. Numerical simulations are given in Section 5. Finally, Section 6 discusses our findings and concludes.

2. BACKGROUND

2.1. Notation and Problem Formulation

We consider an unknown k-sparse signal x ∈ R^N observed using M linear measurements corrupted by additive noise. Let T be its support set, and use |T| = k to denote the size of T. The vector x_T contains the coefficients of x indexed by T. To implement the CV-based modification, we separate the M × N sampling matrix into a reconstruction matrix A ∈ R^{m×N} and a CV matrix A_cv ∈ R^{m_cv×N}. The measurements are also separated accordingly, into reconstruction measurements y ∈ R^m and CV measurements y_cv ∈ R^{m_cv}. Our analysis only considers Gaussian sampling matrices and additive Gaussian noise, i.e.,

y = Ax + n,  n = σ_n a_n,
y_cv = A_cv x + n_cv,  n_cv = σ_n a_{cv,n},

where the elements of A, A_cv, a_n, and a_{cv,n} are i.i.d. normally distributed with mean zero and variance 1/m, so that the sensing matrix has unit column norm. For notational simplicity, we also define:

Definition 1. (Generalized sampling matrix and input signal): Let A_g ≜ [A, a_n] and x_g ≜ [x^T, σ_n]^T; then

y = Ax + n = A_g x_g,   (1)

where A_g is called the generalized sampling matrix and x_g is called the generalized input signal.

In describing the OMP algorithm, we use x̂^p to denote the recovered signal in the p-th iteration and T^p to denote its support set. The difference between the recovered signal x̂^p and the input signal x is denoted by Δx^p, and the corresponding recovery error by ε_x^p. The generalized versions of Δx^p and ε_x^p are Δx_g^p and ε_g^p, respectively, and ϵ_cv^p denotes the CV residual of x̂^p. In other words,

Δx^p ≜ x − x̂^p,  ε_x^p ≜ ‖Δx^p‖_2^2,  Δx_g^p ≜ [(Δx^p)^T, σ_n]^T,  ε_g^p ≜ ‖Δx_g^p‖_2^2,  ϵ_cv^p ≜ ‖y_cv − A_cv x̂^p‖_2^2.

To make the analysis clearer, we emphasize that in this paper the input signal is considered deterministic while the sampling matrix and noise are random. Without loss of clarity and for notational simplicity, the random variables and their realizations are denoted by the same notation.

2.2. OMP-CV: Algorithm Description

The OMP-CV algorithm, described in Table 1, is a noise- and sparsity-robust greedy recovery algorithm that combines OMP and CV [17]. The path it follows can be interpreted intuitively: as the signal estimate improves, the recovery error and CV residual decrease; when the recovered signal starts to overfit the noise, the recovery error increases. The CV residual detects the change by starting to increase simultaneously. That is, CV residuals provide a good estimate for recovery errors and, further, the OMP-CV output is close to the oracle optimum.

Table 1. OMP-CV Algorithm
Input: Reconstruction matrix A, CV matrix A_cv, reconstruction measurements y, CV measurements y_cv, number of iterations d;
Output: The recovered signal x̂.
Initialization: Set p = 1, ϵ_cv^0 = ‖y_cv‖_2^2;
Repeat:
  Compute x̂^p using an OMP iteration;
  Compute ϵ_cv^p = ‖A_cv x̂^p − y_cv‖_2^2;
  Increment p by 1;
Until: p > d
Compute o_cv = argmin_p ϵ_cv^p;
Return: x̂ = x̂^{o_cv}.
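The listing below is a compact NumPy sketch of the procedure in Table 1 (an illustrative reimplementation, not the authors' code; the least-squares update inside each OMP iteration and all function names are choices made here). Each iteration adds the column most correlated with the current residual, re-fits the coefficients on the selected support by least squares, and records the CV residual; the output is the iterate with the smallest CV residual.

```python
import numpy as np

def omp_cv(A, y, A_cv, y_cv, d):
    """OMP-CV sketch: run d OMP iterations on (A, y) and return the
    iterate whose residual on the held-out CV measurements is smallest."""
    N = A.shape[1]
    support, residual = [], y.copy()
    best_x, best_cv = np.zeros(N), np.sum(y_cv ** 2)  # epsilon_cv^0
    for _ in range(d):
        # Greedy selection: column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of the coefficients on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat = np.zeros(N)
        x_hat[support] = coef
        residual = y - A @ x_hat
        # Cross-validation residual on the held-out measurements.
        eps_cv = np.sum((y_cv - A_cv @ x_hat) ** 2)
        if eps_cv < best_cv:
            best_cv, best_x = eps_cv, x_hat
    return best_x, best_cv
```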
There are several advantages of OMP-CV. First, it does not require prior information such as the noise level or sparsity. Instead, only the maximum number of iterations, d, is required as input (note that d cannot be greater than m, since regular OMP will produce zero residual after that and conclude). This can be set two or three times larger than the sparsity k without hurting recovery performance. Second, the algorithm provides an estimate of the recovery error in its residual. Additionally, by properly setting m_cv, the recovery performance of OMP-CV competes with that of OMP combined with accurate information on the noise level. Earlier work has provided experimental support for the recovery performance of OMP-CV [17, 18]. However, to the best of our knowledge, a theoretical analysis of this algorithm does not exist. Our work presents such an analysis in Section 4. In addition, we also provide numerical simulations in Section 5 to support our theoretical results.

3. CROSS VALIDATION IN COMPRESSIVE SENSING

This section describes our results for the general CV problems described in Section 1. We start by calculating the probability distribution of ϵ_cv:

Lemma 1. Let x̂ be a recovered signal and ε_x be its recovery error. Provided that m_cv is sufficiently large (an m_cv exceeding tens, which is common in CS, meets the requirement), then

ϵ_cv = ‖y_cv − A_cv x̂‖_2^2 ∼ N(μ, σ²),   (2)

where μ = (m_cv/m)(ε_x + σ_n²) and σ² = (2 m_cv/m²)(ε_x + σ_n²)².

The condition "provided that m_cv is sufficiently large" is used in one approximation step of the proof, which relies on the Central Limit Theorem (CLT). The actual probability distribution of ϵ_cv converges to (2) as m_cv increases, and the approximation error becomes negligible very quickly. In practical cases the number of CV measurements m_cv is often tens or hundreds, and the approximation is very good. The precision of the approximation is also supported by the simulation results in Section 5.1. The same condition is also required in Lemma 2, Theorem 1, and Theorem 2 for the same reason.
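As a quick empirical check of Lemma 1 (an illustrative experiment written for this text, not the authors' simulation code; the parameter values and the fixed candidate reconstruction are arbitrary choices), one can fix a recovery error, draw many realizations of the Gaussian CV matrix and CV noise, and compare the empirical mean and variance of ϵ_cv with μ = (m_cv/m)(ε_x + σ_n²) and σ² = (2 m_cv/m²)(ε_x + σ_n²)².

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, m_cv, sigma_n = 512, 96, 48, 0.1

x = np.zeros(N); x[:5] = 1.0          # true signal (support chosen arbitrarily)
x_hat = x.copy(); x_hat[:5] = 0.9     # fixed candidate reconstruction
eps_x = np.sum((x - x_hat) ** 2)      # its recovery error

samples = []
for _ in range(20000):
    A_cv = rng.normal(scale=1/np.sqrt(m), size=(m_cv, N))   # entries ~ N(0, 1/m)
    n_cv = sigma_n * rng.normal(scale=1/np.sqrt(m), size=m_cv)
    y_cv = A_cv @ x + n_cv
    samples.append(np.sum((y_cv - A_cv @ x_hat) ** 2))

mu = m_cv / m * (eps_x + sigma_n**2)
var = 2 * m_cv / m**2 * (eps_x + sigma_n**2) ** 2
print(np.mean(samples), mu)    # empirical vs. predicted mean
print(np.var(samples), var)    # empirical vs. predicted variance
```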

One immediate consequence of Lemma 1 is that ϵ_cv can be used to provide an estimate of ε_x in the form of an inequality that holds with a certain probability. In particular:

Theorem 1. (Recovery error estimation): Provided that m_cv is sufficiently large, with probability erf(λ/√2) the following holds:

h(λ, +) ϵ_cv − σ_n² ≤ ε_x ≤ h(λ, −) ϵ_cv − σ_n²,   (3)

where h(λ, ±) is a function of λ defined as

h(λ, ±) ≜ (m/m_cv) · 1/(1 ± λ√(2/m_cv)),   (4)

and erf(u) is the error function of the normal distribution,

erf(u) ≜ (1/√π) ∫_{−u}^{u} e^{−t²} dt.   (5)

Theorem 1 provides an answer to Prob. 1, bounding the recovery error ε_x by the interval

[h(λ, +) ϵ_cv − σ_n², h(λ, −) ϵ_cv − σ_n²].   (6)

The difference between the upper bound and the lower bound,

2√2 λ m ϵ_cv / (√m_cv (m_cv − 2λ²)),   (7)

is roughly proportional to 1/m_cv^{3/2} for fixed λ and ϵ_cv, and the interval becomes tighter as m_cv increases.

For Prob. 2, we compute the distribution of Δϵ_cv = ϵ_cv^p − ϵ_cv^q.

Lemma 2. Let x̂^p and x̂^q be two recovered signals, ε_g^p and ε_g^q be their generalized recovery errors, and

ρ_g ≜ ⟨Δx_g^p, Δx_g^q⟩ / (‖Δx_g^p‖_2 ‖Δx_g^q‖_2).   (8)

Provided that m_cv is sufficiently large, then

Δϵ_cv = ϵ_cv^p − ϵ_cv^q ∼ N(μ, σ²),   (9)

where μ = (m_cv/m)(ε_g^p − ε_g^q) and σ² = (2 m_cv/m²)[(ε_g^p)² + (ε_g^q)² − 2ρ_g² ε_g^p ε_g^q].

Lemma 2 brings us closer to determining the probability that ϵ_cv^p > ϵ_cv^q given that ε_x^p > ε_x^q. In particular, we obtain:

Theorem 2. (Recovery error comparison) Let x̂^p and x̂^q be two recovered signals. If ε_x^p ≥ ε_x^q, it holds with probability Φ(λ) that ϵ_cv^p ≥ ϵ_cv^q, where λ is given by

λ = √( m_cv / ( 2 [ 1 + 2(1 − ρ_g²) ε_g^p ε_g^q / (ε_g^p − ε_g^q)² ] ) ),   (10)

and Φ(u) is the cumulative distribution function (CDF) of the standard normal distribution,

Φ(u) ≜ (1/√(2π)) ∫_{−∞}^{u} e^{−t²/2} dt.   (11)

Theorem 2 answers Prob. 2 by giving the probability that the comparison of the CV residuals correctly reflects the comparison of the recovery errors. This probability increases with m_cv, ρ_g², and ε_g^p/ε_g^q. In other words, as the number of CV measurements increases, as the recovered signals become more correlated, and as the difference between the recovery errors increases, this probability also increases.

4. SPARSITY- AND NOISE-ROBUST OMP

This section provides our analysis of OMP-CV. Before stating our main theorem, we first define:

Definition 2. (Ratio of unrecovered signal and noise): The ratio α^p ∈ R, defined as

α^p ≜ ‖x_{T\T^p}‖_2 / σ_n,   (12)

measures to what extent the signal x has not been recovered by x̂^p.

Next, we provide our main theorem for OMP-CV:

Theorem 3. In OMP-CV, assume that the oracle optimum is x̂^o and T ⊆ T^o. For any recovered signal x̂^p other than x̂^o:
- if T\T^p ≠ ∅, then ϵ_cv^o < ϵ_cv^p with probability Φ(λ), where

λ ≜ √( (m_cv/2)(1 − g(α^p)) );   (13)

- if T\T^p = ∅ and x̂^p is the OMP-CV output, then with probability greater than 1 − (d − k)[1 − Φ(λ_0)] we have

ε_g^p ≤ C_1 ε_g^o,   (14)

where g(α^p) is roughly proportional to 1/(α^p)², λ_0 is a constant chosen to decide the probability with which (14) holds, and C_1 depends only on λ_0 and m_cv.

Theorem 3 supports the recovery performance of OMP-CV. It divides the recovered signals into two categories. For x̂^p with T\T^p ≠ ∅, the probability [1 − Φ(λ)] decays sharply as α^p increases. Since this is the probability that ϵ_cv^p < ϵ_cv^o, it is nearly impossible for such an x̂^p to be the OMP-CV output. For example, [1 − Φ(λ)] would be less than 0.5% with α^p = 1, and drops further still with α^p = 2. As a result, the OMP-CV output recovers all indices of the support set with overwhelming probability. Further, if the OMP-CV output recovers all indices of the support set, its recovery error can be bounded in terms of ε_g^o with high probability, showing that the recovery error of the OMP-CV output is very close to that of the oracle optimum.
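Before turning to the parameter details of Theorem 3, the small helper below makes Theorems 1 and 2 concrete (an illustrative sketch under the notation above, not code from the paper; the function names are chosen here). It turns an observed CV residual into the error interval of (3)-(6) and evaluates the comparison probability Φ(λ) of (10) for two candidate reconstructions.

```python
from math import erf, sqrt

def error_interval(eps_cv, m, m_cv, sigma_n, lam):
    """Theorem 1: interval [lower, upper] for the recovery error eps_x,
    valid with probability erf(lam / sqrt(2)). Assumes lam < sqrt(m_cv / 2)."""
    h_plus = (m / m_cv) / (1 + lam * sqrt(2 / m_cv))
    h_minus = (m / m_cv) / (1 - lam * sqrt(2 / m_cv))
    prob = erf(lam / sqrt(2))
    return h_plus * eps_cv - sigma_n ** 2, h_minus * eps_cv - sigma_n ** 2, prob

def std_normal_cdf(u):
    """Phi(u) expressed through the error function."""
    return 0.5 * (1.0 + erf(u / sqrt(2)))

def comparison_probability(eps_p_g, eps_q_g, rho_g, m_cv):
    """Theorem 2: probability that eps_cv^p >= eps_cv^q when the
    generalized recovery errors satisfy eps_g^p >= eps_g^q."""
    lam = sqrt(m_cv / (2 * (1 + 2 * (1 - rho_g ** 2) * eps_p_g * eps_q_g
                            / (eps_p_g - eps_q_g) ** 2)))
    return std_normal_cdf(lam)
```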
Remark 1. The parameter details of Theorem 3 are:

g(α^p) = (β_1(α^p)² + β_2) / (β_1(α^p)² + β_2 + max((α^p)² − β_3 α^p − β_4, 0)²) ≈ β_1/(α^p)²,
C_1 = 2C_0 + 2√(C_0² + C_0),  C_0 ≜ β_5 λ_0²/(m_cv − 2λ_0²),

where the betas are determined by the RIP constant [4] of the sampling matrix. E.g., if δ_d < 0.1, the values of the betas are β_1 = 2.08, β_2 = β_3 = 0.03, β_4 = 0.02, and β_5 is a similarly determined constant. Further, if, e.g., (d − k) = 100, setting m_cv = 48 and λ_0 = 4 produces a numerical form of (14): with probability 99.7% we have

ε_g^p ≤ 1.47 ε_g^o.   (15)

Apart from the performance analysis of OMP-CV, we also note that the recovery error of OMP-CV can be estimated from its CV residual by directly applying Theorem 1.
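The 99.7% figure in (15) follows directly from the expression 1 − (d − k)[1 − Φ(λ_0)] in Theorem 3; the few lines below (an illustrative check written for this text, not from the paper) reproduce it with the stated parameters.

```python
from math import erf, sqrt

def std_normal_cdf(u):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(u / sqrt(2)))

d_minus_k, lam0 = 100, 4
prob = 1 - d_minus_k * (1 - std_normal_cdf(lam0))
print(f"{prob:.4f}")  # ~0.9968, i.e. about 99.7%
```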

[Figure 1: two panels (a) and (b) comparing simulated and theoretical probability distributions.]

Fig. 1. Validation of Lemma 1 and Lemma 2. Fig. 1(a) validates Lemma 1 by plotting the simulated probability distribution of ϵ_cv, while Fig. 1(b) validates Lemma 2 by plotting that of Δϵ_cv (red curve). The simulation results agree well with the theoretical results, which are plotted in black as a reference. The Kullback-Leibler divergences of the theoretical results from the simulation results are small for Fig. 1(a) and Fig. 1(b).

5. NUMERICAL SIMULATION

This section gives our simulation results (the code for the simulations is available online). Section 5.1 simulates the probability distributions of the random variables described in Lemma 1 and Lemma 2 to support both lemmas. Section 5.2 provides simulations of the performance of OMP-CV and OMP as a function of m_cv. The simulations demonstrate the recovery performance of OMP-CV. Throughout this section, Gaussian signals are used, where the non-zero entries are generated following the standard Gaussian distribution.

5.1. Validation of Lemma 1 and Lemma 2

As previously noted, the CLT is used to approximate the probability distribution of ϵ_cv in Lemma 1 and Δϵ_cv in Lemma 2. The simulations in this section validate this approximation. In both simulations, the parameters are set as N = 512, m = 96, m_cv = 48, and k = 50. With the recovered signals (x̂ for Lemma 1; x̂^p and x̂^q for Lemma 2) fixed, the random CV matrix along with its noise is realized 10^5 times and the probability distributions of the random variables (ϵ_cv for Lemma 1; Δϵ_cv for Lemma 2) are calculated. The experimental results, shown in Fig. 1, indicate that the simulation results agree well with the theoretical predictions. This validates our approximation and supports both lemmas.

5.2. Recovery Performance of OMP-CV as the Number of CV Measurements Varies

Given a fixed total number of measurements M, there is a trade-off between m and m_cv [17]. On the one hand, increasing m will reduce the reconstruction error. On the other hand, increasing m_cv will improve the CV estimate, and thus make the OMP-CV output closer to the oracle optimum. This simulation empirically investigates the recovery performance of OMP-CV as m_cv varies.

[Figure 2: average recovery error (dB) of OMP-CV and OMP as m_cv varies.]

Fig. 2. The trade-off between m and m_cv. Parameters are fixed as N = 1000, k = 50, σ_n² = 0.1, and M = 400. m_cv varies from 100 to 10 with step 10, while m = M − m_cv. OMP-CV outperforms OMP except when m_cv is too small. In addition, using the same number of measurements, OMP-CV has recovery performance similar to that of OMP with all M = 400 measurements used for reconstruction, with parameters appropriately set, even though prior knowledge is required for OMP.

In this experiment, we set N = 1000, k = 50, σ_n² = 0.1, and M = 400. m_cv varies from 100 to 10 with step 10 and we have m = M − m_cv. For OMP-CV, m measurements are used for reconstruction, while m_cv measurements are used for CV. For OMP, m measurements are used for reconstruction and the termination is based on the residual with the accurate noise level given. In addition, the recovery performance where all M measurements are used for reconstruction using OMP is given for comparison. We average 1000 repetitions of the experiment for each parameter setting. The experimental results, plotted in Fig. 2, show that the best performance of OMP-CV lies in the region where m_cv is neither too small nor too large. OMP-CV outperforms OMP except when m_cv is very small, indicating that CV-based termination is better than residual-based termination, even if the latter uses an exact noise level. Additionally, note that with the same total number of measurements at hand (M measurements for both OMP-CV and OMP), OMP-CV can achieve recovery performance similar to that of OMP with its parameters appropriately set, even though prior knowledge is required for OMP. In this sense, OMP-CV outperforms OMP.
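A skeleton of this trade-off experiment (an illustrative reconstruction of the described setup, not the authors' released code; it reuses the omp_cv sketch from Section 2.2, and the signal and matrix generation follow only what the text states) could look as follows.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, M, sigma_n2 = 1000, 50, 400, 0.1
sigma_n = np.sqrt(sigma_n2)

def gaussian_sparse_signal(N, k):
    """k-sparse signal with standard Gaussian non-zero entries."""
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x[support] = rng.normal(size=k)
    return x

for m_cv in range(100, 9, -10):            # m_cv = 100, 90, ..., 10
    m = M - m_cv
    x = gaussian_sparse_signal(N, k)
    Phi = rng.normal(scale=1/np.sqrt(m), size=(M, N))      # entries ~ N(0, 1/m)
    noise = sigma_n * rng.normal(scale=1/np.sqrt(m), size=M)
    y = Phi @ x + noise
    A, A_cv = Phi[:m], Phi[m:]
    y_r, y_cv = y[:m], y[m:]
    x_hat, _ = omp_cv(A, y_r, A_cv, y_cv, d=3 * k)   # omp_cv from the earlier sketch
    err = np.sum((x - x_hat) ** 2)
    print(m_cv, 10 * np.log10(err))          # recovery error in dB
    # (the paper averages 1000 such trials per parameter setting)
```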
6. CONCLUSION

This paper presents a theoretical study of CV in compressive sensing, providing an analysis of the general CV problems as well as an analysis of the OMP-CV algorithm. As a highly practical algorithm, OMP-CV can reconstruct the signal without prior knowledge such as the sparsity or noise level; its performance is supported in this paper both theoretically and empirically. Additionally, our results on the general CV problems can also be used to apply CV to other CS-based reconstruction algorithms. CV sacrifices a small number of measurements to estimate the reconstruction error. In a nutshell, this technique makes it possible for greedy algorithms to reconstruct the signal without prior knowledge such as the sparsity or noise level. In future work, we would like to extend our analysis of CV by studying its use in other greedy sparse recovery algorithms.

7. REFERENCES

[1] Jinye Zhang, Laming Chen, Petros T. Boufounos, and Yuantao Gu, "Cross validation in compressive sensing and its application on the OMP-CV algorithm," submitted to IEEE for possible publication.
[2] David L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, 2006.
[3] Emmanuel J. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians: Madrid, August 22-30, 2006: invited lectures, 2006.
[4] Emmanuel J. Candès and Terence Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, 2005.
[5] Emmanuel Candès, Mark Rudelson, Terence Tao, and Roman Vershynin, "Error correction via linear programming," in IEEE Symposium on Foundations of Computer Science (FOCS), 2005.
[6] Emmanuel J. Candès, Justin Romberg, and Terence Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, 2006.
[7] Joel A. Tropp and Anna C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, 2007.
[8] Deanna Needell and Roman Vershynin, "Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit," Foundations of Computational Mathematics, vol. 9, no. 3, 2009.
[9] David L. Donoho, Yaakov Tsaig, Iddo Drori, and J.-L. Starck, "Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 58, no. 2, 2012.
[10] Wei Dai and Olgica Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, 2009.
[11] Deanna Needell and Joel A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Applied and Computational Harmonic Analysis, vol. 26, no. 3, 2009.
[12] Pierre A. Devijver and Josef Kittler, Pattern Recognition: A Statistical Approach, Prentice-Hall International, Englewood Cliffs, NJ, 1982.
[13] Seymour Geisser, Predictive Inference: An Introduction, vol. 55, CRC Press, 1993.
[14] Ron Kohavi et al., "A study of cross-validation and bootstrap for accuracy estimation and model selection," in IJCAI, 1995, vol. 14.
[15] Richard R. Picard and R. Dennis Cook, "Cross-validation of regression models," Journal of the American Statistical Association, vol. 79, no. 387, 1984.
[16] Jun Shao, "Linear model selection by cross-validation," Journal of the American Statistical Association, vol. 88, no. 422, 1993.
[17] Petros Boufounos, Marco F. Duarte, and Richard G. Baraniuk, "Sparse signal reconstruction from noisy compressive measurements using cross validation," in IEEE/SP 14th Workshop on Statistical Signal Processing (SSP), 2007.
[18] Rachel Ward, "Compressed sensing with cross validation," IEEE Transactions on Information Theory, vol. 55, no. 12, 2009.
