Expectation-Maximization Bernoulli-Gaussian Approximate Message Passing
Jeremy Vila and Philip Schniter
Dept. of ECE, The Ohio State University, Columbus, OH

Abstract—The approximate message passing (AMP) algorithm originally proposed by Donoho, Maleki, and Montanari yields a computationally attractive solution to the usual $\ell_1$-regularized least-squares problem faced in compressed sensing, whose solution is known to be robust to the signal distribution. When the signal is drawn i.i.d. from a marginal distribution that is not least-favorable, better performance can be attained using a Bayesian variation of AMP. The latter, however, assumes that the distribution is perfectly known. In this paper, we navigate the space between these two extremes by modeling the signal as i.i.d. Bernoulli-Gaussian (BG) with unknown prior sparsity, mean, and variance, and the noise as zero-mean Gaussian with unknown variance, and we simultaneously reconstruct the signal while learning the prior signal and noise parameters. To accomplish this task, we embed the BG-AMP algorithm within an expectation-maximization (EM) framework. Numerical experiments confirm the excellent performance of our proposed EM-BG-AMP on a range of signal types.

I. INTRODUCTION

We consider the problem of recovering a signal $x \in \mathbb{R}^N$ from noisy linear measurements $y = Ax + w \in \mathbb{R}^M$ in the undersampled regime where $M < N$. Roughly speaking, when $x$ is sufficiently sparse (or compressible) and when the matrix $A$ is sufficiently well-conditioned, accurate signal recovery is possible with polynomial-complexity algorithms. One of the best-known approaches to this problem is Lasso [1], which minimizes the convex criterion

  $\hat{x}_{\text{lasso}} = \arg\min_{\hat{x}} \|y - A\hat{x}\|_2^2 + \lambda_{\text{lasso}} \|\hat{x}\|_1$,   (1)

with $\lambda_{\text{lasso}}$ a tuning parameter. When $A$ is constructed from i.i.d. Gaussian entries, the so-called phase transition curve (PTC) gives a sharp characterization of Lasso performance for $K$-sparse $x$ in the large-system limit, i.e., as $K, M, N \to \infty$ with fixed undersampling ratio $M/N$ and sparsity ratio $K/M$ [2].
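As a concrete point of reference for criterion (1), here is a minimal iterative soft-thresholding (ISTA) sketch in Python. This is an illustration of one standard way to minimize the Lasso objective, not an algorithm from this paper; all function names are ours.

```python
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_lasso(A, y, lam, n_iters=1000):
    """Minimize ||y - A x||_2^2 + lam*||x||_1 by iterative soft thresholding."""
    # Step size 1/L, with L = 2*sigma_max(A)^2 the gradient Lipschitz constant.
    L = 2.0 * np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = 2.0 * A.T @ (A @ x - y)              # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)   # proximal-gradient step
    return x
```

On a small noiseless sparse instance, this recovers the signal up to the (small) Lasso shrinkage bias.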
The Lasso PTC is illustrated in Figs. 1-3. For noiseless observations, the PTC partitions the $M/N$-versus-$K/M$ plane into two regions: one where Lasso reconstructs the signal perfectly with high probability, and one where it does not. For noisy observations, the same curve indicates whether the noise sensitivity (i.e., the ratio of estimation-error power to measurement-noise power under the worst-case signal distribution) of Lasso remains bounded [3]. One remarkable feature of the noiseless Lasso PTC is that it is invariant to the signal distribution. In other words, if we adopt a probabilistic viewpoint where the elements of $x$ are drawn i.i.d. from the marginal pdf $p_X(x) = \lambda f(x) + (1-\lambda)\delta(x)$, for Dirac delta $\delta(x)$, active-coefficient pdf $f(x)$, and $\lambda = K/N$, then the Lasso PTC is not affected by $f$. This PTC invariance implies that Lasso is robust, but that it cannot benefit from the restriction of $x$ to an easier signal class. For example, if the coefficients in $x$ are known to be non-negative, then there exists a polynomial-complexity algorithm whose PTC is better than that of Lasso [2]. Although, in some applications, robustness to worst-case signals may be the dominant concern, in many other applications the goal is to maximize average-case performance over an anticipated signal class. When the signal $x$ is drawn i.i.d. from an arbitrary known marginal distribution $p_X$ and the noise $w$ is i.i.d. Gaussian with known variance, there exist very-low-complexity iterative Bayesian algorithms to generate approximately maximum a posteriori (MAP) and minimum mean-square error (MMSE) signal estimates, notably the Bayesian version of Donoho, Maleki, and Montanari's approximate message passing (AMP) algorithm [4].

(This work has been supported in part by an NSF I/UCRC grant, by NSF grant CCF-1018368, and by DARPA/ONR grant N66001-10-1-4090. Portions of this work were presented in a poster at the Duke Workshop on Sensing and Analysis of High-Dimensional Data, July 2011.)
AMP is formulated from a loopy-belief-propagation perspective, leveraging central-limit-theorem approximations that hold in the large-system limit for suitably dense $A$, and it admits a rigorous analysis in the large-system limit [5]. Meanwhile, AMP's complexity is remarkably low, dominated by one application of $A$ and $A^T$ per iteration (which is especially cheap if $A$ is an FFT or another fast operator), with typically fewer than 50 iterations to convergence. More recently, a generalized AMP (GAMP) algorithm [6] was proposed that relaxes the requirements on the noise distribution and on the sensing matrix $A$. See Table I for a summary. Given that it is rare to know the signal and noise distributions perfectly, we take the approach of assuming signal and noise distributions that are known up to some statistical parameters, and then learning those unknown parameters while simultaneously recovering the signal. Examples of this empirical-Bayesian approach include several algorithms based on Tipping's relevance vector machine [7]-[9]. Although the average-case performance of those algorithms is often quite good (depending on the signal class, of course), their complexities are generally much larger than that of GAMP. In this paper, we propose a GAMP-based empirical-Bayesian algorithm. In particular, we treat the signal as Bernoulli-Gaussian (BG) with unknown sparsity, mean, and variance, and the noise as Gaussian with unknown variance, and then we learn these statistical parameters using
an expectation-maximization (EM) approach [10] that calls BG-GAMP once per EM iteration.

II. BERNOULLI-GAUSSIAN GAMP

A core component of our proposed method is the Bernoulli-Gaussian (BG) GAMP algorithm, which we now review. For BG-GAMP, the signal $x = [x_1, \ldots, x_N]^T$ is assumed to be i.i.d. BG, i.e., to have marginal pdf

  $p_X(x; \lambda, \theta, \phi) = (1-\lambda)\,\delta(x) + \lambda\,\mathcal{N}(x; \theta, \phi)$,   (2)

where $\delta(\cdot)$ denotes the Dirac delta, $\lambda$ the sparsity rate, $\theta$ the active-coefficient mean, and $\phi$ the active-coefficient variance. The noise $w$ is assumed to be independent of $x$ and i.i.d. zero-mean Gaussian with variance $\psi$:

  $p_W(w; \psi) = \mathcal{N}(w; 0, \psi)$.   (3)

In our approach, the parameters $q = [\lambda, \theta, \phi, \psi]$ that define these prior distributions are treated as deterministic unknowns and learned through the EM algorithm, as detailed in Section III. Although above and in the sequel we assume real-valued Gaussians, all expressions can be converted to the circular-complex-Gaussian case by replacing all $\mathcal{N}$ with $\mathcal{CN}$ and removing all the 2s.

GAMP can handle an arbitrary probabilistic relationship $p_{Y|Z}(y_m|z_m)$ between the observed output $y_m$ and the noiseless output $z_m = a_m^T x$, where $a_m^T$ is the $m$th row of $A$. Our additive Gaussian noise assumption implies $p_{Y|Z}(y|z) = \mathcal{N}(y; z, \psi)$. To complete our description, we need only to specify $g_{\text{in}}$, $g'_{\text{in}}$, $g_{\text{out}}$, and $g'_{\text{out}}$ in Table I. Using straightforward manipulations, our $p_{Y|Z}$ yields [6]

  $g_{\text{out}}(y, \hat{z}, \mu^z; q) = \dfrac{y - \hat{z}}{\mu^z + \psi}$   (4)
  $g'_{\text{out}}(y, \hat{z}, \mu^z; q) = \dfrac{-1}{\mu^z + \psi}$,   (5)

and our BG signal prior (2) yields

  $g_{\text{in}}(\hat{r}, \mu^r; q) = \pi(\hat{r}, \mu^r; q)\,\gamma(\hat{r}, \mu^r; q)$   (6)
  $\mu^r g'_{\text{in}}(\hat{r}, \mu^r; q) = \pi(\hat{r}, \mu^r; q)\big(\nu(\hat{r}, \mu^r; q) + \gamma(\hat{r}, \mu^r; q)^2\big) - \pi(\hat{r}, \mu^r; q)^2\,\gamma(\hat{r}, \mu^r; q)^2$,   (7)

where

  $\pi(\hat{r}, \mu^r; q) = \left[1 + \dfrac{1-\lambda}{\lambda}\cdot\dfrac{\mathcal{N}(\hat{r}; 0, \mu^r)}{\mathcal{N}(\hat{r}; \theta, \phi + \mu^r)}\right]^{-1}$   (8)
  $\gamma(\hat{r}, \mu^r; q) = \dfrac{\hat{r}/\mu^r + \theta/\phi}{1/\mu^r + 1/\phi}$   (9)
  $\nu(\hat{r}, \mu^r; q) = \dfrac{1}{1/\mu^r + 1/\phi}$.   (10)
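To make these quantities concrete, here is a minimal NumPy sketch of BG-GAMP: the scalar functions (4)-(10) plugged into the GAMP recursion of Table I. This is our own illustrative re-implementation under our own naming, not the authors' code; it omits the damping and safeguarding a production implementation would use.

```python
import numpy as np

def bg_gamp(A, y, lam, theta, phi, psi, n_iters=50):
    """Sketch of GAMP (steps R1-R9 of Table I) with the BG prior (2)
    and AWGN likelihood (3); variable names mirror the table."""
    M, N = A.shape
    A2 = A ** 2
    npdf = lambda x, m, v: np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    x_hat = np.full(N, lam * theta)                            # (I1): prior mean
    mu_x = np.full(N, lam * (phi + theta**2) - (lam * theta)**2)  # (I2): prior variance
    u_hat = np.zeros(M)                                        # (I3)
    for _ in range(n_iters):
        z_hat = A @ x_hat                                      # (R1)
        mu_z = A2 @ mu_x                                       # (R2)
        p_hat = z_hat - mu_z * u_hat                           # (R3)
        u_hat = (y - p_hat) / (mu_z + psi)                     # (R4), g_out of (4)
        mu_u = 1.0 / (mu_z + psi)                              # (R5), -g_out' of (5)
        mu_r = 1.0 / (A2.T @ mu_u)                             # (R6)
        r_hat = x_hat + mu_r * (A.T @ u_hat)                   # (R7)
        pi = 1.0 / (1.0 + (1 - lam) / lam
                    * npdf(r_hat, 0.0, mu_r) / npdf(r_hat, theta, phi + mu_r))  # (8)
        gamma = (r_hat / mu_r + theta / phi) / (1 / mu_r + 1 / phi)             # (9)
        nu = 1.0 / (1 / mu_r + 1 / phi)                                         # (10)
        x_hat = pi * gamma                                     # (R9) via (6)
        mu_x = pi * (nu + gamma ** 2) - x_hat ** 2             # (R8) via (7)
    return x_hat, pi
```

On an i.i.d.-Gaussian $A$ at moderate undersampling, the recursion typically converges in well under 50 iterations, and `pi` returns the posterior support probabilities of (13).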
Table I implies that BG-GAMP's marginal posteriors are

  $p(x_n | y; q) = C_n^{-1}\, p_X(x_n; q)\, \mathcal{N}(x_n; \hat{r}_n, \mu^r_n)$   (11)
  $\qquad\quad\;\; = C_n^{-1}\big[(1-\lambda)\,\delta(x_n) + \lambda\,\mathcal{N}(x_n; \theta, \phi)\big]\,\mathcal{N}(x_n; \hat{r}_n, \mu^r_n)$   (12)

for the scaling factor $C_n = \int_x p_X(x; q)\,\mathcal{N}(x; \hat{r}_n, \mu^r_n)$. From (12), it is straightforward to show that BG-GAMP yields the following posterior support probabilities:

  $\Pr\{x_n \neq 0 \mid y; q\} = \pi(\hat{r}_n, \mu^r_n; q)$.   (13)

TABLE I: THE GAMP ALGORITHM [6]

definitions:
  $p_{Z|Y}(z|y; \hat{z}, \mu^z) = \dfrac{p_{Y|Z}(y|z)\,\mathcal{N}(z; \hat{z}, \mu^z)}{\int_{z'} p_{Y|Z}(y|z')\,\mathcal{N}(z'; \hat{z}, \mu^z)}$   (D1)
  $g_{\text{out}}(y, \hat{z}, \mu^z) = \frac{1}{\mu^z}\big(\mathrm{E}_{Z|Y}\{z|y; \hat{z}, \mu^z\} - \hat{z}\big)$   (D2)
  $g'_{\text{out}}(y, \hat{z}, \mu^z) = \frac{1}{\mu^z}\big(\frac{1}{\mu^z}\mathrm{var}_{Z|Y}\{z|y; \hat{z}, \mu^z\} - 1\big)$   (D3)
  $p_{X|Y}(x|y; \hat{r}, \mu^r) = \dfrac{p_X(x)\,\mathcal{N}(x; \hat{r}, \mu^r)}{\int_{x'} p_X(x')\,\mathcal{N}(x'; \hat{r}, \mu^r)}$   (D4)
  $g_{\text{in}}(\hat{r}, \mu^r) = \int_x x\, p_{X|Y}(x|y; \hat{r}, \mu^r)$   (D5)
  $\mu^r g'_{\text{in}}(\hat{r}, \mu^r) = \int_x |x - g_{\text{in}}(\hat{r}, \mu^r)|^2\, p_{X|Y}(x|y; \hat{r}, \mu^r)$   (D6)
initialize:
  $\forall n:\ \hat{x}_n(1) = \int_x x\, p_X(x)$   (I1)
  $\forall n:\ \mu^x_n(1) = \int_x |x - \hat{x}_n(1)|^2\, p_X(x)$   (I2)
  $\forall m:\ \hat{u}_m(0) = 0$   (I3)
for $t = 1, 2, 3, \ldots$
  $\forall m:\ \hat{z}_m(t) = \sum_n A_{mn}\,\hat{x}_n(t)$   (R1)
  $\forall m:\ \mu^z_m(t) = \sum_n |A_{mn}|^2\,\mu^x_n(t)$   (R2)
  $\forall m:\ \hat{p}_m(t) = \hat{z}_m(t) - \mu^z_m(t)\,\hat{u}_m(t-1)$   (R3)
  $\forall m:\ \hat{u}_m(t) = g_{\text{out}}(y_m, \hat{p}_m(t), \mu^z_m(t))$   (R4)
  $\forall m:\ \mu^u_m(t) = -g'_{\text{out}}(y_m, \hat{p}_m(t), \mu^z_m(t))$   (R5)
  $\forall n:\ \mu^r_n(t) = \big(\sum_m |A_{mn}|^2\,\mu^u_m(t)\big)^{-1}$   (R6)
  $\forall n:\ \hat{r}_n(t) = \hat{x}_n(t) + \mu^r_n(t)\sum_m A_{mn}\,\hat{u}_m(t)$   (R7)
  $\forall n:\ \mu^x_n(t+1) = \mu^r_n(t)\, g'_{\text{in}}(\hat{r}_n(t), \mu^r_n(t))$   (R8)
  $\forall n:\ \hat{x}_n(t+1) = g_{\text{in}}(\hat{r}_n(t), \mu^r_n(t))$   (R9)
end

III. EM LEARNING OF THE PRIOR PARAMETERS $q$

We use the expectation-maximization (EM) algorithm [10] to learn the statistical parameters $q = [\lambda, \theta, \phi, \psi]$. The EM algorithm is an iterative technique that increases the likelihood at each iteration, guaranteeing convergence to a local maximum of the likelihood $p(y; q)$. In our case, we choose the hidden data to be $\{x, w\}$, which yields the EM update

  $q^{i+1} = \arg\max_q \mathrm{E}\{\ln p(x, w; q) \mid y; q^i\}$,   (14)

where $i$ denotes the EM iteration and $\mathrm{E}\{\cdot \mid y; q^i\}$ denotes expectation conditioned on the observations $y$ under the parameter hypothesis $q^i$. Moreover, we use the well-established incremental updating schedule [11], where $q$ is updated one element at a time while keeping the other elements fixed.

A. EM update for $\lambda$

We now derive the EM update for $\lambda$ given previous parameters $q^i = [\lambda^i, \theta^i, \phi^i, \psi^i]$. Because $x$ is a priori independent of $w$ and i.i.d., the joint pdf $p(x, w; q)$ decouples into $C \prod_n p_X(x_n; \lambda, \theta, \phi)$ for a $\lambda$-invariant constant $C$, and so

  $\lambda^{i+1} = \arg\max_{\lambda \in (0,1)} \sum_n \mathrm{E}\{\ln p_X(x_n; \lambda, \theta^i, \phi^i) \mid y; q^i\}$.   (15)
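The closed-form posterior mean implied by (12), i.e., $\pi\gamma$ from (6), (8), (9), can be cross-checked by brute-force quadrature. The sketch below is our own illustration: it integrates the continuous part of (12) on a grid and handles the Dirac mass at zero analytically.

```python
import numpy as np

def npdf(x, m, v):
    """Gaussian pdf N(x; m, v)."""
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def bg_posterior_mean_closed(r_hat, mu_r, lam, theta, phi):
    """Closed-form posterior mean (6): pi * gamma, via (8) and (9)."""
    pi = 1.0 / (1.0 + (1 - lam) / lam
                * npdf(r_hat, 0.0, mu_r) / npdf(r_hat, theta, phi + mu_r))
    gamma = (r_hat / mu_r + theta / phi) / (1.0 / mu_r + 1.0 / phi)
    return pi * gamma

def bg_posterior_mean_numeric(r_hat, mu_r, lam, theta, phi, grid=200001, lim=20.0):
    """Posterior mean by direct quadrature of (12). The Dirac mass at zero
    contributes only to the normalization C_n, not to the mean integral."""
    x = np.linspace(-lim, lim, grid)
    dx = x[1] - x[0]
    cont = lam * npdf(x, theta, phi) * npdf(x, r_hat, mu_r)   # continuous part of (12)
    mass_spike = (1 - lam) * npdf(0.0, r_hat, mu_r)           # delta sifts N(.;r,mu) at 0
    C = cont.sum() * dx + mass_spike                          # scaling factor C_n
    return (x * cont).sum() * dx / C
```

The two routes agree to quadrature precision, which is a useful sanity check when implementing (6)-(10).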
The maximizing value of $\lambda$ in (15) is necessarily a value of $\lambda$ that zeroes the derivative, i.e., that satisfies

  $\sum_n \int_{x_n} p(x_n|y; q^i)\, \frac{\partial}{\partial\lambda} \ln p_X(x_n; \lambda, \theta^i, \phi^i) = 0$.   (16)
For the $p_X(x_n; \lambda, \theta, \phi)$ given in (2), it is readily seen that

  $\frac{\partial}{\partial\lambda} \ln p_X(x_n; \lambda, \theta^i, \phi^i) = \frac{\mathcal{N}(x_n; \theta^i, \phi^i) - \delta(x_n)}{p_X(x_n; \lambda, \theta^i, \phi^i)} = \begin{cases} \frac{1}{\lambda} & x_n \neq 0 \\ \frac{-1}{1-\lambda} & x_n = 0. \end{cases}$   (17)

Plugging (17) and (12) into (16), it becomes evident that the neighborhood around the point $x_n = 0$ should be treated differently than the remainder of $\mathbb{R}$. Thus, we define the closed ball $B_\epsilon = [-\epsilon, \epsilon]$ and its complement $\bar{B}_\epsilon = \mathbb{R} \setminus B_\epsilon$, and note that, in the limit $\epsilon \to 0$, the following is equivalent to (16):

  $\frac{1}{\lambda} \sum_n \int_{x_n \in \bar{B}_\epsilon} p(x_n|y; q^i) = \frac{1}{1-\lambda} \sum_n \int_{x_n \in B_\epsilon} p(x_n|y; q^i)$,   (18)

where, as $\epsilon \to 0$, the left integral converges to $\pi(\hat{r}_n, \mu^r_n; q^i)$ and the right to $1 - \pi(\hat{r}_n, \mu^r_n; q^i)$. To verify that the left integral converges to the $\pi(\hat{r}_n, \mu^r_n; q^i)$ defined in (8), it suffices to plug (12) into (18) and apply the Gaussian-pdf multiplication rule; meanwhile, for any $\epsilon$, the right integral must equal one minus the left. Finally, the EM update for $\lambda$ is the unique value satisfying (18) as $\epsilon \to 0$, i.e.,

  $\lambda^{i+1} = \frac{1}{N} \sum_n \pi(\hat{r}_n, \mu^r_n; q^i)$.   (19)

Conveniently, $\{\pi(\hat{r}_n, \mu^r_n; q^i)\}$ are GAMP outputs.

(Footnote: the Gaussian-pdf multiplication rule is $\mathcal{N}(x; a, A)\,\mathcal{N}(x; b, B) = \mathcal{N}\big(x; \frac{a/A + b/B}{1/A + 1/B}, \frac{1}{1/A + 1/B}\big)\,\mathcal{N}(0; a-b, A+B)$.)

B. EM update for $\theta$

Similar to (15), the EM update (14) for $\theta$ can be written as

  $\theta^{i+1} = \arg\max_{\theta \in \mathbb{R}} \sum_n \mathrm{E}\{\ln p_X(x_n; \lambda^i, \theta, \phi^i) \mid y; q^i\}$.   (20)

The maximizing value of $\theta$ in (20) is necessarily a value of $\theta$ that zeroes the derivative, i.e., that satisfies

  $\sum_n \int_{x_n} p(x_n|y; q^i)\, \frac{\partial}{\partial\theta} \ln p_X(x_n; \lambda^i, \theta, \phi^i) = 0$.   (21)

For the $p_X(x_n; \lambda, \theta, \phi)$ given in (2), it is readily seen that

  $\frac{\partial}{\partial\theta} \ln p_X(x_n; \lambda^i, \theta, \phi^i) = \frac{x_n - \theta}{\phi^i}\cdot\frac{\lambda^i\,\mathcal{N}(x_n; \theta, \phi^i)}{p_X(x_n; \lambda^i, \theta, \phi^i)} = \begin{cases} \frac{x_n - \theta}{\phi^i} & x_n \neq 0 \\ 0 & x_n = 0. \end{cases}$   (22)

Splitting the domain of integration in (21) into $B_\epsilon$ and $\bar{B}_\epsilon$, and then plugging in (22), we find that the following is equivalent to (21) in the limit of $\epsilon \to 0$:

  $\theta \sum_n \int_{x_n \in \bar{B}_\epsilon} p(x_n|y; q^i) = \sum_n \int_{x_n \in \bar{B}_\epsilon} x_n\, p(x_n|y; q^i)$.   (23)

(Footnote: if the user has good reason to believe that the true signal pdf is symmetric around zero, then they may consider fixing $\theta = 0$ and avoiding this EM update.)

The unique value of $\theta$ satisfying (23) as $\epsilon \to 0$ is then

  $\theta^{i+1} = \frac{\lim_{\epsilon \to 0} \sum_n \int_{x_n \in \bar{B}_\epsilon} x_n\, p(x_n|y; q^i)}{\lim_{\epsilon \to 0} \sum_n \int_{x_n \in \bar{B}_\epsilon} p(x_n|y; q^i)}$   (24)
  $\qquad = \frac{1}{N\lambda^{i+1}} \sum_n \pi(\hat{r}_n, \mu^r_n; q^i)\,\gamma(\hat{r}_n, \mu^r_n; q^i)$   (25)

for the GAMP outputs $\{\gamma(\hat{r}_n, \mu^r_n; q^i)\}$ defined in (9). The equality in (25) can be verified by plugging the GAMP posterior expression from (12) into (24) and simplifying via the Gaussian-pdf multiplication rule.

C. EM update for $\phi$

Similar to (15), the EM update for $\phi$ can be written as

  $\phi^{i+1} = \arg\max_{\phi > 0} \sum_n \mathrm{E}\{\ln p_X(x_n; \lambda^i, \theta^i, \phi) \mid y; q^i\}$.   (26)
The maximizing value of $\phi$ in (26) is necessarily a value of $\phi$ that zeroes the derivative, i.e., that satisfies

  $\sum_n \int_{x_n} p(x_n|y; q^i)\, \frac{\partial}{\partial\phi} \ln p_X(x_n; \lambda^i, \theta^i, \phi) = 0$.   (27)

For the $p_X(x_n; \lambda, \theta, \phi)$ given in (2), it is readily seen that

  $\frac{\partial}{\partial\phi} \ln p_X(x_n; \lambda^i, \theta^i, \phi) = \Big(\frac{(x_n - \theta^i)^2}{2\phi^2} - \frac{1}{2\phi}\Big)\frac{\lambda^i\,\mathcal{N}(x_n; \theta^i, \phi)}{p_X(x_n; \lambda^i, \theta^i, \phi)}$   (28)
  $\qquad = \begin{cases} \frac{(x_n - \theta^i)^2}{2\phi^2} - \frac{1}{2\phi} & x_n \neq 0 \\ 0 & x_n = 0. \end{cases}$   (29)

Splitting the domain of integration in (27) into $B_\epsilon$ and $\bar{B}_\epsilon$, and then plugging in (29), we find that the following is equivalent to (27) in the limit of $\epsilon \to 0$:

  $\sum_n \int_{x_n \in \bar{B}_\epsilon} \frac{(x_n - \theta^i)^2}{\phi}\, p(x_n|y; q^i) = \sum_n \int_{x_n \in \bar{B}_\epsilon} p(x_n|y; q^i)$.   (30)

The unique value of $\phi$ satisfying (30) as $\epsilon \to 0$ is then

  $\phi^{i+1} = \frac{\lim_{\epsilon \to 0} \sum_n \int_{x_n \in \bar{B}_\epsilon} (x_n - \theta^i)^2\, p(x_n|y; q^i)}{\lim_{\epsilon \to 0} \sum_n \int_{x_n \in \bar{B}_\epsilon} p(x_n|y; q^i)}$   (31)
  $\qquad = \frac{1}{N\lambda^{i+1}} \sum_n \pi(\hat{r}_n, \mu^r_n; q^i)\big(|\theta^i - \gamma(\hat{r}_n, \mu^r_n; q^i)|^2 + \nu(\hat{r}_n, \mu^r_n; q^i)\big)$   (32)

for the GAMP outputs $\{\nu(\hat{r}_n, \mu^r_n; q^i)\}$ defined in (10). The equality in (32) can be verified by plugging the GAMP posterior expression from (12) into (31) and simplifying using the Gaussian-pdf multiplication rule.
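Given the per-coefficient GAMP outputs $\pi$, $\gamma$, $\nu$, the updates (19), (25), and (32) are simple averages. A minimal sketch (our own naming; `theta_prev` is the previous hypothesis $\theta^i$ used in (32)):

```python
import numpy as np

def em_update_signal_params(pi, gamma, nu, theta_prev):
    """EM updates (19), (25), (32) from GAMP's per-coefficient outputs."""
    N = pi.size
    lam_new = pi.mean()                                               # (19)
    theta_new = np.sum(pi * gamma) / (N * lam_new)                    # (25)
    phi_new = np.sum(pi * ((theta_prev - gamma) ** 2 + nu)) / (N * lam_new)  # (32)
    return lam_new, theta_new, phi_new
```

Under the incremental schedule, one would substitute the freshly updated $\theta^{i+1}$ for `theta_prev` when the $\theta$ update precedes the $\phi$ update.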
Fig. 1. Empirical noiseless PTCs for Bernoulli-Gaussian signals and the theoretical Lasso PTC.
Fig. 2. Empirical noiseless PTCs for Bernoulli-Rademacher signals and the theoretical Lasso PTC.
Fig. 3. Empirical noiseless PTCs for Bernoulli signals and the theoretical Lasso PTC.

D. EM update for $\psi$

Finally, we derive the EM update for $\psi$ given previous parameters $q^i$. Because $w$ is a priori independent of $x$ and i.i.d., the joint pdf $p(x, w; q)$ decouples into $C \prod_{m=1}^{M} p_W(w_m; \psi)$ for a $\psi$-invariant constant $C$, and so

  $\psi^{i+1} = \arg\max_{\psi > 0} \sum_m \mathrm{E}\{\ln p_W(w_m; \psi) \mid y; q^i\}$.   (33)

The maximizing value of $\psi$ in (33) is necessarily a value of $\psi$ that zeroes the derivative, i.e., that satisfies

  $\sum_m \int_{w_m} p(w_m|y; q^i)\, \frac{\partial}{\partial\psi} \ln p_W(w_m; \psi) = 0$.   (34)

Because $p_W(w_m; \psi) = \mathcal{N}(w_m; 0, \psi)$, it is readily seen that

  $\frac{\partial}{\partial\psi} \ln p_W(w_m; \psi) = \frac{w_m^2}{2\psi^2} - \frac{1}{2\psi}$,   (35)

which, when plugged into (34), yields the unique solution

  $\psi^{i+1} = \frac{1}{M} \sum_m \int_{w_m} w_m^2\, p(w_m|y; q^i)$.   (36)

Since $w_m = y_m - z_m$ for $z_m = a_m^T x$, we can also write

  $\psi^{i+1} = \frac{1}{M} \sum_m \int_{z_m} |y_m - z_m|^2\, p(z_m|y; q^i)$   (37)
  $\qquad = \frac{1}{M} \sum_m \big(|y_m - \hat{z}_m|^2 + \mu^z_m\big)$,   (38)

where $\hat{z}_m$ and $\mu^z_m$, the posterior mean and variance of $z_m$, are available from GAMP (see steps R1-R2 in Table I).

E. EM Initialization

Since the EM algorithm converges only to a local maximum of the likelihood function, proper initialization is essential. We initialize the sparsity as $\lambda^0 = \frac{M}{N}\rho_{SE}(\frac{M}{N})$, where $\rho_{SE}(\frac{M}{N})$ is the sparsity ratio $\frac{K}{M}$ achieved by the Lasso PTC [2]:

  $\rho_{SE}\big(\tfrac{M}{N}\big) = \max_{a \geq 0} \frac{1 - \frac{2N}{M}\big[(1+a^2)\Phi(-a) - a\varphi(a)\big]}{1 + a^2 - 2\big[(1+a^2)\Phi(-a) - a\varphi(a)\big]}$,   (39)

with $\Phi$ and $\varphi$ the standard-normal cdf and pdf.

(Footnote: empirically, we have observed that the EM update for $\psi$ works better with the $\mu^z_m$ term in (38) weighted by $N/M$ and suppressed until later EM iterations. We conjecture that this is due to bias in the GAMP variance estimates $\mu^z_m$.)

We initialize the active mean as $\theta^0 = 0$, which effectively assumes that the active pdf $f$ is symmetric. Finally, noting that $\mathrm{E}\{\|y\|_2^2\} = (\mathrm{SNR}+1)M\psi$ for $\mathrm{SNR} = \mathrm{tr}(A^T A)\lambda\phi/(M\psi)$, we see that the variances $\phi$ and $\psi$ can be initialized based on $\|y\|_2^2$ and a given hypothesis $\mathrm{SNR}^0$. In particular,

  $\psi^0 = \frac{\|y\|_2^2}{(\mathrm{SNR}^0 + 1)M}, \qquad \phi^0 = \frac{\|y\|_2^2 - M\psi^0}{\mathrm{tr}(A^T A)\,\lambda^0}$,   (40)

where, without other knowledge, we suggest $\mathrm{SNR}^0 = 100$.
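The noise update (38) and the initialization (39)-(40) can be sketched as follows. This is our own illustration: the maximization in (39) is done by a crude grid search over $a$, which a careful implementation would replace with a proper one-dimensional optimizer.

```python
import numpy as np
from math import erf, exp, sqrt, pi as PI

def em_update_psi(y, z_hat, mu_z):
    """EM noise-variance update (38) from the GAMP outputs z_hat, mu_z."""
    return np.mean((y - z_hat) ** 2 + mu_z)

def rho_se(delta, grid=10000):
    """Lasso-PTC sparsity ratio (39) at undersampling ratio delta = M/N,
    maximized over a >= 0 by grid search (illustrative shortcut)."""
    Phi = lambda a: 0.5 * (1.0 + erf(a / sqrt(2.0)))    # standard-normal cdf
    phi = lambda a: exp(-a * a / 2.0) / sqrt(2.0 * PI)  # standard-normal pdf
    best = 0.0
    for i in range(1, grid):
        a = 10.0 * i / grid
        t = (1.0 + a * a) * Phi(-a) - a * phi(a)
        best = max(best, (1.0 - (2.0 / delta) * t) / (1.0 + a * a - 2.0 * t))
    return best

def em_init(A, y, M, N, snr0=100.0):
    """Initialization (39)-(40): lambda^0 from the Lasso PTC, theta^0 = 0,
    and psi^0, phi^0 from ||y||^2 under the SNR hypothesis snr0."""
    lam0 = (M / N) * rho_se(M / N)
    psi0 = np.sum(y ** 2) / ((snr0 + 1.0) * M)
    phi0 = (np.sum(y ** 2) - M * psi0) / (np.trace(A.T @ A) * lam0)
    return lam0, 0.0, phi0, psi0
```

As a sanity check, $\rho_{SE}(0.5) \approx 0.39$, and $\rho_{SE}$ increases with $M/N$.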
IV. NUMERICAL RESULTS

A. Noiseless Phase Transitions

First, we describe the results of experiments that compute noiseless empirical phase transition curves (PTCs) under various sparse-signal distributions. To compute each empirical PTC, we constructed a grid of undersampling ratios $M/N \in [0.05, 0.95]$ and sparsity ratios $K/M \in [0.05, 0.95]$ for fixed signal length $N = 1000$. At each grid point, we generated $R = 100$ independent realizations of a $K$-sparse signal $x$ and an $M \times N$ i.i.d.-Gaussian measurement matrix $A$. From the measurements $y = Ax$, we attempted to reconstruct the signal $x$ using various algorithms. A recovery $\hat{x}$ from realization $r \in \{1, \ldots, R\}$ was considered a success (i.e., $S_r = 1$) if $\mathrm{NMSE} = \|x - \hat{x}\|_2^2 / \|x\|_2^2 < 10^{-4}$, and the average success rate is defined as $\hat{S} = \frac{1}{R}\sum_{r=1}^{R} S_r$. The empirical PTC was then plotted, using Matlab's contour command, as the $\hat{S} = 0.5$ contour over the sparsity-undersampling grid.

Figures 1-3 show the empirical PTCs for three recovery algorithms: the proposed EM-BG-AMP algorithm, a genie-aided BG-GAMP that knew the true $[\lambda, \theta, \phi, \psi]$, and the AMP algorithm from [2]. For comparison, Figs. 1-3 also display the theoretical Lasso PTC (39). The signal was generated as BG with zero mean ($\theta = 0$) and unit variance ($\phi = 1$) in Fig. 1, as Bernoulli-Rademacher (BR) in Fig. 2 (i.e., non-zero coefficients chosen uniformly from $\{-1, 1\}$), and as Bernoulli in Fig. 3 (i.e., all non-zero coefficients set equal to 1 or, equivalently, BG with $\theta = 1$ and $\phi \to 0$). Figures 1-3 demonstrate that, for all three signal types, the empirical PTC of EM-BG-AMP improves on that of AMP as well as the theoretical Lasso PTC; the latter two are known to converge in the large-system limit [2].

(Footnote: Matlab code available at schniter/emturbogamp.)
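The Monte-Carlo protocol at a single $(M/N, K/M)$ grid point can be sketched as below. This is our own helper for illustration; `recover` stands in for any of the compared reconstruction algorithms.

```python
import numpy as np

def empirical_success_rate(recover, N=200, M=100, K=10, R=20, tol=1e-4, seed=0):
    """Fraction of R random trials in which NMSE = ||x - x_hat||^2/||x||^2 < tol,
    as in the success criterion of Section IV-A. `recover` maps (A, y) -> x_hat."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(R):
        x = np.zeros(N)
        support = rng.choice(N, size=K, replace=False)
        x[support] = rng.standard_normal(K)          # Bernoulli-Gaussian realization
        A = rng.standard_normal((M, N)) / np.sqrt(M)  # i.i.d.-Gaussian sensing matrix
        y = A @ x                                     # noiseless measurements
        x_hat = recover(A, y)
        nmse = np.sum((x - x_hat) ** 2) / np.sum(x ** 2)
        successes += (nmse < tol)
    return successes / R
```

Sweeping this over the grid and tracing the 0.5-level contour of the resulting success-rate surface reproduces the empirical-PTC construction described above. As a quick check, the minimum-norm least-squares solution never meets the $10^{-4}$ threshold at $M/N = 0.5$.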
Fig. 4. NMSE for noisy recovery of a Bernoulli-Gaussian signal.
Fig. 5. NMSE for noisy recovery of a Bernoulli-Rademacher signal.
Fig. 6. NMSE for noisy recovery of a Bernoulli signal.

The smallest gains over Lasso appear when the signal is BR (i.e., the least-favorable distribution [3]), whereas the largest gains appear when the signal is Bernoulli. Amazingly, EM-BG-AMP perfectly recovered almost every Bernoulli realization (see Fig. 3). The figures suggest that the EM algorithm does a decent job of learning the parameters $\lambda, \theta, \phi$. In fact, EM-BG-AMP slightly outperforms genie-aided BG-GAMP in Figs. 1-2 due to realization-specific data fitting.

B. Noisy Signal Recovery

Figures 4-6 show NMSE for noisy recovery of the same three sparse-signal types considered in Figs. 1-3. To construct these new plots, we fixed $N = 1000$, $K = 100$, $\mathrm{SNR} = 25$ dB, and varied $M$. Each data point represents NMSE averaged over $R = 500$ realizations. For comparison, we show the performance of the proposed EM-BG-AMP, Bayesian Compressive Sensing (BCS) [9], Sparse Bayesian Learning (SBL) [8], debiased genie-aided Lasso via SPGL1 [12], and Smoothed-$\ell_0$ (SL0) [13]. All algorithms were run under their suggested defaults.

In Fig. 4 and Fig. 6, we see EM-BG-AMP outperforming all other algorithms for all meaningful values of the undersampling ratio $M/N$. In fact, for Bernoulli signals (Fig. 6), we see a significant improvement over a wide range of $M/N$. We have verified in simulations (not shown here) that similar behavior persists at lower SNRs. In Fig. 5, we see EM-BG-AMP outperforming all algorithms except one, which does a few dB better for large values of $M/N$; apparently, the prior assumed by that algorithm is a better fit to BR than the BG prior.

Admittedly, the near-dominant performance observed for perfectly sparse signals in Figs. 4-6 does not hold for all signal classes. As an example, Fig. 7 shows noisy-recovery NMSE for a Student's-t signal with pdf

  $p_X(x; q) = \frac{\Gamma((q+1)/2)}{\sqrt{2\pi}\,\Gamma(q/2)}\big(1 + x^2\big)^{-(q+1)/2}$   (41)

under the non-compressible parameter choice $q = 1.67$ [14].
There, we see EM-BG-AMP outperformed by genie-aided Lasso and some of the competing algorithms, although not by all of them. In fact, among the competing algorithms, those that performed best for exactly sparse signals seem to do worst for this non-compressible signal, and vice versa. We attribute these behaviors to a poor fit between the assumed and actual signal priors, motivating future work on an EM Gaussian-mixture GAMP with automatic selection of the mixture order.

(Footnote: we ran SPGL1 in BPDN mode, $\min_{\hat{x}} \|\hat{x}\|_1$ s.t. $\|y - A\hat{x}\|_2 \leq \sigma$, for tolerances $\sigma^2 \in \{0.1, \ldots, 1.5\}\cdot M\psi$, and reported the lowest NMSE.)

REFERENCES

[1] R. Tibshirani, "Regression shrinkage and selection via the lasso," J. Roy. Statist. Soc. B, vol. 58, no. 1, pp. 267-288, 1996.
[2] D. L. Donoho, A. Maleki, and A. Montanari, "Message passing algorithms for compressed sensing," Proc. National Academy of Sciences, vol. 106, pp. 18914-18919, Nov. 2009.
[3] D. L. Donoho, A. Maleki, and A. Montanari, "The noise-sensitivity phase transition in compressed sensing," arXiv:1004.1218, Apr. 2010.
[4] D. L. Donoho, A. Maleki, and A. Montanari, "Message passing algorithms for compressed sensing: I. Motivation and construction," in Proc. Inform. Theory Workshop, Cairo, Egypt, Jan. 2010.
[5] M. Bayati and A. Montanari, "The dynamics of message passing on dense graphs, with applications to compressed sensing," IEEE Trans. Inform. Theory, vol. 57, pp. 764-785, Feb. 2011.
[6] S. Rangan, "Generalized approximate message passing for estimation with random linear mixing," arXiv:1010.5141, Oct. 2010.
[7] M. E. Tipping, "Sparse Bayesian learning and the relevance vector machine," J. Machine Learning Research, vol. 1, pp. 211-244, 2001.
[8] D. Wipf and B. Rao, "Sparse Bayesian learning for basis selection," IEEE Trans. Signal Process., vol. 52, pp. 2153-2164, Aug. 2004.
[9] S. Ji, Y. Xue, and L. Carin, "Bayesian compressive sensing," IEEE Trans. Signal Process., vol. 56, pp. 2346-2356, June 2008.
[10] A. Dempster, N. M. Laird, and D. B. Rubin, "Maximum-likelihood from incomplete data via the EM algorithm," J. Roy. Statist. Soc., vol. 39, pp. 1-38, 1977.
[11] R. Neal and G.
Hinton, "A view of the EM algorithm that justifies incremental, sparse, and other variants," in Learning in Graphical Models (M. I. Jordan, ed.), MIT Press, 1999.
[12] E. van den Berg and M. P. Friedlander, "Probing the Pareto frontier for basis pursuit solutions," SIAM J. Scientific Comput., vol. 31, no. 2, pp. 890-912, 2008.
[13] H. Mohimani, M. Babaie-Zadeh, and C. Jutten, "A fast approach for overcomplete sparse decomposition based on smoothed $\ell_0$ norm," IEEE Trans. Signal Process., vol. 57, pp. 289-301, Jan. 2009.
[14] V. Cevher, "Learning with compressible priors," in Proc. Neural Inform. Process. Syst. Conf., Vancouver, B.C., Dec. 2009.

Fig. 7. NMSE for noisy recovery of a non-compressible Student's-t signal.
TIME-DEAY ESTIMATION USING FARROW-BASED FRACTIONA-DEAY FIR FITERS: FITER APPROXIMATION VS. ESTIMATION ERRORS Mattias Olsson, Håkan Johansson, an Per öwenborg Div. of Electronic Systems, Dept. of Electrical
More informationImage Denoising Using Spatial Adaptive Thresholding
International Journal of Engineering Technology, Management an Applie Sciences Image Denoising Using Spatial Aaptive Thresholing Raneesh Mishra M. Tech Stuent, Department of Electronics & Communication,
More informationRecent Developments in Compressed Sensing
Recent Developments in Compressed Sensing M. Vidyasagar Distinguished Professor, IIT Hyderabad m.vidyasagar@iith.ac.in, www.iith.ac.in/ m vidyasagar/ ISL Seminar, Stanford University, 19 April 2018 Outline
More informationHomework 2 Solutions EM, Mixture Models, PCA, Dualitys
Homewor Solutions EM, Mixture Moels, PCA, Dualitys CMU 0-75: Machine Learning Fall 05 http://www.cs.cmu.eu/~bapoczos/classes/ml075_05fall/ OUT: Oct 5, 05 DUE: Oct 9, 05, 0:0 AM An EM algorithm for a Mixture
More informationThis module is part of the. Memobust Handbook. on Methodology of Modern Business Statistics
This moule is part of the Memobust Hanbook on Methoology of Moern Business Statistics 26 March 2014 Metho: Balance Sampling for Multi-Way Stratification Contents General section... 3 1. Summary... 3 2.
More informationSpace-time Linear Dispersion Using Coordinate Interleaving
Space-time Linear Dispersion Using Coorinate Interleaving Jinsong Wu an Steven D Blostein Department of Electrical an Computer Engineering Queen s University, Kingston, Ontario, Canaa, K7L3N6 Email: wujs@ieeeorg
More informationLectures - Week 10 Introduction to Ordinary Differential Equations (ODES) First Order Linear ODEs
Lectures - Week 10 Introuction to Orinary Differential Equations (ODES) First Orer Linear ODEs When stuying ODEs we are consiering functions of one inepenent variable, e.g., f(x), where x is the inepenent
More informationEquivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 12, DECEMBER 2008 2009 Equivalence Probability and Sparsity of Two Sparse Solutions in Sparse Representation Yuanqing Li, Member, IEEE, Andrzej Cichocki,
More informationEmpirical-Bayes Approaches to Recovery of Structured Sparse Signals via Approximate Message Passing
Empirical-Bayes Approaches to Recovery of Structured Sparse Signals via Approximate Message Passing Dissertation Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy
More informationMixture Models and EM
Mixture Models and EM Goal: Introduction to probabilistic mixture models and the expectationmaximization (EM) algorithm. Motivation: simultaneous fitting of multiple model instances unsupervised clustering
More informationRisk and Noise Estimation in High Dimensional Statistics via State Evolution
Risk and Noise Estimation in High Dimensional Statistics via State Evolution Mohsen Bayati Stanford University Joint work with Jose Bento, Murat Erdogdu, Marc Lelarge, and Andrea Montanari Statistical
More informationLeaving Randomness to Nature: d-dimensional Product Codes through the lens of Generalized-LDPC codes
Leaving Ranomness to Nature: -Dimensional Prouct Coes through the lens of Generalize-LDPC coes Tavor Baharav, Kannan Ramchanran Dept. of Electrical Engineering an Computer Sciences, U.C. Berkeley {tavorb,
More informationSlide10 Haykin Chapter 14: Neurodynamics (3rd Ed. Chapter 13)
Slie10 Haykin Chapter 14: Neuroynamics (3r E. Chapter 13) CPSC 636-600 Instructor: Yoonsuck Choe Spring 2012 Neural Networks with Temporal Behavior Inclusion of feeback gives temporal characteristics to
More informationSurvey Sampling. 1 Design-based Inference. Kosuke Imai Department of Politics, Princeton University. February 19, 2013
Survey Sampling Kosuke Imai Department of Politics, Princeton University February 19, 2013 Survey sampling is one of the most commonly use ata collection methos for social scientists. We begin by escribing
More informationFunction Spaces. 1 Hilbert Spaces
Function Spaces A function space is a set of functions F that has some structure. Often a nonparametric regression function or classifier is chosen to lie in some function space, where the assume structure
More informationSparse Reconstruction of Systems of Ordinary Differential Equations
Sparse Reconstruction of Systems of Orinary Differential Equations Manuel Mai a, Mark D. Shattuck b,c, Corey S. O Hern c,a,,e, a Department of Physics, Yale University, New Haven, Connecticut 06520, USA
More informationMotivation Sparse Signal Recovery is an interesting area with many potential applications. Methods developed for solving sparse signal recovery proble
Bayesian Methods for Sparse Signal Recovery Bhaskar D Rao 1 University of California, San Diego 1 Thanks to David Wipf, Zhilin Zhang and Ritwik Giri Motivation Sparse Signal Recovery is an interesting
More informationOptimization Notes. Note: Any material in red you will need to have memorized verbatim (more or less) for tests, quizzes, and the final exam.
MATH 2250 Calculus I Date: October 5, 2017 Eric Perkerson Optimization Notes 1 Chapter 4 Note: Any material in re you will nee to have memorize verbatim (more or less) for tests, quizzes, an the final
More informationKNN Particle Filters for Dynamic Hybrid Bayesian Networks
KNN Particle Filters for Dynamic Hybri Bayesian Networs H. D. Chen an K. C. Chang Dept. of Systems Engineering an Operations Research George Mason University MS 4A6, 4400 University Dr. Fairfax, VA 22030
More informationFast Angular Synchronization for Phase Retrieval via Incomplete Information
Fast Angular Synchronization for Phase Retrieval via Incomplete Information Aditya Viswanathan a and Mark Iwen b a Department of Mathematics, Michigan State University; b Department of Mathematics & Department
More informationBayesian Estimation of the Entropy of the Multivariate Gaussian
Bayesian Estimation of the Entropy of the Multivariate Gaussian Santosh Srivastava Fre Hutchinson Cancer Research Center Seattle, WA 989, USA Email: ssrivast@fhcrc.org Maya R. Gupta Department of Electrical
More informationProbabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms
Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms François Caron Department of Statistics, Oxford STATLEARN 2014, Paris April 7, 2014 Joint work with Adrien Todeschini,
More informationarxiv: v1 [cs.it] 21 Feb 2013
q-ary Compressive Sensing arxiv:30.568v [cs.it] Feb 03 Youssef Mroueh,, Lorenzo Rosasco, CBCL, CSAIL, Massachusetts Institute of Technology LCSL, Istituto Italiano di Tecnologia and IIT@MIT lab, Istituto
More informationTractable Upper Bounds on the Restricted Isometry Constant
Tractable Upper Bounds on the Restricted Isometry Constant Alex d Aspremont, Francis Bach, Laurent El Ghaoui Princeton University, École Normale Supérieure, U.C. Berkeley. Support from NSF, DHS and Google.
More informationThe Minimax Noise Sensitivity in Compressed Sensing
The Minimax Noise Sensitivity in Compressed Sensing Galen Reeves and avid onoho epartment of Statistics Stanford University Abstract Consider the compressed sensing problem of estimating an unknown k-sparse
More informationNonlinear Adaptive Ship Course Tracking Control Based on Backstepping and Nussbaum Gain
Nonlinear Aaptive Ship Course Tracking Control Base on Backstepping an Nussbaum Gain Jialu Du, Chen Guo Abstract A nonlinear aaptive controller combining aaptive Backstepping algorithm with Nussbaum gain
More informationIntroduction to Markov Processes
Introuction to Markov Processes Connexions moule m44014 Zzis law Gustav) Meglicki, Jr Office of the VP for Information Technology Iniana University RCS: Section-2.tex,v 1.24 2012/12/21 18:03:08 gustav
More informationunder the null hypothesis, the sign test (with continuity correction) rejects H 0 when α n + n 2 2.
Assignment 13 Exercise 8.4 For the hypotheses consiere in Examples 8.12 an 8.13, the sign test is base on the statistic N + = #{i : Z i > 0}. Since 2 n(n + /n 1) N(0, 1) 2 uner the null hypothesis, the
More informationOptimal Value Function Methods in Numerical Optimization Level Set Methods
Optimal Value Function Methods in Numerical Optimization Level Set Methods James V Burke Mathematics, University of Washington, (jvburke@uw.edu) Joint work with Aravkin (UW), Drusvyatskiy (UW), Friedlander
More informationMaximal Causes for Non-linear Component Extraction
Journal of Machine Learning Research 9 (2008) 1227-1267 Submitte 5/07; Revise 11/07; Publishe 6/08 Maximal Causes for Non-linear Component Extraction Jörg Lücke Maneesh Sahani Gatsby Computational Neuroscience
More informationSignal Recovery from Permuted Observations
EE381V Course Project Signal Recovery from Permuted Observations 1 Problem Shanshan Wu (sw33323) May 8th, 2015 We start with the following problem: let s R n be an unknown n-dimensional real-valued signal,
More informationSparse Linear Models (10/7/13)
STA56: Probabilistic machine learning Sparse Linear Models (0/7/) Lecturer: Barbara Engelhardt Scribes: Jiaji Huang, Xin Jiang, Albert Oh Sparsity Sparsity has been a hot topic in statistics and machine
More informationCollapsed Gibbs and Variational Methods for LDA. Example Collapsed MoG Sampling
Case Stuy : Document Retrieval Collapse Gibbs an Variational Methos for LDA Machine Learning/Statistics for Big Data CSE599C/STAT59, University of Washington Emily Fox 0 Emily Fox February 7 th, 0 Example
More informationAn Analytical Expression of the Probability of Error for Relaying with Decode-and-forward
An Analytical Expression of the Probability of Error for Relaying with Decoe-an-forwar Alexanre Graell i Amat an Ingmar Lan Department of Electronics, Institut TELECOM-TELECOM Bretagne, Brest, France Email:
More informationWiener Filters in Gaussian Mixture Signal Estimation with l -Norm Error
Wiener Filters in Gaussian Mixture Signal Estimation with l -Norm Error Jin Tan, Student Member, IEEE, Dror Baron, Senior Member, IEEE, and Liyi Dai, Fellow, IEEE Abstract Consider the estimation of a
More informationECE 422 Power System Operations & Planning 7 Transient Stability
ECE 4 Power System Operations & Planning 7 Transient Stability Spring 5 Instructor: Kai Sun References Saaat s Chapter.5 ~. EPRI Tutorial s Chapter 7 Kunur s Chapter 3 Transient Stability The ability of
More informationPhil Schniter. Supported in part by NSF grants IIP , CCF , and CCF
AMP-inspired Deep Networks, with Comms Applications Phil Schniter Collaborators: Sundeep Rangan (NYU), Alyson Fletcher (UCLA), Mark Borgerding (OSU) Supported in part by NSF grants IIP-1539960, CCF-1527162,
More informationSchrödinger s equation.
Physics 342 Lecture 5 Schröinger s Equation Lecture 5 Physics 342 Quantum Mechanics I Wenesay, February 3r, 2010 Toay we iscuss Schröinger s equation an show that it supports the basic interpretation of
More informationLeast Distortion of Fixed-Rate Vector Quantizers. High-Resolution Analysis of. Best Inertial Profile. Zador's Formula Z-1 Z-2
High-Resolution Analysis of Least Distortion of Fixe-Rate Vector Quantizers Begin with Bennett's Integral D 1 M 2/k Fin best inertial profile Zaor's Formula m(x) λ 2/k (x) f X(x) x Fin best point ensity
More informationPHYS 414 Problem Set 2: Turtles all the way down
PHYS 414 Problem Set 2: Turtles all the way own This problem set explores the common structure of ynamical theories in statistical physics as you pass from one length an time scale to another. Brownian
More informationStudents need encouragement. So if a student gets an answer right, tell them it was a lucky guess. That way, they develop a good, lucky feeling.
Chapter 8 Analytic Functions Stuents nee encouragement. So if a stuent gets an answer right, tell them it was a lucky guess. That way, they evelop a goo, lucky feeling. 1 8.1 Complex Derivatives -Jack
More informationEstimation Error Bounds for Frame Denoising
Estimation Error Bounds for Frame Denoising Alyson K. Fletcher and Kannan Ramchandran {alyson,kannanr}@eecs.berkeley.edu Berkeley Audio-Visual Signal Processing and Communication Systems group Department
More informationu!i = a T u = 0. Then S satisfies
Deterministic Conitions for Subspace Ientifiability from Incomplete Sampling Daniel L Pimentel-Alarcón, Nigel Boston, Robert D Nowak University of Wisconsin-Maison Abstract Consier an r-imensional subspace
More informationRamsey numbers of some bipartite graphs versus complete graphs
Ramsey numbers of some bipartite graphs versus complete graphs Tao Jiang, Michael Salerno Miami University, Oxfor, OH 45056, USA Abstract. The Ramsey number r(h, K n ) is the smallest positive integer
More informationELEC3114 Control Systems 1
ELEC34 Control Systems Linear Systems - Moelling - Some Issues Session 2, 2007 Introuction Linear systems may be represente in a number of ifferent ways. Figure shows the relationship between various representations.
More informationECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis
ECE 8201: Low-dimensional Signal Models for High-dimensional Data Analysis Lecture 3: Sparse signal recovery: A RIPless analysis of l 1 minimization Yuejie Chi The Ohio State University Page 1 Outline
More informationIteratively Reweighted l 1 Approaches to Sparse Composite Regularization Rizwan Ahmad and Philip Schniter, Fellow, IEEE
220 IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, VOL., NO. 4, DECEMBER 205 Iteratively Reweighte l Approaches to Sparse Composite Regularization Rizwan Ahma an Philip Schniter, Fellow, IEEE Abstract Motivate
More informationLATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION
The Annals of Statistics 1997, Vol. 25, No. 6, 2313 2327 LATTICE-BASED D-OPTIMUM DESIGN FOR FOURIER REGRESSION By Eva Riccomagno, 1 Rainer Schwabe 2 an Henry P. Wynn 1 University of Warwick, Technische
More information'HVLJQ &RQVLGHUDWLRQ LQ 0DWHULDO 6HOHFWLRQ 'HVLJQ 6HQVLWLYLW\,1752'8&7,21
Large amping in a structural material may be either esirable or unesirable, epening on the engineering application at han. For example, amping is a esirable property to the esigner concerne with limiting
More informationOptimization of Geometries by Energy Minimization
Optimization of Geometries by Energy Minimization by Tracy P. Hamilton Department of Chemistry University of Alabama at Birmingham Birmingham, AL 3594-140 hamilton@uab.eu Copyright Tracy P. Hamilton, 1997.
More informationIntroduction to Probabilistic Graphical Models: Exercises
Introduction to Probabilistic Graphical Models: Exercises Cédric Archambeau Xerox Research Centre Europe cedric.archambeau@xrce.xerox.com Pascal Bootcamp Marseille, France, July 2010 Exercise 1: basics
More information