On combinatorial approaches to compressed sensing


Abdolreza Abdolhosseini Moghadam and Hayder Radha
Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, U.S.A.

Abstract — In this paper, we look at combinatorial algorithms for Compressed Sensing from a different perspective. We show that certain combinatorial solvers are in fact recursive implementations of convex relaxation methods for solving compressed sensing, under the assumption of sparsity for the projection matrix. We extend the notion of sparse binary projection matrices to sparse real-valued ones. We prove that, contrary to their binary counterparts, this class of sparse real-valued matrices has the Restricted Isometry Property. Finally, we generalize the voting mechanism (employed in combinatorial algorithms) to notions of isolation/alignment and present the required solver for real-valued sparse projection matrices based on such isolation/alignment mechanisms.

Keywords: Compressed sensing, combinatorial approaches

I. INTRODUCTION

The theory of Compressed Sensing (CS) [1][2] deals with finding the unique sparse (or compressible) signal x ∈ R^n from a limited number of linear samples y = Px, where P ∈ R^{m×n} and m < n. Roughly speaking, there are three approaches to finding the solution x: convex relaxation methods [2][3][4], greedy algorithms [5] and combinatorial approaches [6][7][8][11]. Convex relaxation methods find the unique sparsest solution to the under-determined system of linear equations y = Px by solving a convex optimization problem of the form:

    x* = arg min ||x||_p  s.t.  y = Px    (1)

with p = 1. Although tractable, and although it demands the fewest samples/equations (m) to guarantee perfect recovery, solving such a linear program can in general be quite costly (e.g. O(n^3) for Basis Pursuit [3]). At the other extreme, combinatorial algorithms have much lower complexity (compared to convex relaxation methods), but at the cost of higher sample/equation requirements. The low computational cost of combinatorial algorithms stems from the fact that these approaches are based on binary sparse projection matrices P and light, voting-like decoding stages. Although some works (e.g. [8]) have highlighted connections between convex relaxation methods and combinatorial algorithms, in this paper we present a different perspective on this link. We show that a typical combinatorial decoder for CS is in fact a simple recursive ℓ1 minimizer in the case of a binary projection matrix. We then answer the following question: what happens if the non-zero entries of the binary projection matrix P are replaced by (specifically Gaussian) random variables? We prove that this modification yields RIP-2 [2] for sparse projection matrices, a property which binary sparse projection matrices do not possess [10] for m = O(k log(n/k)), where k is the sparsity of the signal x (k = ||x||_0 = #{i : x_i ≠ 0}). Unfortunately, this modification (going from a sparse binary matrix to a sparse real-valued one) implies that the voting mechanism of combinatorial approaches [6][7] no longer works with such projection matrices. Consequently, we introduce the concepts of alignment and isolation for generic (real-valued) sparse projection matrices and show that the voting mechanism is a special implementation of these concepts in the case of binary projections. Finally, we present necessary and sufficient conditions under which the new voting mechanism (alignment/isolation) is guaranteed to find the sparsest solution.
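To make (1) concrete, here is a minimal sketch, not taken from the paper, of Basis Pursuit posed as a linear program via the standard split x = u − v with u, v ≥ 0; the sizes, the dense Gaussian P and the SciPy solver are illustrative assumptions.

```python
# Basis Pursuit (problem (1) with p = 1) as a linear program:
#   min 1^T (u + v)  s.t.  P(u - v) = y,  u >= 0, v >= 0,
# which is equivalent to min ||x||_1 s.t. y = Px with x = u - v.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 60, 30, 4                      # illustrative sizes, not the paper's

x_true = np.zeros(n)                     # k-sparse ground truth
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
P = rng.standard_normal((m, n)) / np.sqrt(m)
y = P @ x_true

c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
res = linprog(c, A_eq=np.hstack([P, -P]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```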
This paper is structured as follows. In Section II we show how a basic combinatorial algorithm can be viewed as a recursive ℓ1 minimizer, and we extend the notion of sparse binary projections to sparse real-valued matrices. In Section III we prove that RIP-2 holds for this new class of projection matrices. The concepts of alignment/isolation and the conditions for their equivalence are presented in Section IV. Simulation results are given in Section V, and Section VI concludes the paper.

II. A GENERAL FRAMEWORK FOR CS COMBINATORIAL APPROACHES

First, we introduce the notation used in this paper. For q ∈ N, define [q] = {1, 2, ..., q}. For a projection matrix P ∈ R^{m×n} and a row index i ∈ [m], define ω_i = {j : P_{i,j} ≠ 0}; hence ω_i is the set of column indices where the i-th row of P is non-zero. We generalize this notation to any set of rows l ⊆ [m] as ω_l = ∪_{i∈l} ω_i. The set of row indices where the i-th column of P is non-zero is denoted Ω_i, i.e. Ω_i = {j : P_{j,i} ≠ 0}. Similarly, one can generalize this notation to any set of columns l ⊆ [n] by Ω_l = ∪_{i∈l} Ω_i. A zero vector of length l is denoted by 0_l. Throughout this paper we consider the class of combinatorial algorithms in their most basic form; hence our discussion excludes message passing algorithms [9], and we focus only on those algorithms which estimate the signal value at each coordinate only once (e.g. [6]). Algorithm 1 below shows how a typical combinatorial approach to CS operates when solving (1) under the zero pseudo-norm p = 0. It can easily be verified that Algorithm 2 is the recursive formulation of Algorithm 1 when p is set to zero in the third line of Algorithm 2.
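Before turning to the listings, here is a minimal sketch, in code, of the index-set notation just defined; the helper names are hypothetical and serve only to pin down the definitions.

```python
# omega(P, rows): column support of a (set of) row(s); Omega(P, cols): row
# support of a (set of) column(s). Sets of indices generalize by union.
import numpy as np

def omega(P, rows):
    rows = np.atleast_1d(rows)
    return set(np.flatnonzero(np.any(P[rows, :] != 0, axis=0)))

def Omega(P, cols):
    cols = np.atleast_1d(cols)
    return set(np.flatnonzero(np.any(P[:, cols] != 0, axis=1)))

P = np.array([[0.0, 1.3, 0.0],
              [0.7, 0.0, 0.0]])
print(omega(P, 0))        # {1}: row 0 is non-zero only in column 1
print(Omega(P, [0, 1]))   # {0, 1}: rows touching column 0 or column 1
```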

Inputs: P ∈ R^{m×n} and y ∈ R^m
Output: x ∈ R^n
1) y ← y, z ← ∅
2) While z ≠ [n]
3)   Find l ⊆ [m] such that ||x_{ω_l}||_0 = 1
4)   Let x̂ denote the estimate of the values x_{ω_l}
5)   For θ = ω_l: y ← y − P_{:,θ} x̂, z ← z ∪ ω_l
6) End
Algorithm 1. A typical combinatorial algorithm for CS. The set of coefficients whose values have been determined is denoted by z.

Inputs: P ∈ R^{m×n}, θ and y ∈ R^m
Output: x ∈ R^n
1) x ← 0_n
2) Find l ⊆ [m] and θ' = ω_l ⊆ θ such that ||x_{θ'}||_0 = 1
3) x_{θ'} = arg min_ξ ||ξ||_p subject to y_l = P_{l,θ'} ξ
4) x_{θ/θ'} ← Δ(P, θ/θ', y − Px)
Algorithm 2. Δ(P, θ, y): the recursive formulation of Algorithm 1.

Initializing with θ = [n] and after finding θ^(1) such that ||x_{θ^(1)}||_0 = 1 (finding these subsets is discussed in Section IV), the algorithm recursively finds θ^(j) ⊆ [n] / ∪_{i<j} θ^(i) such that ||x_{θ^(j)}||_0 = 1, where θ^(j) denotes the value of θ' (line 2 in Algorithm 2) at the j-th level of the recursion stack. This process continues until all non-zero coefficients are recovered. Note that for a vector ξ with a single non-zero entry (||ξ||_0 = 1) and z = Pξ, the solutions to (P1) for p = 0 and p = 1 are equal, by the triangle inequality:

    (P1)  ξ* = arg min ||ξ||_p  s.t.  z = Pξ    (2)

Also note that for any signal x and two disjoint subsets a ⊆ [n] and a^c = [n]/a, one has ||x_a||_p + ||x_{a^c}||_p = ||x||_p both for p = 0 and for p = 1 (see lines 3 and 4 of Algorithm 2). Combining this observation with the equality of the solutions of (P1) for p = 0 and p = 1, one concludes that in Algorithm 2 the solutions for p = 0 and p = 1 are exactly the same. Hence Algorithm 1 can be recast as a recursive ℓ1 minimizer.

The success of the aforementioned recursive algorithm depends on: (a) the ability of step 3 in Algorithm 1 to find isolated non-zeros, and (b) the quality of the projection matrix P with respect to recoverability. More specifically, the first condition demands that the decoder at the i-th iteration can truly find θ = ω_l, a subset of signal coefficients containing only one non-zero (||x_θ||_0 = 1). The second requirement states that at least one isolated non-zero must exist throughout the recovery process, so that the process is not halted before perfect recovery. It has been shown [7] that certain sparse matrices, for instance the adjacency matrices of high-quality expander graphs, provide the second condition. In the rest of this paper we assume that the employed sparse projection matrix satisfies the second condition, meaning that if isolated non-zeros are correctly identified in step 3 of Algorithm 1, then the recovery process will not halt. To that end, and to keep our analysis simple, we adopt the adjacency matrices of random left-regular expander graphs [7]; that is, each column of P is non-zero in exactly d = O(log(n/k)) random indices. However, we extend the notion of binary sparse projection matrices to random (real-valued) sparse projections: we simply substitute the non-zero entries of a sparse binary P (i.e. the entries with value one) with zero-mean Gaussian random variables of variance 1/d. We show that with this simple modification, RIP-2 holds for the proposed P, a property which binary sparse projection matrices do not possess [10] (for m = O(k log(n/k))).
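The following is a minimal sketch of this construction (our illustration, with illustrative sizes): each column is supported on exactly d random rows and the non-zeros are i.i.d. N(0, 1/d), which makes E||Px||_2^2 = ||x||_2^2.

```python
# Sparse real-valued projection: d-regular column supports (as in the binary
# expander adjacency matrix), with the ones replaced by N(0, 1/d) entries.
import numpy as np

def sparse_gaussian_projection(m, n, d, rng):
    P = np.zeros((m, n))
    for j in range(n):
        support = rng.choice(m, size=d, replace=False)   # column support
        P[support, j] = rng.standard_normal(d) / np.sqrt(d)
    return P

rng = np.random.default_rng(1)
m, n, d = 40, 100, 3
x = np.zeros(n); x[:5] = 1.0

# Empirical check of the normalization E||Px||^2 = ||x||^2:
avg = np.mean([np.linalg.norm(sparse_gaussian_projection(m, n, d, rng) @ x)**2
               for _ in range(1000)])
print(avg / np.linalg.norm(x)**2)   # close to 1.0
```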
Having the guarantee of recoverability, our remaining concern is the first condition, i.e. correctly finding isolated non-zeros. The most common approach for finding isolated non-zeros (in the case of a binary sparse projection matrix) is a voting mechanism (see [6][7]). More specifically, assume there exists a set of sample indices l ⊆ [m] such that all of them vote the same value, i.e. ∀p, q ∈ l: y_p = y_q, and they span the same single coefficient (i.e. ∩_{j∈l} ω_j = {i}). Typical combinatorial algorithms then infer that x_i equals that vote and that x_j = 0 for j ∈ ω_l / {i}. Such a conclusion may rest on certain presumptions about the underlying signal. For example, it is assumed in [11] that no two subsets θ_1, θ_2 ⊆ [n] exist such that Σ_{i∈θ_1} x_i = Σ_{i∈θ_2} x_i; other methods make other assumptions, such as non-negativity of the signal, and so on. Consequently, the applicability of those approaches is restricted to signals satisfying the corresponding conditions. On the other hand, due to the non-binary nature of the proposed sparse projection matrix P, the voting mechanism is simply not applicable to P. Consequently, we introduce the notions of alignment and isolation in Section IV and show that voting is a special case of these notions for binary matrices. Furthermore, we present necessary and sufficient conditions for detecting isolated non-zeros. Before that, we prove that the RIP property holds for the proposed sparse real-valued matrices, whereas it has been shown that RIP does not hold for the corresponding sparse binary matrices [10].

III. RESTRICTED ISOMETRY PROPERTY

A matrix P ∈ R^{m×n} has RIP-2 of order k ∈ N [2] if for every k-sparse signal x ∈ R^n there exists a constant δ_k such that:

    (1 − δ_k) ||x||_2^2 ≤ ||Px||_2^2 ≤ (1 + δ_k) ||x||_2^2

In this section we present a proof sketch showing that certain sparse real-valued projection matrices have RIP-2.

Theorem 1. Assume P ∈ R^{m×n} has the following properties: (a) each column is non-zero in exactly d = O(log(n/k)) random indices, and (b) the non-zero entries are zero-mean Gaussian random variables with variance 1/d. Then for any x ∈ R^n with k = ||x||_0 = O(n) and sufficiently large m = O(k log(n/k)), there exists a constant c_0(ε) such that:

    Pr( | ||Px||_2^2 − ||x||_2^2 | ≥ ε ||x||_2^2 ) ≤ 2 e^{−m c_0(ε)}    (3)

Proof: We follow the simple and elegant approaches of [12][13] to show that (3) holds. Recall that the non-zero entries of P are i.i.d. N(0, 1/d).

Hence E(y_i) = Σ_{j∈ω_i} x_j E(P_{i,j}) = 0. Also, using the fact that the non-zero entries of P are independent, it can easily be shown that:

    E(y_i^2) = Σ_{j∈ω_i} x_j^2 E(P_{i,j}^2) = (1/d) Σ_{j∈ω_i} x_j^2    (4)

Consequently:

    E(||y||_2^2) = Σ_{i∈[m]} E(y_i^2) = (1/d) Σ_{i∈[n]} c_i x_i^2    (5)

where c_i is the number of times that i ∈ [n] occurs in the sets ω_1, ..., ω_m. Recall that all columns of P are non-zero in exactly d row indices; hence c_i = d for all i ∈ [n], and consequently E(||y||_2^2) = ||x||_2^2. We now quantify how much ||y||_2^2 can deviate from ||x||_2^2, deriving bounds for the lower and upper tails separately. Note that (see [13]):

    y_i ~ (Σ_{j∈ω_i} x_j^2 / d)^{1/2} N(0, 1),  hence  y_i^2 ~ (Σ_{j∈ω_i} x_j^2 / d) χ^2(1)    (6)

where χ^2(1) is a chi-square variable with one degree of freedom. Since the non-zero entries of P are independent, y_i and y_j are independent for i ≠ j. Consequently the moment generating function of y_i^2 is:

    M(t, y_i^2) = E(e^{t y_i^2}) = (1 − 2t Σ_{j∈ω_i} x_j^2 / d)^{−1/2},  t < 1/2    (7)

Applying the method of generating functions and then Markov's inequality yields:

    Pr( ||y||_2^2 ≥ (1+ε) ||x||_2^2 ) ≤ g(t) = E(e^{t ||y||_2^2}) e^{−t(1+ε) ||x||_2^2}    (8)

For simplicity and without loss of generality, assume x is a unit-norm vector. Then for all i we have Σ_{j∈ω_i} x_j^2 ≤ ||x||_2^2 = 1 and consequently:

    g(t) ≤ (1 − 2t/d)^{−m/2} e^{−t(1+ε)}    (9)

To get the tightest bound, one sets the derivative of g(t) with respect to t to zero; since t < 1/2, only one root of the resulting equation is relevant. Plugging that value of t into (9) and then into (8) gives an explicit upper-tail bound (10) in terms of m, d and ε, and it is straightforward to verify that, under the stated assumptions d = O(log(n/k)) and m = O(k log(n/k)):

    Pr( ||y||_2^2 ≥ (1+ε) ||x||_2^2 ) ≤ e^{−m c_0(ε)}    (11)

where exp{−m c_0(ε)} is the bound obtained for a dense projection matrix from a Gaussian ensemble [12]. Similarly, for the lower tail one can follow the same approach to get:

    Pr( ||y||_2^2 ≤ (1−ε) ||x||_2^2 ) ≤ e^{t(1−ε)} Π_{i∈[m]} (1 + 2t Σ_{j∈ω_i} x_j^2 / d)^{−1/2}    (12)

for the unit-norm vector x and t < 1/2. Define S = {i : x_i ≠ 0}, the set of indices where the signal x is non-zero, and define the constant δ (note that this constant is different from the RIP constant δ_k) as:

    δ = inf{ x_i^2 : i ∈ S }    (13)

In words, if x_i ≠ 0 then x_i^2 ≥ δ. Note that since k = ||x||_0 = O(n) and P (when its non-zero entries are replaced with ones) corresponds to a (k, β) expander graph (β < 1), (a) there exists a set of row indices Ξ ⊆ [m] with (1−β)kd ≤ ξ = |Ξ| ≤ kd, such that (b) for every i ∈ Ξ: δ/d ≤ Σ_{j∈ω_i} x_j^2 ≤ 1. Thus ξ is of order O(m). Consequently:

    Pr( ||y||_2^2 ≤ (1−ε) ||x||_2^2 ) ≤ f(t)    (14)

where:

    f(t) = e^{t(1−ε)} (1 + 2δt/d)^{−ξ/2}    (15)

Choosing the minimizing t (with t < 1/2) and expanding f(t) in a Taylor series around ε = 0 yields:

    f(t) = 1 − (δξ/4) ε + O(ε^2)

It can be verified that when:

    2/(ξε) < δ    (16)

then, up to order O(ε^3):

    f(t) ≤ 2 exp( −(1/2)(δξ/4) ε^2 )    (17)

Note that in the asymptotic case k → ∞, constraint (16) merely demands that δ > 0, which by the definition in (13) is trivially true. Assume m = O(k log(n/k)) = C_1 k log(n/k) and d = O(log(n/k)) = C_2 log(n/k).

Then, combining (17) with (14) yields:

    Pr( ||y||_2^2 ≤ (1−ε) ||x||_2^2 ) ≤ e^{−C_3 m}    (18)

where:

    C_3 ≥ (1−β) C_2 δ ε^2 / (4 C_1)    (19)

Thus, under the assumption k = O(n) (equivalently d = O(log(n/k)) = O(1)), we have C_3 = O(1). Having Theorem 1, one can apply Theorem 5.2 of [12] to prove that the sparse random projection matrix P (with the aforementioned properties) has RIP-2. Consequently, BP [3] or other convex relaxation approaches [4] can be used with the proposed projection matrix to obtain robust and stable recovery in the presence of noise and for compressible (as opposed to exactly sparse) signals. In the next section, we find conditions under which Algorithm 1 guarantees perfect recovery of exactly sparse signals.
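As an empirical illustration of Theorem 1 (a sketch with illustrative parameters, not the constants of the proof), one can estimate the failure probability in (3) by Monte-Carlo:

```python
# Monte-Carlo estimate of Pr(| ||Px||^2 - ||x||^2 | > eps ||x||^2) for the
# sparse Gaussian P of Theorem 1; the rate should be small and decay with m.
import numpy as np

rng = np.random.default_rng(2)
m, n, d, k, eps, trials = 120, 300, 6, 10, 0.5, 500

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
xx = np.linalg.norm(x)**2

fails = 0
for _ in range(trials):
    P = np.zeros((m, n))
    for j in range(n):
        s = rng.choice(m, size=d, replace=False)
        P[s, j] = rng.standard_normal(d) / np.sqrt(d)
    fails += abs(np.linalg.norm(P @ x)**2 - xx) > eps * xx
print("empirical failure rate:", fails / trials)
```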
IV. NECESSARY AND SUFFICIENT CONDITIONS FOR FINDING ISOLATED NON-ZEROS

As before, assume P ∈ R^{m×n} is a matrix each of whose columns is non-zero in exactly d = O(log(n/k)) random indices, and assume the non-zero entries of P are zero-mean Gaussian random variables with variance 1/d. If the non-zero entries of P are replaced with ones, the resulting binary matrix corresponds (with high probability) to the adjacency matrix of a high-quality expander graph [7]. Since the analysis of [7] does not depend on the values of P, recoverability carries over to P. Consequently, here we only discuss lines 3 and 4 of Algorithm 1: we show how one can find subsets of signal coefficients containing only one non-zero by examining the compressive samples. To that end, we need two basic concepts.

We say x_i is isolated in the sample indices l ⊆ [m] if ||x_{ω_l}||_0 = 1 and x_i ≠ 0; in other words, x_i is the only non-zero signal coefficient (among the indices ω_l) spanned by the samples y_l. Clearly, since P_{:,i} is non-zero only in the indices Ω_i, we must have l ⊆ Ω_i. For q ⊆ [m], we say y_q is aligned with the j-th column of P if ∀i ∈ q: P_{i,j} ≠ 0 and ∃α ≠ 0: y_q = α P_{q,j}. Note that if P is binary (i.e. P_{q,j} is an all-ones vector), the definition of alignment reads: there exists a set of samples y_q, all voting the same value α. This is exactly the voting mechanism employed in combinatorial algorithms, which verifies our claim that voting is alignment in the special case of binary matrices. Now assume that x_i ≠ 0 is isolated in the sample indices l ⊆ Ω_i ⊆ [m]. Then:

    y_l = P_{l,ω_l} x_{ω_l} = x_i P_{l,i}    (20)

In words, if x_i is isolated in the sample indices l, then y_l is aligned with the i-th column of P; isolation implies alignment. If, conversely, alignment also implies isolation, then isolated non-zero coefficients can be found by searching for alignments of the samples with the columns of P. Assume for now that isolation and alignment are equivalent. Then steps 3 and 4 of Algorithm 1 can be implemented as follows: to see whether x_i is isolated in some set of samples, one considers P_{Ω_i,i} and looks for q ⊆ Ω_i such that y_q = α P_{q,i}; if such a q exists, (20) is applied to infer x_i = α. In the rest of this section we identify the conditions under which alignment does imply isolation.

Here it is important to highlight the impact of the size of the set l ⊆ Ω_i in our analysis: in the extreme case |l| = 1, an (uninformative) alignment always exists in (20). Recall that each column of P is non-zero in exactly d = O(log(n/k)) entries. It has been shown in [7] that for a sparse binary projection matrix based on the adjacency matrix of a left d-regular random graph, at every stage of decoding there exist at least τ ≥ 1 + d/2 samples which span only one (and the same) non-zero. Hence here we take |l| = τ > d/2 = O(log(n/k)) and ignore alignments of size smaller than d/2 in the decoder.

As before, assume y_l is aligned with the i-th column of P, where |l| > d/2. Since by definition y_l = P_{l,ω_l} x_{ω_l} and ||y_l||_0 > 0, there are two possible cases: (a) ||x_{ω_l}||_0 = 1, or (b) ||x_{ω_l}||_0 > 1. In the first case, since y_l is non-zero in all its entries (||y_l||_0 = |l|) and for every j ∈ ω_l / {i} we have ||P_{l,j}||_0 < |l| (this comes from the expander property of P), one concludes that x_i is the only non-zero among the indices ω_l and its value is computed by (20); in this case, the alignment correctly certifies isolation. Unfortunately, a similar argument does not hold in general for the second case (||x_{ω_l}||_0 > 1). Nevertheless, we now show how one can modify P such that the second case never happens, with very high probability. We follow these steps: (a) for any subset of samples l ⊆ [m] with |l| ≥ 1 + d/2 = O(log(n/k)), we first find Q = max_l ||x_{ω_l}||_0, the maximum number of non-zeros which can be spanned (with high probability) by the samples l; (b) we then modify the projection matrix such that every Q columns of P_{l,:} are linearly independent. By elementary arguments, it is then impossible to have ||x_{ω_l}||_0 > 1 while y_l is aligned with the i-th column of P.
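Before analyzing Q, here is a minimal sketch of the alignment-based decoder itself (our illustration: the agreement test uses a numerical tolerance in place of exact arithmetic, and the helper names are hypothetical):

```python
# Alignment test and Algorithm-1-style peeling: a column i is declared to
# hold an isolated non-zero with value alpha when more than d/2 of its
# supporting rows agree on the same non-zero ratio y_j / P[j, i].
import numpy as np

def find_alignment(P, r, d, tol=1e-8):
    for i in range(P.shape[1]):
        rows = np.flatnonzero(P[:, i])             # Omega_i
        ratios = r[rows] / P[rows, i]              # candidate alphas
        for alpha in ratios:
            votes = np.count_nonzero(np.abs(ratios - alpha) < tol)
            if abs(alpha) > tol and votes > d / 2:
                return i, alpha
    return None

def peel_decode(P, y, d):
    """Detect an isolated coefficient via alignment, subtract its
    contribution from the residual, and repeat until nothing aligns."""
    x_hat, r = np.zeros(P.shape[1]), y.astype(float).copy()
    while (hit := find_alignment(P, r, d)) is not None:
        i, alpha = hit
        x_hat[i] += alpha
        r -= alpha * P[:, i]
    return x_hat

# Small demo (may occasionally stall for an unlucky draw):
rng = np.random.default_rng(3)
m, n, d, k = 60, 120, 6, 5
P = np.zeros((m, n))
for j in range(n):
    s = rng.choice(m, size=d, replace=False)
    P[s, j] = rng.standard_normal(d) / np.sqrt(d)
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.allclose(peel_decode(P, P @ x, d), x))
```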

Define the binary random variable X_i as:

    X_i = 1 if x_i ≠ 0 and i ∈ ω_l;  X_i = 0 otherwise    (21)

It can easily be verified that Q = max_l Σ_{i∈[n]} X_i. Since the event x_i ≠ 0 is independent of the event i ∈ ω_l, we have:

    Pr(X_i = 1) = (k/n) (1 − (1 − |l|/m)^d)    (22)

Assuming k ≫ 1, and to simplify the derivations, we use the approximation 1/(m+1) ≈ 1/m. Using the inequality 1 + α ≤ e^α, one can show:

    μ = E(Q) = n Pr(X_i = 1) ≤ k (1 − exp(−|l| d / m))    (23)

Recalling that d = O(log(n/k)) = c_2 log(n/k), d/2 < |l| ≤ d and m = O(k log(n/k)) = c_1 k log(n/k), we note that μ is extremely small. In fact, it can easily be shown that for a constant α ≥ 1 and for n sufficiently large relative to k, we get μ ≤ α. Here we simply assume μ = α = O(1) for a properly chosen value of α. It only remains to bound the deviation of Q from μ. Fortunately, since Q is a sum of Bernoulli random variables, good concentration inequalities are available; for instance, using the Chernoff bound for Poisson trials, it can be shown that for log(1+δ) > 2 (i.e. δ ≥ 6.389) the probability Pr(Q ≥ μ + δβ log n) is vanishingly small, so Q = O(log n) with high probability.

Now, if the spark [14] of P_{l,ω_l} is more than Q, then it is impossible for y_l to be aligned with the i-th column of P while ||x_{ω_l}||_0 > 1. This can be proved by contradiction. Assume 1 < ||x_{ω_l}||_0 ≤ Q and y_l is aligned with the i-th column of P (y_l = P_{l,ω_l} x_{ω_l} = α P_{l,i}). Define a new vector x' with x'_i = x_i − α and x'_j = x_j for j ∈ ω_l / {i}. Then P_{l,ω_l} x'_{ω_l} = 0 and ||x'_{ω_l}||_0 ≤ Q, which contradicts the presumption that the spark of P_{l,ω_l} exceeds Q. In summary, as long as every Q = O(log n) columns of P_{l,:} are linearly independent, for every possible subset l ⊆ [m] with |l| = O(log(n/k)), alignment implies isolation (and hence the two are equivalent). Unfortunately, this requirement does not hold for a projection matrix constructed from a random left-regular bipartite graph: it is quite possible that two columns u and v of P_{l,:} are both non-zero in exactly one (and the same) row index j. These two columns of P_{l,:} are clearly linearly dependent, which reduces the spark of P_{l,:} to its minimum. This problem is resolved by using Algorithm 3 to modify P.

Inputs: P ∈ R^{m×n} and Q
Output: P' ∈ R^{Qm×n}
For i = 1 to n do
  T ← 0_{Qm}
  For j ∈ Ω_i do
    For l ∈ {Q(j−1)+1, ..., Qj}: T_l ~ N(0, 1/(Qd))
  End
  P'_{:,i} ← T
End
Algorithm 3. Increasing the spark of the sub-matrices of P.

The output matrix P' has the following properties: (a) P' ∈ R^{Qm×n}; (b) Ω'_i = {j : P'_{j,i} ≠ 0} = ∪_{j∈Ω_i} {Q(j−1)+1, ..., Qj}; and (c) letting ω'_i = {j : P'_{i,j} ≠ 0}, for every i ∈ {Q(j−1)+1, ..., Qj} we have ω'_i = ω_j. Thus, for instance, if x_i is isolated in the sample indices l, then the same coefficient x_i is isolated in the samples y'_b, where y' = P'x and b = ∪_{j∈l} {Q(j−1)+1, ..., Qj}. Consequently, the decoding algorithm and all the analysis presented for the projection matrix P stay the same for the new projection matrix P'. However, P' has a feature that P does not: for any subset l ⊆ [m] with |l| = O(d), the corresponding sub-matrix of P' has spark at least Q. This follows from the observation that if u, v ∈ ω_l, then (by the properties of P') either Ω'_u ∩ Ω'_v = ∅ (the two columns are orthogonal) or |Ω'_u ∩ Ω'_v| ≥ Q; since the non-zero entries of P' are populated randomly, every Q such columns of P' are linearly independent with probability one. Note that this property is gained at the cost of increasing the sample requirement from O(k log(n/k)) to O(Qk log(n/k)) = O(k log(n) log(n/k)). Admittedly, our analysis could be improved further, and tighter bounds for Q could be derived; our simulation results show that, in practice, a near-optimal number of samples suffices for perfect recovery.
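A minimal sketch of Algorithm 3 follows (our illustration; in particular, the N(0, 1/(Qd)) variance that keeps E||P'x||_2^2 = ||x||_2^2 is our assumption, since the exact normalization is implicit in the text):

```python
# Algorithm 3: replicate each row of the support pattern Q times, drawing
# fresh Gaussian values for every replica, so overlapping columns of P'
# share at least Q rows and small column subsets are independent w.p. 1.
import numpy as np

def lift_projection(pattern, Q, d, rng):
    """pattern: (m, n) 0/1 support matrix with d ones per column.
    Returns P' of shape (Q*m, n)."""
    m, n = pattern.shape
    P_lift = np.zeros((Q * m, n))
    for i in range(n):
        for j in np.flatnonzero(pattern[:, i]):        # j in Omega_i
            block = slice(Q * j, Q * (j + 1))          # the Q replica rows
            P_lift[block, i] = rng.standard_normal(Q) / np.sqrt(Q * d)
    return P_lift

rng = np.random.default_rng(4)
m, n, d, Q = 20, 50, 3, 2
pattern = np.zeros((m, n), dtype=int)
for i in range(n):
    pattern[rng.choice(m, size=d, replace=False), i] = 1
P_lift = lift_projection(pattern, Q, d, rng)
print(P_lift.shape, (np.count_nonzero(P_lift, axis=0) == Q * d).all())
```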
V. SIMULATION RESULTS

In this section we present simulation results validating the claims and proofs of this paper. First, we investigate the joint performance of the proposed combinatorial algorithm and the sparse real-valued projection matrix in recovering sparse signals. For a fixed signal length n = 1000, ten different levels of m/n and seven levels of k/m, we estimate the probability of exact recovery as a function of these parameters. For each configuration of m/n and k/m, one hundred simulations are performed as follows: (a) we select k locations uniformly at random from [n]; (b) at the selected indices, the test signal x is populated according to a normal distribution; (c) we generate P, a sparse real-valued projection matrix based on the assumptions of Section II and Algorithm 3, with parameters d = 3 and Q = 2; (d) the compressive samples y = Px are then given to Algorithm 1 to compute x̂. Figure 1 shows the fraction of perfect recoveries (x̂ = x) among the one hundred simulations for each configuration of m/n and k/m.

Figure 1. The probability of exact recovery for a signal of length n = 1000, as a function of k/m and m/n.
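For reference, a toy-scale sketch of this experiment loop is given below (our illustration: for brevity it decodes with Basis Pursuit, which Section III justifies for this P, rather than Algorithm 1, and uses a far smaller grid and trial count than the paper):

```python
# Figure-1-style phase-transition experiment at toy scale: recovery rate of
# a k-sparse signal from y = Px over a small grid of (m/n, k/m) levels.
import numpy as np
from scipy.optimize import linprog

def sparse_P(m, n, d, rng):
    P = np.zeros((m, n))
    for j in range(n):
        P[rng.choice(m, size=d, replace=False), j] = \
            rng.standard_normal(d) / np.sqrt(d)
    return P

def bp(P, y):                       # min ||x||_1 s.t. y = Px, via an LP
    n = P.shape[1]
    res = linprog(np.ones(2 * n), A_eq=np.hstack([P, -P]), b_eq=y,
                  bounds=(0, None))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(5)
n, d, trials = 100, 3, 20
for m_over_n in (0.3, 0.5):
    for k_over_m in (0.1, 0.2):
        m = int(m_over_n * n); k = max(1, int(k_over_m * m))
        hits = 0
        for _ in range(trials):
            P = sparse_P(m, n, d, rng)
            x = np.zeros(n)
            x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
            hits += np.allclose(bp(P, P @ x), x, atol=1e-6)
        print(f"m/n={m_over_n}, k/m={k_over_m}: rate {hits/trials:.2f}")
```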

To evaluate the performance of the introduced projection matrices in the presence of noise, we considered a signal x of length n = 2000 in which k = 150 entries are selected uniformly at random and their values are drawn uniformly from the range [−0.5, 0.5]. To assess how the proposed projection matrix behaves in conjunction with a robust solver such as BP, we generated two matrices P, P'' ∈ R^{m×n} with m = 600: P is a sparse real-valued projection matrix with parameters d = 3 and Q = 2, and P'' is a dense random matrix with Gaussian entries. Correspondingly, two sets of compressive samples were generated, contaminated by the same noise ε (i.e. y = Px + ε and y'' = P''x + ε), and fed into BP. We chose ε to be Gaussian noise (ε ~ N(0, σ^2)) and considered ten different levels of σ. For each variance level, we performed one hundred simulations and recorded the maximum ℓ2 norm of the recovery error (||x − x̂||_2). The results are presented in Figure 2. As the figure clearly shows, BP exhibits very similar stability under both types of projection matrices (dense Gaussian and the proposed one).

Figure 2. ℓ2 norm of the recovery error in a noisy setting.

VI. CONCLUSION

In this paper, we showed how certain classes of combinatorial algorithms are linked to convex relaxation methods for CS. We considered the scenario in which the non-zero entries of a sparse binary projection matrix are replaced with random Gaussian values, and we showed how the notion of voting (employed for sparse binary projections) extends to isolation/alignment. RIP-2 was proved for this new class of projections, and sufficient and necessary conditions for the equivalence of isolation and alignment were presented.

REFERENCES
[1] D. Donoho, "Compressed sensing," IEEE Trans. on Information Theory, 52(4), April 2006.
[2] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. on Information Theory, 52(2), February 2006.
[3] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by Basis Pursuit," SIAM J. of Scientific Computing, 1998.
[4] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems," IEEE J. of Selected Topics in Signal Processing: Special Issue on Convex Optimization Methods for Signal Processing, 1(4), 2007.
[5] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," preprint, 2008.
[6] R. Berinde and P. Indyk, "Sparse recovery using sparse random matrices," preprint, 2008.
[7] S. Jafarpour, W. Xu, B. Hassibi, and R. Calderbank, "Efficient compressed sensing using high-quality expander graphs," IEEE Trans. on Information Theory, 2009.
[8] R. Berinde, A. C. Gilbert, P. Indyk, H. Karloff, and M. J. Strauss, "Combining geometry and combinatorics: A unified approach to sparse signal recovery," preprint, 2008.
[9] D. L. Donoho, A. Maleki, and A. Montanari, "Message passing algorithms for compressed sensing," Proc. Natl. Acad. Sci., 2009.
[10] V. Chandar, "A negative result concerning explicit matrices with the restricted isometry property," preprint, 2008.
[11] S. Sarvotham, D. Baron, and R. Baraniuk, "Sudocodes: Fast measurement and reconstruction of sparse signals," IEEE ISIT, 2006.
[12] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, 28(3), December 2008.
[13] D. Achlioptas, "Database-friendly random projections," 20th Annual Symposium on Principles of Database Systems, 2001.
[14]
