An RIP-based approach to ΣΔ quantization for compressed sensing


Joe-Mei Feng and Felix Krahmer

October 2013

Abstract

In this paper, we provide a new approach to estimating the error of reconstruction from ΣΔ quantized measurements for compressed sensing. Our method is based on the restricted isometry property (RIP) of a certain projection of the measurement matrix. The main application of our result is the error analysis for partial random circulant matrices. This is the first time such bounds are provided for a structured measurement matrix with the fast multiplication property. Our results also recover the best-known reconstruction error bounds for Gaussian and subgaussian measurement matrices.

1 Introduction

1.1 Compressed sensing

Compressed sensing has drawn significant attention since the seminal works by Candès, Romberg, and Tao [6] and Donoho [10]. The theory of compressed sensing is based on the observation that many classes of natural signals are approximately sparse with respect to certain bases or frames. The basic idea is to recover such signals from a small number of linear measurements. The problem thus turns into solving an underdetermined linear system. Various criteria have been proposed to determine whether such a system has a unique sparse solution. In this paper we will work with the restricted isometry property (RIP), which has been shown by Candès et al. [8] to guarantee uniqueness.

Definition 1. A matrix $A \in \mathbb{R}^{m \times N}$ has the restricted isometry property (RIP) of order $s$ if there exists $0 < \delta < 1$ such that for all $s$-sparse vectors $x \in \mathbb{C}^N$, i.e., vectors that have at most $s$ non-zero components,

$(1-\delta)\|x\|_2^2 \leq \|Ax\|_2^2 \leq (1+\delta)\|x\|_2^2.$

The smallest such $\delta$ is called the RIP constant of order $s$, denoted by $\delta_s$.

Finding the RIP constant of a measurement matrix is in general an NP-hard problem [23]. That is why most papers work with random matrices. Examples of random matrices known to have the RIP include subgaussian, partial random circulant, and partial Fourier matrices.
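Since computing $\delta_s$ exactly is NP-hard, in practice one can only probe it empirically. The following sketch (our own illustration, not from the paper; the function name and sampling strategy are hypothetical) produces a Monte Carlo lower bound on the RIP constant of Definition 1 by sampling random supports:

```python
import numpy as np

def rip_constant_lower_bound(A, s, n_supports=200, seed=0):
    """Monte Carlo lower bound on the RIP constant delta_s of A.

    Computing delta_s exactly is NP-hard, so we sample random s-element
    supports T and record the worst deviation of the squared singular
    values of the column submatrix A[:, T] from 1.
    """
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    delta = 0.0
    for _ in range(n_supports):
        T = rng.choice(N, size=s, replace=False)
        sv = np.linalg.svd(A[:, T], compute_uv=False)
        # For x supported on T, ||Ax||_2^2 / ||x||_2^2 lies in [sv_min^2, sv_max^2].
        delta = max(delta, sv.max() ** 2 - 1, 1 - sv.min() ** 2)
    return delta

# A Gaussian matrix with variance-1/m entries satisfies the RIP of
# Definition 1 with high probability when m = Omega(s log(eN/s)).
rng = np.random.default_rng(1)
m, N, s = 200, 400, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)
print(rip_constant_lower_bound(A, s))
```

Note that this only ever certifies a lower bound on $\delta_s$; certifying an upper bound would require checking all $\binom{N}{s}$ supports.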
A subgaussian matrix has independent random entries whose tails are dominated by that of a Gaussian (cf. Definition 3). Such matrices have been shown to have the RIP provided $m = \Omega(s \log(eN/s))$ [8]. This order of the embedding dimension is known to be optimal. Examples of subgaussian matrices include Gaussian and Bernoulli matrices. A partial random circulant matrix is the matrix representation of a deterministically subsampled random convolution. Such a matrix has been shown to have the RIP provided $m = \Omega(s(\log s)^2(\log N)^2)$ [11]. A partial random Fourier matrix consists of rows of a discrete Fourier matrix drawn at random. The RIP for such a matrix has been shown provided $m = \Omega(s \log^3 s \log N)$ [21]. These embedding dimensions are slightly worse than for subgaussian matrices, but such structured matrices are preferred in many applications, e.g., because of their reduced randomness and their fast multiplication properties.

1.2 Quantized compressed sensing

While the measurements in compressed sensing are usually not representable by finitely many bits, we need to quantize these measurements so that the system can be operated on computers. To quantize the measurements we seek to represent them by finitely many symbols from a finite alphabet consisting of real numbers. The extreme case of an alphabet with only two elements, $\{-1, 1\}$, is also called 1-bit quantization. The most intuitive method to quantize the measurements is to map each measurement to the closest element of the alphabet. Since this method quantizes each measurement independently, it is also called memoryless scalar quantization (MSQ). Most of the literature on MSQ quantized compressed sensing to date considers 1-bit quantization [5, 17, 19, 1], which boils down to considering only the measurement signs. Jacques et al. [17] show that for Gaussian measurements, or measurements drawn uniformly from the unit sphere, the worst-case error is bounded by $O\left(\frac{s}{m}\log\frac{mN}{s}\right)$. Later, for Gaussian measurements, Gupta et al. [14] demonstrate that one may tractably recover the support of the signal from $O(s \log N)$ measurements. Plan et al. [19] show that, again for Gaussian measurements, one can reconstruct the direction of an $s$-sparse signal via convex optimization with accuracy $O\left(\left(\frac{s}{m}\right)^{1/5}\right)$ up to logarithmic factors with high probability. Later, Ai et al. [1] derive similar results for subgaussian measurements under additional assumptions on the size of the signal entries. However, in [17] it is shown that the reconstruction $\ell_2$-error of MSQ can never be better than $\Omega\left(\frac{s}{m}\right)$. To break this bottleneck of MSQ, ΣΔ quantization for compressed sensing has drawn attention recently. ΣΔ quantization quantizes a vector as a whole rather than component by component, i.e., the quantized values depend on previous quantization steps. Güntürk et al. [13] show that for $r$th order ΣΔ quantization in the Gaussian compressed sensing setting, the $\ell_2$-error is bounded by $O\left(\left(\frac{s}{m}\right)^{\alpha(r-1/2)}\right)$ for any $0 < \alpha < 1$ with high probability. More recently, in [18], this result has been generalized to subgaussian measurements. Indeed, for $r$ large enough this breaks the MSQ bottleneck.

1.3 Contributions

The primary contribution of this paper is that the restricted isometry property (RIP) is applied to estimate the error bound for ΣΔ quantized compressed sensing. That is, once we know the RIP constant of a modification of the measurement matrix, we can estimate the reconstruction error.

In the following results, we assume that the ΣΔ quantized measurements with quantization alphabet $\mathcal{Z} = \Delta\mathbb{Z}$ are given to us. We refer the reader to Section 2.2 for details on the quantization scheme employed. A special role is played by the $r$th power of the finite difference matrix $D$; denoting the singular value decomposition of $D^{-r}$ by $D^{-r} = U_{D^{-r}} S_{D^{-r}} V_{D^{-r}}^*$, we obtain our main theorem below.

Theorem 1. Given an $s$-sparse signal $x \in \mathbb{R}^N$, denote by $\Phi \in \mathbb{R}^{m \times N}$ a measurement matrix and by $q$ the $r$th order ΣΔ quantization of $\Phi x$ with step size $\Delta$. Suppose $\Phi$ has the RIP such that the support set $T$ can be determined. Choose $L$ as the Sobolev dual matrix of $\Phi_T$ and reconstruct the signal by $\hat{x} = Lq$ (see Section 2.2 for details). If $\frac{1}{\sqrt{l}} P_l V_{D^{-r}}^* \Phi$, $l \leq m$, has RIP constant $\delta_s \leq \delta$, where $P_l$ maps a vector to its first $l$ components, then the reconstruction error is bounded above by

$\|x - \hat{x}\|_2 \leq \frac{\Delta}{2 c_2(r) \sqrt{1-\delta}} \left(\frac{m}{l}\right)^{-r+1/2},$

where $c_2(r) > 0$ is a constant depending only on $r$.

Note from Theorem 1 that the smaller $l$ is, the better the bound. However, $l$ has to be large enough that $\frac{1}{\sqrt{l}} P_l V_{D^{-r}}^* \Phi$ has RIP constant $\delta_s \leq \delta$. This result can be applied to obtain recovery guarantees for various compressed sensing settings such as Gaussian, subgaussian, and partial random circulant measurements. We will show in later sections that our result recovers those of [13] and [18]. Since a reconstruction error bound for partial random circulant matrices has not been available so far, we state it as the following theorem.

Theorem 2. Given an $s$-sparse signal $x \in \mathbb{R}^N$, let $\Phi x = P_\Omega(\xi * x) \in \mathbb{R}^m$ be measurements by a partial random circulant matrix, where the $\xi_i$ are independent mean-zero, $\rho$-subgaussian random variables (see Definition 3) of variance one, and let $q$ be the $r$th order ΣΔ quantization of $\Phi x$ with step size $\Delta$. If

$m \geq C s (\log s)^{\frac{2}{1-2\alpha}} (\log N)^{\frac{2}{1-2\alpha}},$

where $0 \leq \alpha < \frac{1}{2}$, $C$ depends only on $\rho$, and $m$ is large enough that the support set $T$ can be determined, then, choosing $L$ as the Sobolev dual matrix of $\Phi_T$ and reconstructing the signal by $\hat{x} = Lq$, with probability at least 0.99 the reconstruction error is bounded by

$\|x - \hat{x}\|_2 \lesssim_\rho \Delta \left(\frac{s}{m}\right)^{\alpha(r-1/2)},$

where the notation $\lesssim_\rho$ means less than or equal to up to a constant depending only on $\rho$.

1.4 Organization

The remainder of this paper is organized as follows. We first introduce the background and previous results on ΣΔ quantization, suprema of chaos processes, and the restricted isometry property for partial random circulant matrices. We then present our main result in Section 3, where we show how the RIP is used to estimate the reconstruction error for quantized compressed sensing. In Section 4, we show that our result recovers the best-known bounds for Gaussian and subgaussian measurement matrices. In the last section, we apply our result to partial random circulant measurement matrices. This is the first time such bounds are provided for a structured measurement matrix with the fast multiplication property.

2 Background and previous results

2.1 Notation

Throughout this paper we use the following notation. The set $D_{s,N} = \{x \in \mathbb{C}^N : \|x\|_2 \leq 1, \|x\|_0 \leq s\}$ is the set of unit-norm $s$-sparse vectors, where the $\ell_0$-norm $\|\cdot\|_0$ counts the number of non-zero components of a vector. We denote the Frobenius norm by $\|A\|_F = \sqrt{\operatorname{tr}(A^*A)}$ and the $\ell_2$-operator norm by $\|A\|_{2\to2} = \sup_{\|x\|_2 = 1} \|Ax\|_2$. Based on these two norms, we set $d_F(B) = \sup_{A \in B} \|A\|_F$ and $d_{2\to2}(B) = \sup_{A \in B} \|A\|_{2\to2}$. We order singular values so that $\sigma_i(\cdot)$ and $\sigma_{\min}(\cdot)$ denote the $i$th largest and the smallest singular value of a matrix, respectively. We write $\lesssim$ for an inequality that holds up to a positive constant, and $\gtrsim$ analogously. The Moore-Penrose pseudoinverse is denoted by $(\cdot)^\dagger$.

2.2 ΣΔ quantization

ΣΔ quantization was originally introduced as an efficient quantizer for redundant representations of oversampled band-limited functions [16]. The mathematical analysis was provided later in [9, 12] and many follow-up papers, and the scheme has further been extended to frame expansions [3]. Given the alphabet $\mathcal{Z} = \Delta\mathbb{Z}$, the idea of $r$th order ΣΔ quantization is to quantize each component of a vector taking the previous $r$ quantization steps into account. More explicitly, a greedy $r$th order ΣΔ quantizer maps a sequence of inputs $(y_i)$ to elements $q_i \in \mathcal{Z}$ via an internal state variable $u_i$ given by the recursion

$\sum_{j=0}^{r} (-1)^j \binom{r}{j} u_{i-j} = y_i - q_i, \qquad (1)$

where $q_i$ is chosen such that $|u_i|$ is minimized. Setting the initial conditions $(u_i)_{i \leq 0} = 0$, equation (1) can be expressed as $D^r u = y - q$, where the finite difference matrix $D$ is given by

$D_{ij} = \begin{cases} 1 & \text{if } i = j, \\ -1 & \text{if } i = j+1, \\ 0 & \text{otherwise.} \end{cases} \qquad (2)$

2.2.1 Coarse recovery

Given an $s$-sparse signal $x$ and an $m \times N$ measurement matrix $\Phi$, where $m \leq N$, we acquire measurements $y = \Phi x$. Applying $r$th order ΣΔ quantization to $y$, we obtain $q$. If we treat $q$ as perturbed measurements, i.e., $q = y + e = \Phi x + e$, then by [13] we can determine the support set. This is proved by a modified version of Proposition 4.1 in [13] and Theorem 1 in [7], which reads as follows.

Proposition 1. Let $x \in \mathbb{R}^N$ be an $s$-sparse signal, let $e$ be a noise vector with $\|e\|_2 \leq \epsilon$, and let $\Phi \in \mathbb{R}^{m \times N}$ be a measurement matrix. Reconstruct $x$ from $q = \frac{1}{\sqrt{m}}\Phi x + e$ via $\ell_1$-minimization, i.e.,

$\hat{x} = \arg\min \|z\|_1 \quad \text{subject to} \quad \left\|\tfrac{1}{\sqrt{m}}\Phi z - q\right\|_2 \leq \epsilon.$

If $\Phi$ has RIP constants such that $\delta_{3s} + 3\delta_{4s} < 2$, then $\|x - \hat{x}\|_2 \leq K\epsilon$ for some positive constant $K$. Moreover, denoting by $T$ the support set of $x$ and $s = |T|$, if $\min_{j \in T} |x_j| \geq K 2^r \Delta / 2$, then the largest $s$ components of $\hat{x}$ are located on $T$.

Note the factor $\frac{1}{\sqrt{m}}$, which makes $\frac{1}{\sqrt{m}}\Phi$ a matrix with unit-norm columns; in the compressed sensing literature it is common to normalize the measurement matrix so that it has unit-norm columns. However, for quantized compressed sensing, if the size of the measurements varies with this scaling, then it is not justified to use a fixed step size in the analysis of the reconstruction error. To compare errors fairly as $m$ grows, we should leave the measurement entries independent of $m$; therefore, in this paper the measurement matrices are not normalized, and instead, for convenience, we set each entry of the measurement matrices to have variance one.

2.2.2 ΣΔ error estimate and Sobolev duals

Following the previous section, denote the support set of $x$ by $T$. We recover $x$ by applying a left inverse $L$ of $\Phi_T$. The reconstruction $\ell_2$-error is then given by

$\|x - \hat{x}\|_2 = \|Ly - Lq\|_2 = \|L(y - q)\|_2 = \|L D^r u\|_2 \leq \|L D^r\|_{2\to2} \|u\|_2.$

The Sobolev dual matrix $L_{\mathrm{sob},r}$, first introduced in [4], is the left inverse of $\Phi_T$ that minimizes $\|L D^r\|_{2\to2}$, i.e., the solution of

$\min_L \|L D^r\|_{2\to2} \quad \text{subject to} \quad L \Phi_T = I.$

The geometric intuition is that the frame is smoothly varying. Using the explicit formula $L_{\mathrm{sob},r} D^r = (D^{-r} \Phi_T)^\dagger$, we obtain the error bound

$\|x - \hat{x}\|_2 \leq \|(D^{-r}\Phi_T)^\dagger\|_{2\to2} \|u\|_2 = \sigma_{\min}(D^{-r}\Phi_T)^{-1} \|u\|_2. \qquad (3)$

Recall that $\|u\|_\infty \leq \frac{\Delta}{2}$, so that $\|u\|_2 \leq \frac{\Delta}{2}\sqrt{m}$; once we find a lower bound for $\sigma_{\min}(D^{-r}\Phi_T)$, we can bound $\|x - \hat{x}\|_2$ from above.
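The quantization scheme and the Sobolev dual reconstruction are easy to make concrete. The sketch below is our own illustration (the helper names, dimensions, step size, and support are arbitrary choices, not from the paper); it implements the greedy recursion behind (1)-(2) and the reconstruction $\hat{x}_T = (D^{-r}\Phi_T)^\dagger D^{-r} q$:

```python
import numpy as np
from math import comb

def greedy_sigma_delta(y, r, step):
    """Greedy r-th order SigmaDelta quantization onto the alphabet step*Z.

    Runs the recursion sum_{j=0}^r (-1)^j C(r,j) u_{i-j} = y_i - q_i,
    choosing each q_i so that the new state |u_i| is minimized.
    """
    m = len(y)
    u = np.zeros(m + r)                      # u_{-r}, ..., u_{-1} = 0
    q = np.zeros(m)
    for i in range(m):
        # state value u_i would take if q_i were 0
        w = y[i] - sum((-1) ** j * comb(r, j) * u[i + r - j] for j in range(1, r + 1))
        q[i] = step * np.round(w / step)     # nearest alphabet element
        u[i + r] = w - q[i]                  # |u_i| <= step/2 by construction
    return q, u[r:]

def finite_difference(m):
    """The m x m bidiagonal matrix D from (2)."""
    return np.eye(m) - np.eye(m, k=-1)

def sobolev_dual_reconstruct(Phi_T, q, r):
    """Apply the Sobolev dual: x_hat_T = (D^{-r} Phi_T)^dagger D^{-r} q."""
    m = Phi_T.shape[0]
    Dinv_r = np.linalg.matrix_power(np.linalg.inv(finite_difference(m)), r)
    return np.linalg.pinv(Dinv_r @ Phi_T) @ (Dinv_r @ q)

# Demo: quantize Gaussian measurements of a sparse x, reconstruct on its support.
rng = np.random.default_rng(0)
m, N, r, step = 400, 64, 2, 0.01
Phi = rng.standard_normal((m, N))            # unnormalized, variance-one entries
x = np.zeros(N)
T = np.array([3, 17, 40])
x[T] = [0.5, -0.3, 0.8]
q, u = greedy_sigma_delta(Phi @ x, r, step)
x_hat_T = sobolev_dual_reconstruct(Phi[:, T], q, r)
```

On such a toy instance the reconstruction error should come out well below the step size, illustrating the noise-shaping effect of ΣΔ compared to rounding each measurement independently.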
A key ingredient to bound this singular value is the following result from the study of Toeplitz matrices, which relies heavily on Weyl's inequality [15] (see also, for example, [13]).

Proposition 2. Let $r$ be any positive integer and let $D$ be as in (2). There are positive constants $c_{s,1}(r)$ and $c_{s,2}(r)$, independent of $m$, such that

$c_{s,1}(r)\left(\frac{m}{j}\right)^r \leq \sigma_j(D^{-r}) \leq c_{s,2}(r)\left(\frac{m}{j}\right)^r, \qquad j = 1, \ldots, m. \qquad (4)$
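Proposition 2 is easy to probe numerically. The following sketch (our own sanity check, with $m$ and $r$ chosen arbitrarily) compares the singular values of $D^{-r}$ against the predicted scale $(m/j)^r$:

```python
import numpy as np

m, r = 200, 2
D = np.eye(m) - np.eye(m, k=-1)                  # the matrix from (2)
Dinv_r = np.linalg.matrix_power(np.linalg.inv(D), r)
sv = np.linalg.svd(Dinv_r, compute_uv=False)     # sigma_1 >= ... >= sigma_m
j = np.arange(1, m + 1)
ratios = sv / (m / j) ** r
# Proposition 2 says these ratios stay within [c_{s,1}(r), c_{s,2}(r)],
# a range that does not depend on m.
print(ratios.min(), ratios.max())
```

Re-running with a larger $m$ leaves the observed range essentially unchanged, consistent with the constants in (4) being independent of $m$.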

2.3 Suprema of chaos processes

Following [11], in this section we present one of the most important tools for our main result, Theorem 3.

Definition 2. For a metric space $(T, d)$, an admissible sequence of $T$ is a collection of subsets $\{T_r : r \geq 0\}$ of $T$ with $|T_0| = 1$ and $|T_r| \leq 2^{2^r}$ for $r \geq 1$. The $\gamma_2$ functional introduced by Talagrand [22] is defined as

$\gamma_2(T, d) = \inf \sup_{t \in T} \sum_{r=0}^{\infty} 2^{r/2}\, d(t, T_r),$

where the infimum is taken over all admissible sequences.

The $\gamma_2$ functional can be bounded by a Dudley-type integral [22]. More specifically, the $\gamma_2$ functional used in the following theorem satisfies

$\gamma_2(B, \|\cdot\|_{2\to2}) \lesssim \int_0^{d_{2\to2}(B)} \log^{1/2} N(B, \|\cdot\|_{2\to2}, u)\, du,$

where $N(B, \|\cdot\|_{2\to2}, u)$ is the covering number of $B$ with respect to the norm $\|\cdot\|_{2\to2}$ at radius $u$. A key ingredient for our main result is the following theorem.

Theorem 3 ([11]). Let $B$ be a set of matrices, and let $\xi$ be a random vector whose entries are independent, mean-zero, variance-one, $\rho$-subgaussian random variables (see Definition 3). Set

$C_B := \sup_{A \in B} \left| \|A\xi\|_2^2 - \mathbb{E}\|A\xi\|_2^2 \right|.$

Then

$P(C_B \geq c_1 E + t) \leq 2 \exp\left(-c_2 \min\left\{\frac{t^2}{V^2}, \frac{t}{U}\right\}\right), \qquad (5)$

where $c_1, c_2 > 0$ are constants depending only on $\rho$, and

$E = \gamma_2(B, \|\cdot\|_{2\to2})\left(\gamma_2(B, \|\cdot\|_{2\to2}) + d_F(B)\right) + d_F(B)\, d_{2\to2}(B),$
$V = d_{2\to2}(B)\left(\gamma_2(B, \|\cdot\|_{2\to2}) + d_F(B)\right),$
$U = d_{2\to2}^2(B).$

2.4 The restricted isometry property for partial random circulant matrices

In this section we state the RIP for partial random circulant matrices, following [11]. The main ingredient in the definition of a partial random circulant matrix is a $\rho$-subgaussian random variable.

Definition 3. A random variable $X$ is called $\rho$-subgaussian if

$P(|X| \geq t) \leq 2\exp(-t^2 / (2\rho^2)).$

Based on this definition we can define a partial random circulant matrix, corresponding to the matrix representation of a subsampled random convolution.

Definition 4. Given a random vector $\xi = (\xi_i)_{i=1}^N$, where the $\xi_i$ are independent mean-zero, $\rho$-subgaussian random variables of variance one, the partial random circulant matrix $\Phi_{m,N}$ generated by $\xi$ is defined via $\Phi_{m,N}\, x = P_\Omega(\xi * x)$, where $P_\Omega$ is a deterministic projection onto $m$ components of a vector.

The RIP for a partial random circulant matrix $\Phi$ is stated in [11] as follows.

Theorem 4. Let $\xi = (\xi_j)_{j=1}^N$ be a random vector with independent mean-zero, variance-one, $\rho$-subgaussian entries. If, for $s \leq N$ and $\eta, \delta \in (0, 1)$,

$m \geq c\,\delta^{-2} s \max\{(\log s)^2 (\log N)^2, \log(\eta^{-1})\}, \qquad (6)$

then with probability at least $1 - \eta$ the RIP constant of the partial random circulant matrix $\Phi \in \mathbb{R}^{m \times N}$ generated by $\xi$ satisfies $\delta_s \leq \delta$. The constant $c$ depends only on $\rho$.

3 RIP-based error analysis

In this section we give the quantized compressed sensing problem a mathematical model and explain how we approach the reconstruction error via the restricted isometry property. In the following two sections we present its applications. By Section 2.2.2, the main issue in estimating the reconstruction error is to estimate $\sigma_{\min}(D^{-r}\Phi_T)$. Compared to $\sigma_{\min}(D^{-r}\Phi)$, since the domain is restricted, we have $\sigma_{\min}(D^{-r}\Phi_T) \geq \sigma_{\min}(D^{-r}\Phi)$. The idea is thus to find a lower bound on the effective smallest singular value of $D^{-r}\Phi$ when the domain is restricted to $D_{s,N}$. This restriction motivates the use of the RIP. The following proof shows how the RIP can be applied to find this effective smallest singular value.

Proof of Theorem 1. Recall that $D^{-r} = U_{D^{-r}} S_{D^{-r}} V_{D^{-r}}^*$. Then, as $S_{D^{-r}}$ is diagonal with non-increasing diagonal entries, Proposition 2 yields

$\sigma_{\min}(D^{-r}\Phi_T) = \sigma_{\min}(S_{D^{-r}} V_{D^{-r}}^* \Phi_T) \geq \sigma_{\min}(P_l S_{D^{-r}} V_{D^{-r}}^* \Phi_T) = \sigma_{\min}\left((P_l S_{D^{-r}} P_l^*)(P_l V_{D^{-r}}^* \Phi_T)\right) \geq c_{s,1}(r)\left(\frac{m}{l}\right)^r \sqrt{l}\; \sigma_{\min}\left(\tfrac{1}{\sqrt{l}} P_l V_{D^{-r}}^* \Phi_T\right).$

Next we need to bound $\sigma_{\min}(\frac{1}{\sqrt{l}} P_l V_{D^{-r}}^* \Phi_T)$ uniformly over all support sets $T$. If $\frac{1}{\sqrt{l}} P_l V_{D^{-r}}^* \Phi$ has RIP constant $\delta_s \leq \delta$, then

$\sigma_{\min}\left(\tfrac{1}{\sqrt{l}} P_l V_{D^{-r}}^* \Phi_T\right) \geq \sqrt{1-\delta} \qquad (7)$

uniformly over all support sets $T$. The theorem follows by applying (3), (4), and (7):

$\|x - \hat{x}\|_2 \leq \sigma_{\min}(D^{-r}\Phi_T)^{-1}\|u\|_2 \leq \frac{\frac{\Delta}{2}\sqrt{m}}{c_{s,1}(r)\left(\frac{m}{l}\right)^r \sqrt{l}\,\sqrt{1-\delta}} = \frac{\Delta}{2 c_{s,1}(r)\sqrt{1-\delta}}\left(\frac{m}{l}\right)^{-r+1/2}, \qquad (8)$

which is the claimed bound with $c_2(r) = c_{s,1}(r)$.
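The singular-value chain in the proof above can also be checked numerically. The sketch below (our own illustration; the Gaussian $\Phi$ and all sizes are arbitrary choices) verifies that $\sigma_{\min}(D^{-r}\Phi_T) \geq \sigma_l(D^{-r})\,\sigma_{\min}(P_l V_{D^{-r}}^* \Phi_T)$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, s, r, l = 300, 4, 2, 40
D = np.eye(m) - np.eye(m, k=-1)
Dinv_r = np.linalg.matrix_power(np.linalg.inv(D), r)
U, S, Vt = np.linalg.svd(Dinv_r)              # D^{-r} = U diag(S) V^*
Phi_T = rng.standard_normal((m, s))           # measurement columns on a support T
lhs = np.linalg.svd(Dinv_r @ Phi_T, compute_uv=False).min()
# keep only the first l rows of S V^* Phi_T and pull out sigma_l(D^{-r})
rhs = S[l - 1] * np.linalg.svd(Vt[:l] @ Phi_T, compute_uv=False).min()
print(lhs >= rhs)
```

Dropping rows can only decrease the smallest singular value of a tall matrix, and pulling the diagonal factor out costs at most its smallest retained entry $\sigma_l(D^{-r})$, which is exactly the two inequalities compared here.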

4 Gaussian and subgaussian matrices

Let $\Phi$ be a standard Gaussian random matrix. Since $P_l V_{D^{-r}}^* \Phi$ is then also a standard Gaussian random matrix due to rotation invariance, $\frac{1}{\sqrt{l}} P_l V_{D^{-r}}^* \Phi$ has RIP constant $\delta_s \leq \delta$ with high probability provided $l = \Omega(s \log N)$ [20]. Since $s \leq l$, we aim for $\frac{l}{m} \leq \left(\frac{s}{m}\right)^\alpha$ with $\alpha \in (0,1)$. Provided that $l = \Omega(s \log N)$, we have

$\frac{l}{m} \leq \left(\frac{s}{m}\right)^\alpha \quad \Longleftarrow \quad \frac{s \log N}{m} \lesssim \left(\frac{s}{m}\right)^\alpha \quad \Longleftarrow \quad m \gtrsim s (\log N)^{\frac{1}{1-\alpha}}.$

Applying Theorem 1 directly, we obtain

$\|x - \hat{x}\|_2 \lesssim \Delta\left(\frac{s}{m}\right)^{\alpha(r-1/2)}$

with high probability. This therefore recovers the result in [13]. By similar steps, this can also be generalized to subgaussian measurement matrices, recovering the result in [18]. As this argument involves some additional technical steps, we refrain from presenting the details here.

5 Partial random circulant matrices

In this section we apply our main theorem, Theorem 1, to partial random circulant matrices to obtain Theorem 2. To prove Theorem 2 we need the following lemma.

Lemma 1. Let $\xi \in \mathbb{R}^N$ be a random vector with independent mean-zero, variance-one, $\rho$-subgaussian entries, and denote by $\Phi \in \mathbb{R}^{m \times N}$ the partial random circulant matrix generated by $\xi$. If $l$ satisfies $l \geq C_1 s (\log s)^2 (\log N)^2$, where $C_1$ is a constant depending only on $\rho$, then, with probability at least 0.99, $\frac{1}{\sqrt{l}} P_l V_{D^{-r}}^* \Phi$ has the restricted isometry property with constant $\delta_s \leq 0.1$.

Proof. The proof of this lemma is based on Theorem 3. For $x \in \mathbb{C}^N$, let $V_x$ denote the matrix satisfying $V_x \xi = \Phi x$, and define $B = \{\frac{1}{\sqrt{l}} P_l V_{D^{-r}}^* V_x : x \in D_{s,N}\}$. Then

$\delta_s\left(\tfrac{1}{\sqrt{l}} P_l V_{D^{-r}}^* \Phi\right) = \sup_{x \in D_{s,N}} \left|\left\|\tfrac{1}{\sqrt{l}} P_l V_{D^{-r}}^* V_x \xi\right\|_2^2 - \mathbb{E}\left\|\tfrac{1}{\sqrt{l}} P_l V_{D^{-r}}^* V_x \xi\right\|_2^2\right| = \sup_{A \in B} \left|\|A\xi\|_2^2 - \mathbb{E}\|A\xi\|_2^2\right| = C_B,$

since for $A \in B$,

$\mathbb{E}\|A\xi\|_2^2 = \mathbb{E}\,\xi^* A^* A \xi = \|A\|_F^2 = \frac{1}{l}\left\|P_l V_{D^{-r}}^* V_x\right\|_F^2 = \frac{1}{l}\left\|P_l V_x\right\|_F^2 = \|x\|_2^2.$

Estimating the terms in Theorem 3 using the definition of $B$ and the corresponding bounds for the set $A = \{\frac{1}{\sqrt{m}} V_x : x \in D_{s,N}\}$ derived in [11], we obtain

$d_F(B) = d_F(A) = 1, \qquad d_{2\to2}(B) \leq \sqrt{\frac{m}{l}}\, d_{2\to2}(A) \leq \sqrt{\frac{s}{l}}.$

To estimate $\gamma_2(B, \|\cdot\|_{2\to2})$, we need to estimate the covering numbers of the set $B$ with respect to the norm $\frac{1}{\sqrt{l}}\|\hat{\cdot}\|_\infty$, while the norm used in the estimation of $\gamma_2(A, \|\cdot\|_{2\to2})$ in [11] is $\frac{1}{\sqrt{m}}\|\hat{\cdot}\|_\infty$. Paralleling the arguments leading to $\gamma_2(A, \|\cdot\|_{2\to2}) \lesssim \sqrt{\frac{s}{m}}(\log s)(\log N)$, we obtain

$\gamma_2(B, \|\cdot\|_{2\to2}) \lesssim \sqrt{\frac{s}{l}}(\log s)(\log N).$

Suppose $l \geq C_1 s (\log s)^2 (\log N)^2$ with $C_1$ large enough, depending on the constants $c_1, c_2$ of Theorem 3, and set $t = \frac{\delta}{2}$. Then, by the definition of $E$,

$c_1 E + t \leq c_1\left\{\sqrt{\frac{s}{l}}(\log s)(\log N)\left[\sqrt{\frac{s}{l}}(\log s)(\log N) + 1\right] + \sqrt{\frac{s}{l}}\right\} + \frac{\delta}{2} \leq \delta. \qquad (9)$

Further, by the definitions of $V$ and $U$, one has, for $C_1$ large enough,

$2\exp\left(-c_2\frac{t^2}{V^2}\right) \leq 2\exp\left(-c_2\frac{(\delta/2)^2}{\frac{s}{l}\left[\sqrt{\frac{s}{l}}(\log s)(\log N) + 1\right]^2}\right) \leq 0.005 \qquad (10)$

and

$2\exp\left(-c_2\frac{t}{U}\right) \leq 2\exp\left(-c_2\frac{\delta/2}{s/l}\right) \leq 0.005. \qquad (11)$

Combining equations (10) and (11), we obtain

$2\exp\left(-c_2\min\left\{\frac{t^2}{V^2}, \frac{t}{U}\right\}\right) \leq 0.01. \qquad (12)$

Setting $\delta = 0.1$, we obtain from equation (5) that $P\left(\delta_s\left(\frac{1}{\sqrt{l}} P_l V_{D^{-r}}^* \Phi\right) \geq 0.1\right) \leq 0.01$. This proves the lemma.

Proof of Theorem 2. To determine the support set of the $s$-sparse signal $x$, we need from Theorem 4 that

$m \gtrsim s (\log s)^2 (\log N)^2. \qquad (13)$

Now choose $\delta = 0.1$ and $l = C_1 s (\log s)^2 (\log N)^2$, where $C_1$ is as in Lemma 1, and combine Lemma 1 and Theorem 1. We obtain that the reconstruction error is bounded by

$\|x - \hat{x}\|_2 \lesssim \Delta\left(\frac{m}{l}\right)^{-r+1/2} = \Delta\left(\frac{m}{C_1 s (\log s)^2 (\log N)^2}\right)^{-r+1/2} \lesssim \Delta\left(\frac{s}{m}\right)^{\alpha(r-1/2)}.$

The last step follows from the fact that $(m/s)^{\frac{1-2\alpha}{2}} \geq \log s \log N$, which is implied by the assumption on $m$. This inequality can only hold for $1 - 2\alpha > 0$, and thus $\alpha < \frac{1}{2}$. Together with equation (13), we obtain the range $0 \leq \alpha < \frac{1}{2}$. This concludes the proof.

References

[1] Albert Ai, Alex Lapanowski, Yaniv Plan, and Roman Vershynin. One-bit compressed sensing with non-Gaussian measurements. Linear Algebra and its Applications, 2013.
[2] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253-263, 2008.
[3] John J. Benedetto, Alexander M. Powell, and Özgür Yılmaz. Sigma-Delta (ΣΔ) quantization and finite frames. IEEE Transactions on Information Theory, 52(5):1990-2005, 2006.
[4] James Blum, Mark Lammers, Alexander M. Powell, and Özgür Yılmaz. Sobolev duals in frame theory and Sigma-Delta quantization. Journal of Fourier Analysis and Applications, 16(3):365-381, 2010.
[5] Petros T. Boufounos and Richard G. Baraniuk. 1-bit compressive sensing. In 42nd Annual Conference on Information Sciences and Systems (CISS), pages 16-21. IEEE, 2008.
[6] Emmanuel J. Candès, Justin K. Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207-1223, 2006.
[7] Emmanuel J. Candès, Justin K. Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207-1223, 2006.
[8] Emmanuel J. Candès and Terence Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203-4215, 2005.
[9] Ingrid Daubechies and Ron DeVore. Approximating a bandlimited function using very coarsely quantized data: A family of stable sigma-delta modulators of arbitrary order. Annals of Mathematics, 158(2):679-710, 2003.
[10] D. L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006.
[11] Felix Krahmer, Shahar Mendelson, and Holger Rauhut. Suprema of chaos processes and the restricted isometry property. Communications on Pure and Applied Mathematics, to appear.
[12] C. Sinan Güntürk. One-bit sigma-delta quantization with exponential accuracy. Communications on Pure and Applied Mathematics, 56(11), 2003.
[13] C. Sinan Güntürk, Mark Lammers, Alexander M. Powell, Rayan Saab, and Özgür Yılmaz. Sobolev duals for random frames and ΣΔ quantization of compressed sensing measurements. Foundations of Computational Mathematics, 13(1):1-36, 2013.
[14] Ankit Gupta, Robert Nowak, and Benjamin Recht. Sample complexity for 1-bit compressed sensing and sparse classification. In Information Theory Proceedings (ISIT), 2010 IEEE International Symposium on. IEEE, 2010.
[15] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 2012.
[16] H. Inose, Y. Yasuda, and J. Murakami. A telemetering system by code modulation - Δ-Σ modulation. IRE Transactions on Space Electronics and Telemetry, 8(3), 1962.
[17] Laurent Jacques, Jason N. Laska, Petros T. Boufounos, and Richard G. Baraniuk. Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. Preprint, 2011.
[18] Felix Krahmer, Rayan Saab, and Rachel Ward. Root-exponential accuracy for coarse quantization of finite frame expansions. IEEE Transactions on Information Theory, 58(2), 2012.
[19] Yaniv Plan and Roman Vershynin. One-bit compressed sensing by linear programming. Communications on Pure and Applied Mathematics, 66(8), 2013.
[20] Holger Rauhut. Compressive sensing and structured random matrices. Theoretical Foundations and Numerical Methods for Sparse Recovery, 9:1-92, 2010.
[21] Mark Rudelson and Roman Vershynin. On sparse reconstruction from Fourier and Gaussian measurements. Communications on Pure and Applied Mathematics, 61(8), 2008.
[22] Michel Talagrand. The Generic Chaining: Upper and Lower Bounds for Stochastic Processes. Springer, 2005.
[23] Andreas M. Tillmann and Marc E. Pfetsch. The computational complexity of the restricted isometry property, the nullspace property, and related concepts in compressed sensing. arXiv preprint, 2012.


More information

An Optimal Family of Exponentially Accurate One-Bit Sigma-Delta Quantization Schemes

An Optimal Family of Exponentially Accurate One-Bit Sigma-Delta Quantization Schemes An Optial Faily of Exponentially Accurate One-Bit Siga-Delta Quantization Schees Percy Deift C. Sinan Güntürk Felix Kraher January 2, 200 Abstract Siga-Delta odulation is a popular ethod for analog-to-digital

More information

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY Physical and Matheatical Sciences 04,, p. 7 5 ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD M a t h e a t i c s Yu. A. HAKOPIAN, R. Z. HOVHANNISYAN

More information

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition

More information

Fixed-to-Variable Length Distribution Matching

Fixed-to-Variable Length Distribution Matching Fixed-to-Variable Length Distribution Matching Rana Ali Ajad and Georg Böcherer Institute for Counications Engineering Technische Universität München, Gerany Eail: raa2463@gail.co,georg.boecherer@tu.de

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

Recovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup)

Recovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup) Recovering Data fro Underdeterined Quadratic Measureents (CS 229a Project: Final Writeup) Mahdi Soltanolkotabi Deceber 16, 2011 1 Introduction Data that arises fro engineering applications often contains

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

Chapter 6 1-D Continuous Groups

Chapter 6 1-D Continuous Groups Chapter 6 1-D Continuous Groups Continuous groups consist of group eleents labelled by one or ore continuous variables, say a 1, a 2,, a r, where each variable has a well- defined range. This chapter explores:

More information

1 Bounding the Margin

1 Bounding the Margin COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #12 Scribe: Jian Min Si March 14, 2013 1 Bounding the Margin We are continuing the proof of a bound on the generalization error of AdaBoost

More information

On Constant Power Water-filling

On Constant Power Water-filling On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives

More information

Fairness via priority scheduling

Fairness via priority scheduling Fairness via priority scheduling Veeraruna Kavitha, N Heachandra and Debayan Das IEOR, IIT Bobay, Mubai, 400076, India vavitha,nh,debayan}@iitbacin Abstract In the context of ulti-agent resource allocation

More information

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate

The Simplex Method is Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate The Siplex Method is Strongly Polynoial for the Markov Decision Proble with a Fixed Discount Rate Yinyu Ye April 20, 2010 Abstract In this note we prove that the classic siplex ethod with the ost-negativereduced-cost

More information

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels

Extension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels Extension of CSRSM for the Paraetric Study of the Face Stability of Pressurized Tunnels Guilhe Mollon 1, Daniel Dias 2, and Abdul-Haid Soubra 3, M.ASCE 1 LGCIE, INSA Lyon, Université de Lyon, Doaine scientifique

More information

Sigma Delta Quantization for Compressed Sensing

Sigma Delta Quantization for Compressed Sensing Sigma Delta Quantization for Compressed Sensing C. Sinan Güntürk, 1 Mark Lammers, 2 Alex Powell, 3 Rayan Saab, 4 Özgür Yılmaz 4 1 Courant Institute of Mathematical Sciences, New York University, NY, USA.

More information

An l 1 Regularized Method for Numerical Differentiation Using Empirical Eigenfunctions

An l 1 Regularized Method for Numerical Differentiation Using Empirical Eigenfunctions Journal of Matheatical Research with Applications Jul., 207, Vol. 37, No. 4, pp. 496 504 DOI:0.3770/j.issn:2095-265.207.04.0 Http://jre.dlut.edu.cn An l Regularized Method for Nuerical Differentiation

More information

MEASUREMENT MATRIX DESIGN FOR COMPRESSIVE SENSING WITH SIDE INFORMATION AT THE ENCODER

MEASUREMENT MATRIX DESIGN FOR COMPRESSIVE SENSING WITH SIDE INFORMATION AT THE ENCODER MEASUREMENT MATRIX DESIGN FOR COMPRESSIVE SENSING WITH SIDE INFORMATION AT THE ENCODER Pingfan Song João F. C. Mota Nikos Deligiannis Miguel Raul Dias Rodrigues Electronic and Electrical Engineering Departent,

More information

Exact tensor completion with sum-of-squares

Exact tensor completion with sum-of-squares Proceedings of Machine Learning Research vol 65:1 54, 2017 30th Annual Conference on Learning Theory Exact tensor copletion with su-of-squares Aaron Potechin Institute for Advanced Study, Princeton David

More information

Lecture 9 November 23, 2015

Lecture 9 November 23, 2015 CSC244: Discrepancy Theory in Coputer Science Fall 25 Aleksandar Nikolov Lecture 9 Noveber 23, 25 Scribe: Nick Spooner Properties of γ 2 Recall that γ 2 (A) is defined for A R n as follows: γ 2 (A) = in{r(u)

More information

Multi-Scale/Multi-Resolution: Wavelet Transform

Multi-Scale/Multi-Resolution: Wavelet Transform Multi-Scale/Multi-Resolution: Wavelet Transfor Proble with Fourier Fourier analysis -- breaks down a signal into constituent sinusoids of different frequencies. A serious drawback in transforing to the

More information

Asynchronous Gossip Algorithms for Stochastic Optimization

Asynchronous Gossip Algorithms for Stochastic Optimization Asynchronous Gossip Algoriths for Stochastic Optiization S. Sundhar Ra ECE Dept. University of Illinois Urbana, IL 680 ssrini@illinois.edu A. Nedić IESE Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu

More information

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,

More information

Research Article Robust ε-support Vector Regression

Research Article Robust ε-support Vector Regression Matheatical Probles in Engineering, Article ID 373571, 5 pages http://dx.doi.org/10.1155/2014/373571 Research Article Robust ε-support Vector Regression Yuan Lv and Zhong Gan School of Mechanical Engineering,

More information

Supplement to: Subsampling Methods for Persistent Homology

Supplement to: Subsampling Methods for Persistent Homology Suppleent to: Subsapling Methods for Persistent Hoology A. Technical results In this section, we present soe technical results that will be used to prove the ain theores. First, we expand the notation

More information

A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance

More information

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices

13.2 Fully Polynomial Randomized Approximation Scheme for Permanent of Random 0-1 Matrices CS71 Randoness & Coputation Spring 018 Instructor: Alistair Sinclair Lecture 13: February 7 Disclaier: These notes have not been subjected to the usual scrutiny accorded to foral publications. They ay

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

A Bernstein-Markov Theorem for Normed Spaces

A Bernstein-Markov Theorem for Normed Spaces A Bernstein-Markov Theore for Nored Spaces Lawrence A. Harris Departent of Matheatics, University of Kentucky Lexington, Kentucky 40506-0027 Abstract Let X and Y be real nored linear spaces and let φ :

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

Non-Parametric Non-Line-of-Sight Identification 1

Non-Parametric Non-Line-of-Sight Identification 1 Non-Paraetric Non-Line-of-Sight Identification Sinan Gezici, Hisashi Kobayashi and H. Vincent Poor Departent of Electrical Engineering School of Engineering and Applied Science Princeton University, Princeton,

More information

Recovery of Sparsely Corrupted Signals

Recovery of Sparsely Corrupted Signals TO APPEAR IN IEEE TRANSACTIONS ON INFORMATION TEORY 1 Recovery of Sparsely Corrupted Signals Christoph Studer, Meber, IEEE, Patrick Kuppinger, Student Meber, IEEE, Graee Pope, Student Meber, IEEE, and

More information

Interactive Markov Models of Evolutionary Algorithms

Interactive Markov Models of Evolutionary Algorithms Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016 Lessons 7 14 Dec 2016 Outline Artificial Neural networks Notation...2 1. Introduction...3... 3 The Artificial

More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

Physics 215 Winter The Density Matrix

Physics 215 Winter The Density Matrix Physics 215 Winter 2018 The Density Matrix The quantu space of states is a Hilbert space H. Any state vector ψ H is a pure state. Since any linear cobination of eleents of H are also an eleent of H, it

More information

Finite-State Markov Modeling of Flat Fading Channels

Finite-State Markov Modeling of Flat Fading Channels International Telecounications Syposiu ITS, Natal, Brazil Finite-State Markov Modeling of Flat Fading Channels Cecilio Pientel, Tiago Falk and Luciano Lisbôa Counications Research Group - CODEC Departent

More information

Sobolev Duals of Random Frames

Sobolev Duals of Random Frames Sobolev Duals of Random Frames C. Sinan Güntürk, Mark Lammers, 2 Alex Powell, 3 Rayan Saab, 4 Özgür Yılmaz 4 Courant Institute of Mathematical Sciences, New York University, NY, USA. 2 University of North

More information

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines

Intelligent Systems: Reasoning and Recognition. Perceptrons and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley osig 1 Winter Seester 2018 Lesson 6 27 February 2018 Outline Perceptrons and Support Vector achines Notation...2 Linear odels...3 Lines, Planes

More information

Chaotic Coupled Map Lattices

Chaotic Coupled Map Lattices Chaotic Coupled Map Lattices Author: Dustin Keys Advisors: Dr. Robert Indik, Dr. Kevin Lin 1 Introduction When a syste of chaotic aps is coupled in a way that allows the to share inforation about each

More information

The Weierstrass Approximation Theorem

The Weierstrass Approximation Theorem 36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined

More information

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon Model Fitting CURM Background Material, Fall 014 Dr. Doreen De Leon 1 Introduction Given a set of data points, we often want to fit a selected odel or type to the data (e.g., we suspect an exponential

More information

Random encoding of quantized finite frame expansions

Random encoding of quantized finite frame expansions Random encoding of quantized finite frame expansions Mark Iwen a and Rayan Saab b a Department of Mathematics & Department of Electrical and Computer Engineering, Michigan State University; b Department

More information

A Note on the Applied Use of MDL Approximations

A Note on the Applied Use of MDL Approximations A Note on the Applied Use of MDL Approxiations Daniel J. Navarro Departent of Psychology Ohio State University Abstract An applied proble is discussed in which two nested psychological odels of retention

More information

Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Time-Varying Jamming Links

Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Time-Varying Jamming Links Vulnerability of MRD-Code-Based Universal Secure Error-Correcting Network Codes under Tie-Varying Jaing Links Jun Kurihara KDDI R&D Laboratories, Inc 2 5 Ohara, Fujiino, Saitaa, 356 8502 Japan Eail: kurihara@kddilabsjp

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION Vol. IX Uncertainty Models For Robustness Analysis - A. Garulli, A. Tesi and A. Vicino

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION Vol. IX Uncertainty Models For Robustness Analysis - A. Garulli, A. Tesi and A. Vicino UNCERTAINTY MODELS FOR ROBUSTNESS ANALYSIS A. Garulli Dipartiento di Ingegneria dell Inforazione, Università di Siena, Italy A. Tesi Dipartiento di Sistei e Inforatica, Università di Firenze, Italy A.

More information

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t.

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t. CS 493: Algoriths for Massive Data Sets Feb 2, 2002 Local Models, Bloo Filter Scribe: Qin Lv Local Models In global odels, every inverted file entry is copressed with the sae odel. This work wells when

More information

Intelligent Systems: Reasoning and Recognition. Artificial Neural Networks

Intelligent Systems: Reasoning and Recognition. Artificial Neural Networks Intelligent Systes: Reasoning and Recognition Jaes L. Crowley MOSIG M1 Winter Seester 2018 Lesson 7 1 March 2018 Outline Artificial Neural Networks Notation...2 Introduction...3 Key Equations... 3 Artificial

More information

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

A SIMPLE METHOD FOR FINDING THE INVERSE MATRIX OF VANDERMONDE MATRIX. E. A. Rawashdeh. 1. Introduction

A SIMPLE METHOD FOR FINDING THE INVERSE MATRIX OF VANDERMONDE MATRIX. E. A. Rawashdeh. 1. Introduction MATEMATIČKI VESNIK MATEMATIQKI VESNIK Corrected proof Available online 40708 research paper originalni nauqni rad A SIMPLE METHOD FOR FINDING THE INVERSE MATRIX OF VANDERMONDE MATRIX E A Rawashdeh Abstract

More information

The Transactional Nature of Quantum Information

The Transactional Nature of Quantum Information The Transactional Nature of Quantu Inforation Subhash Kak Departent of Coputer Science Oklahoa State University Stillwater, OK 7478 ABSTRACT Inforation, in its counications sense, is a transactional property.

More information

Hybrid System Identification: An SDP Approach

Hybrid System Identification: An SDP Approach 49th IEEE Conference on Decision and Control Deceber 15-17, 2010 Hilton Atlanta Hotel, Atlanta, GA, USA Hybrid Syste Identification: An SDP Approach C Feng, C M Lagoa, N Ozay and M Sznaier Abstract The

More information

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs

Research Article On the Isolated Vertices and Connectivity in Random Intersection Graphs International Cobinatorics Volue 2011, Article ID 872703, 9 pages doi:10.1155/2011/872703 Research Article On the Isolated Vertices and Connectivity in Rando Intersection Graphs Yilun Shang Institute for

More information

TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES

TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES S. E. Ahed, R. J. Tokins and A. I. Volodin Departent of Matheatics and Statistics University of Regina Regina,

More information

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials Fast Montgoery-like Square Root Coputation over GF( ) for All Trinoials Yin Li a, Yu Zhang a, a Departent of Coputer Science and Technology, Xinyang Noral University, Henan, P.R.China Abstract This letter

More information

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks with Order-Optimal Per-Flow Delay A Low-Coplexity Congestion Control and Scheduling Algorith for Multihop Wireless Networks with Order-Optial Per-Flow Delay Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang School of Electrical and Coputer

More information

Estimating Parameters for a Gaussian pdf

Estimating Parameters for a Gaussian pdf Pattern Recognition and achine Learning Jaes L. Crowley ENSIAG 3 IS First Seester 00/0 Lesson 5 7 Noveber 00 Contents Estiating Paraeters for a Gaussian pdf Notation... The Pattern Recognition Proble...3

More information

The linear sampling method and the MUSIC algorithm

The linear sampling method and the MUSIC algorithm INSTITUTE OF PHYSICS PUBLISHING INVERSE PROBLEMS Inverse Probles 17 (2001) 591 595 www.iop.org/journals/ip PII: S0266-5611(01)16989-3 The linear sapling ethod and the MUSIC algorith Margaret Cheney Departent

More information

Shannon Sampling II. Connections to Learning Theory

Shannon Sampling II. Connections to Learning Theory Shannon Sapling II Connections to Learning heory Steve Sale oyota echnological Institute at Chicago 147 East 60th Street, Chicago, IL 60637, USA E-ail: sale@athberkeleyedu Ding-Xuan Zhou Departent of Matheatics,

More information

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1.

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1. M ath. Res. Lett. 15 (2008), no. 2, 375 388 c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS Van H. Vu Abstract. Let F q be a finite field of order q and P be a polynoial in F q[x

More information

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION

REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION ISSN 139 14X INFORMATION TECHNOLOGY AND CONTROL, 008, Vol.37, No.3 REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION Riantas Barauskas, Vidantas Riavičius Departent of Syste Analysis, Kaunas

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lessons 7 20 Dec 2017 Outline Artificial Neural networks Notation...2 Introduction...3 Key Equations... 3 Artificial

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

On the Use of A Priori Information for Sparse Signal Approximations

On the Use of A Priori Information for Sparse Signal Approximations ITS TECHNICAL REPORT NO. 3/4 On the Use of A Priori Inforation for Sparse Signal Approxiations Oscar Divorra Escoda, Lorenzo Granai and Pierre Vandergheynst Signal Processing Institute ITS) Ecole Polytechnique

More information

Boosting with log-loss

Boosting with log-loss Boosting with log-loss Marco Cusuano-Towner Septeber 2, 202 The proble Suppose we have data exaples {x i, y i ) i =... } for a two-class proble with y i {, }. Let F x) be the predictor function with the

More information

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm Acta Polytechnica Hungarica Vol., No., 04 Sybolic Analysis as Universal Tool for Deriving Properties of Non-linear Algoriths Case study of EM Algorith Vladiir Mladenović, Miroslav Lutovac, Dana Porrat

More information

Bipartite subgraphs and the smallest eigenvalue

Bipartite subgraphs and the smallest eigenvalue Bipartite subgraphs and the sallest eigenvalue Noga Alon Benny Sudaov Abstract Two results dealing with the relation between the sallest eigenvalue of a graph and its bipartite subgraphs are obtained.

More information