Chapter 6

Introduction to binary block codes

In this chapter we begin to study binary signal constellations, which are the Euclidean-space images of binary block codes. Such constellations have bit rate (nominal spectral efficiency) ρ ≤ 2 b/2D, and are thus suitable only for the power-limited regime.

We will focus mainly on binary linear block codes, which have a certain useful algebraic structure. Specifically, they are vector spaces over the binary field F_2. A useful infinite family of such codes is the set of Reed-Muller codes.

We discuss the penalty incurred by making hard decisions and then performing classical error-correction, and show how the penalty may be partially mitigated by using erasures, or rather completely by using generalized minimum distance (GMD) decoding.

6.1 Binary signal constellations

In this chapter we will consider constellations that are the Euclidean-space images of binary codes via a coordinatewise 2-PAM map. Such constellations will be called binary signal constellations.

A binary block code of length n is any subset C ⊆ {0, 1}^n of the set of all binary n-tuples of length n. We will usually identify the binary alphabet {0, 1} with the finite field F_2 with two elements, whose arithmetic rules are those of mod-2 arithmetic. Moreover, we will usually impose the requirement that C be linear; i.e., that C be a subspace of the n-dimensional vector space (F_2)^n of all binary n-tuples. We will shortly begin to discuss such algebraic properties.

Each component x_k ∈ {0, 1} of a codeword x ∈ C will be mapped to one of the two points ±α of a 2-PAM signal set A = {±α} ⊂ R according to a 2-PAM map s: {0, 1} → A. Explicitly, two standard ways of specifying such a 2-PAM map are

s(x) = α(−1)^x;
s(x) = α(1 − 2x).

The first map is more algebraic in that, ignoring scaling, it is an isomorphism from the additive binary group Z_2 = {0, 1} to the multiplicative binary group {±1}, since s(x)s(x′) = (−1)^{x+x′} = s(x + x′). The second map is more geometric, in that it is the composition of a map from {0, 1} ∈ F_2 to {0, 1} ∈ R, followed by a linear transformation and a translation. However, ultimately both formulas specify the same map: {s(0) = α, s(1) = −α}.
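To make the map concrete, here is a minimal Python sketch of the coordinatewise 2-PAM map; the function names and the choice α = 1 are illustrative assumptions, not part of the development above.

```python
# A minimal sketch of the coordinatewise 2-PAM map s(x) = alpha*(-1)^x applied to a binary
# n-tuple; "alpha" is a free scale parameter chosen here arbitrarily.

def pam2_map(x, alpha=1.0):
    """Map a binary n-tuple (sequence of 0/1) to a real n-tuple with s(0)=+alpha, s(1)=-alpha."""
    return [alpha * (-1) ** xk for xk in x]

def pam2_map_affine(x, alpha=1.0):
    """Equivalent 'geometric' form s(x) = alpha*(1 - 2x)."""
    return [alpha * (1 - 2 * xk) for xk in x]

if __name__ == "__main__":
    x = [0, 1, 1, 0]
    assert pam2_map(x) == pam2_map_affine(x)   # both formulas specify the same map
    print(pam2_map(x))                         # [1.0, -1.0, -1.0, 1.0]
```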

Under the 2-PAM map, the set (F_2)^n of all binary n-tuples maps to the set of all real n-tuples of the form (±α, ±α, ..., ±α). Geometrically, this is the set of all 2^n vertices of an n-cube of side 2α centered on the origin.

It follows that a binary signal constellation A′ = s(C) based on a binary code C ⊆ (F_2)^n maps to a subset of the vertices of this n-cube. The size of an N-dimensional binary constellation A′ is thus bounded by |A′| ≤ 2^n, and its bit rate ρ = (2/n) log_2 |A′| is bounded by ρ ≤ 2 b/2D. Thus binary constellations can be used only in the power-limited regime.

Since the n-cube constellation A^n = s((F_2)^n) = (s(F_2))^n is simply the n-fold Cartesian product of the 2-PAM constellation A = s(F_2) = {±α}, its normalized parameters are the same as those of 2-PAM, and it achieves no coding gain. Our hope is that by restricting to a subset A′ ⊂ A^n, a distance gain can be achieved that will more than offset the rate loss, thus yielding a coding gain.

Example 1. Consider the binary code C = {000, 011, 110, 101}, whose four codewords are binary 3-tuples. The bit rate of C is thus ρ = 4/3 b/2D. Its Euclidean-space image s(C) is a set of four vertices of a 3-cube that form a regular tetrahedron, as shown in Figure 1. The minimum squared Euclidean distance of s(C) is d_min²(s(C)) = 8α², and every signal point in s(C) has 3 nearest neighbors. The average energy of s(C) is E(s(C)) = 3α², so its average energy per bit is E_b = (3/2)α², and its nominal coding gain is

γ_c(s(C)) = d_min²(s(C))/(4E_b) = 4/3 (1.25 dB).

Figure 1. The Euclidean image of the binary code C = {000, 011, 110, 101} is a regular tetrahedron in R³.

It might at first appear that the restriction of constellation points to vertices of an n-cube might force binary signal constellations to be seriously suboptimal. However, it turns out that when ρ is small, this apparently drastic restriction does not hurt potential performance very much. A capacity calculation using a random code ensemble with binary alphabet A = {±α} rather than R shows that the Shannon limit on E_b/N_0 at ρ = 1 b/2D is 0.2 dB rather than 0 dB; i.e., the loss is only 0.2 dB. As ρ → 0, the loss becomes negligible. Therefore at spectral efficiencies ρ ≤ 1 b/2D, binary signal constellations are good enough.
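The numbers in Example 1 are easy to verify numerically; the following small sketch (with an assumed α = 1) recomputes d_min², E_b, and γ_c for the tetrahedron code.

```python
# A small numerical check of Example 1: the 2-PAM image of C = {000, 011, 110, 101}
# with a hypothetical alpha = 1.

from itertools import combinations
import math

alpha = 1.0
C = [(0, 0, 0), (0, 1, 1), (1, 1, 0), (1, 0, 1)]
sC = [[alpha * (-1) ** b for b in cw] for cw in C]

d2min = min(sum((a - b) ** 2 for a, b in zip(u, v)) for u, v in combinations(sC, 2))
E_avg = sum(sum(a * a for a in pt) for pt in sC) / len(sC)   # average energy per block
Eb = E_avg / math.log2(len(sC))                              # energy per bit (k = 2)
gamma_c = d2min / (4 * Eb)

print(d2min, E_avg, Eb, gamma_c)        # 8.0 3.0 1.5 1.333...
print(10 * math.log10(gamma_c))         # about 1.25 dB
```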

6.2 Binary linear block codes as binary vector spaces

Practically all of the binary block codes that we consider will be linear. A binary linear block code is a set of n-tuples of elements of the binary finite field F_2 = {0, 1} that form a vector space over the field F_2. As we will see in a moment, this means simply that C must have the group property under n-tuple addition.

We therefore begin by studying the algebraic structure of the binary finite field F_2 = {0, 1} and of vector spaces over F_2. In later chapters we will study codes over general finite fields.

In general, a field F is a set of elements with two operations, addition and multiplication, which satisfy the usual rules of ordinary arithmetic (i.e., commutativity, associativity, distributivity). A field contains an additive identity 0 such that a + 0 = a for all field elements a ∈ F, and every field element a has an additive inverse −a such that a + (−a) = 0. A field contains a multiplicative identity 1 such that a · 1 = a for all field elements a ∈ F, and every nonzero field element a has a multiplicative inverse a^(−1) such that a · a^(−1) = 1.

The binary field F_2 (sometimes called a Galois field, and written GF(2)) is the finite field with only two elements, namely 0 and 1, which may be thought of as representatives of the even and odd integers, modulo 2. Its addition and multiplication tables are given by the rules of mod-2 (even/odd) arithmetic, with 0 acting as the additive identity and 1 as the multiplicative identity:

0 + 0 = 0;  0 + 1 = 1;  1 + 0 = 1;  1 + 1 = 0;
0 · 0 = 0;  0 · 1 = 0;  1 · 0 = 0;  1 · 1 = 1.

In fact these rules are determined by the general properties of 0 and 1 in any field. Notice that the additive inverse of 1 is 1, so −a = a for both field elements.

In general, a vector space V over a field F is a set of vectors v including 0 such that addition of vectors and multiplication by scalars in F is well defined, and such that various other vector space axioms are satisfied. For a vector space over F_2, multiplication by scalars is trivially well defined, since 0v = 0 and 1v = v are automatically in V. Therefore all that really needs to be checked is additive closure, or the group property of V under vector addition; i.e., for all v, v′ ∈ V, v + v′ is in V. Finally, every vector is its own additive inverse, −v = v, since

v + v = 1 · v + 1 · v = (1 + 1) · v = 0 · v = 0.

In summary, over a binary field, subtraction is the same as addition.

A vector space over F_2 is called a binary vector space. The set (F_2)^n of all binary n-tuples v = (v_1, ..., v_n) under componentwise binary addition is an elementary example of a binary vector space. Here we consider only binary vector spaces which are subspaces C ⊆ (F_2)^n, which are called binary linear block codes of length n.

If G = {g_1, ..., g_k} is a set of vectors in a binary vector space V, then the set C(G) of all binary linear combinations

C(G) = { ∑_j a_j g_j : a_j ∈ F_2, 1 ≤ j ≤ k }

is a subspace of V, since C(G) evidently has the group property. The set G is called linearly independent if these 2^k binary linear combinations are all distinct, so that the size of C(G) is |C(G)| = 2^k. A set G of linearly independent vectors such that C(G) = V is called a basis for V, and the elements {g_j, 1 ≤ j ≤ k} of the basis are called generators. The set G = {g_1, ..., g_k} may be arranged as a k × n matrix over F_2, called a generator matrix for C(G).

The dimension of a binary vector space V is the number k of generators in any basis for V. As with any vector space, the dimension k and a basis G for V may be found by the following greedy algorithm:

Initialization: set k = 0 and G = ∅ (the empty set);
Do loop: if C(G) = V we are done, and dim V = k; otherwise, increase k by 1 and take any v ∈ V − C(G) as g_k.

Thus the size of V is always |V| = 2^k for some integer k = dim V; conversely, dim V = log_2 |V|.

An (n, k) binary linear code C is any subspace of the vector space (F_2)^n with dimension k, or equivalently size 2^k. In other words, an (n, k) binary linear code is any set of 2^k binary n-tuples including 0 that has the group property under componentwise binary addition.

Example 2 (simple binary linear codes). The (n, n) binary linear code is the set (F_2)^n of all binary n-tuples, sometimes called the universe code of length n. The (n, 0) binary linear code is {0}, the set containing only the all-zero n-tuple, sometimes called the trivial code of length n. The code consisting of 0 and the all-one n-tuple 1 is an (n, 1) binary linear code, called the repetition code of length n. The code consisting of all n-tuples with an even number of ones is an (n, n − 1) binary linear code, called the even-weight or single-parity-check (SPC) code of length n.

6.2.1 The Hamming metric

The geometry of (F_2)^n is defined by the Hamming metric:

w_H(x) = number of ones in x.

The Hamming metric satisfies the axioms of a metric:

(a) Strict positivity: w_H(x) ≥ 0, with equality if and only if x = 0;
(b) Symmetry: w_H(−x) = w_H(x) (since −x = x);
(c) Triangle inequality: w_H(x + y) ≤ w_H(x) + w_H(y).

Therefore the Hamming distance,

d_H(x, y) = w_H(x − y) = w_H(x + y),

may be used to define (F_2)^n as a metric space, called a Hamming space.
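As a quick check of the definitions just given, here is a minimal sketch (mine) computing Hamming weight and distance and verifying that d_H(x, y) = w_H(x + y).

```python
# A small sketch of Hamming weight and distance on binary n-tuples, checking that the
# distance between x and y equals the weight of their componentwise mod-2 sum.

def w_H(x):
    return sum(x)                                     # number of ones in x

def d_H(x, y):
    return sum(xi != yi for xi, yi in zip(x, y))      # positions where x and y differ

x, y = (1, 0, 1, 1), (0, 0, 1, 0)
x_plus_y = tuple((xi + yi) % 2 for xi, yi in zip(x, y))
print(w_H(x), d_H(x, y), w_H(x_plus_y))               # 3 2 2
```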

We now show that the group property of a binary linear block code C leads to a remarkable symmetry in the distance distributions from each of the codewords of C to all other codewords. Let x ∈ C be a given codeword of C, and consider the set {x + y | y ∈ C} = x + C as y runs through the codewords in C. By the group property of C, x + y must be a codeword in C. Moreover, since x + y = x + y′ if and only if y = y′, all of these codewords must be distinct. But since the size of the set x + C is |C|, this implies that x + C = C; i.e., x + y runs through all codewords in C as y runs through C. Since d_H(x, y) = w_H(x + y), this implies the following symmetry:

Theorem 6.1 (Distance invariance) The set of Hamming distances d_H(x, y) from any codeword x ∈ C to all codewords y ∈ C is independent of x, and is equal to the set of distances from 0 ∈ C, namely the set of Hamming weights w_H(y) of all codewords y ∈ C.

An (n, k) binary linear block code C is said to have minimum Hamming distance d, and is denoted as an (n, k, d) code, if

d = min_{x ≠ y ∈ C} d_H(x, y).

Theorem 6.1 then has the immediate corollary:

Corollary 6.2 (Minimum distance = minimum nonzero weight) The minimum Hamming distance of C is equal to the minimum Hamming weight of any nonzero codeword of C. More generally, the number of codewords y ∈ C at distance d from any codeword x ∈ C is equal to the number N_d of weight-d codewords in C, independent of x.

Example 2 (cont.) The (n, n) universe code has minimum Hamming distance d = 1, and the number of words at distance 1 from any codeword is N_1 = n. The (n, n − 1) SPC code has minimum weight and distance d = 2, and N_2 = n(n − 1)/2. The (n, 1) repetition code has d = n and N_n = 1. By convention, the trivial (n, 0) code {0} is said to have d = ∞.

6.2.2 Inner products and orthogonality

A symmetric, bilinear inner product on the vector space (F_2)^n is defined by the F_2-valued dot product

⟨x, y⟩ = x · y = xy^T = ∑_i x_i y_i,

where n-tuples are regarded as row vectors and "T" denotes "transpose." Two vectors are said to be orthogonal if ⟨x, y⟩ = 0.

However, this F_2 inner product does not have a property analogous to strict positivity: ⟨x, x⟩ = 0 does not imply that x = 0, but only that x has an even number of ones. Thus it is perfectly possible for a nonzero vector to be orthogonal to itself. Hence ⟨x, x⟩ does not have a key property of the Euclidean squared norm and cannot be used to define a metric space analogous to Euclidean space. The Hamming geometry of (F_2)^n is very different from Euclidean geometry. In particular, the projection theorem does not hold, and it is therefore not possible in general to find an orthogonal basis G for a binary vector space C.

Example 3. The (3, 2) SPC code consists of the four 3-tuples C = {000, 011, 101, 110}. Any two nonzero codewords form a basis for C, but no two such codewords are orthogonal.

The orthogonal code (dual code) C⊥ to an (n, k) code C is defined as the set of all n-tuples that are orthogonal to all elements of C:

C⊥ = {y ∈ (F_2)^n | ⟨x, y⟩ = 0 for all x ∈ C}.
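For small n the dual code can be computed by brute force directly from this definition; a minimal sketch (mine), using the (3, 2) SPC code of Example 3:

```python
# A brute-force sketch of the dual code: C_perp is every n-tuple orthogonal (mod 2) to
# all codewords of C.

from itertools import product

def dual(C):
    n = len(next(iter(C)))
    return {y for y in product([0, 1], repeat=n)
            if all(sum(xi * yi for xi, yi in zip(x, y)) % 2 == 0 for x in C)}

spc3 = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}   # the (3,2) SPC code
print(sorted(dual(spc3)))                              # {000, 111}: the (3,1) repetition code
print(dual(dual(spc3)) == spc3)                        # (C_perp)_perp = C  -> True
```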

Here are some elementary facts about C⊥:

(a) C⊥ is an (n, n − k) binary linear code, and thus has a basis of size n − k.¹
(b) If G is a basis for C, then a set H of n − k linearly independent n-tuples in C⊥ is a basis for C⊥ if and only if every vector in H is orthogonal to every vector in G.
(c) (C⊥)⊥ = C.

¹ The standard proof of this fact involves finding a systematic generator matrix G = [I_k | P] for C, where I_k is the k × k identity matrix and P is a k × (n − k) check matrix. Then C = {(u, uP) : u ∈ (F_2)^k}, where u is a free information k-tuple and uP is a check (n − k)-tuple. The dual code C⊥ is then evidently the code generated by H = [P^T | I_{n−k}], where P^T is the transpose of P; i.e., C⊥ = {(vP^T, v) : v ∈ (F_2)^{n−k}}, whose dimension is n − k. A more elegant proof based on the fundamental theorem of group homomorphisms (which the reader is not expected to know at this point) is as follows. Let M be the |C⊥| × n matrix whose rows are the codewords of C⊥. Consider the homomorphism M^T: (F_2)^n → (F_2)^{|C⊥|} defined by y ↦ yM^T; i.e., yM^T is the set of inner products of an n-tuple y ∈ (F_2)^n with all codewords x ∈ C⊥. The kernel of this homomorphism is evidently C. By the fundamental theorem of homomorphisms, the image of M^T (the row space of M^T) is isomorphic to the quotient space (F_2)^n/C, which is isomorphic to (F_2)^{n−k}. Thus the column rank of M is n − k. But the column rank is equal to the row rank, which is the dimension of the row space C⊥ of M.

A basis G for C consists of k linearly independent n-tuples in C, and is usually written as a k × n generator matrix G of rank k. The code C then consists of all binary linear combinations C = {aG : a ∈ (F_2)^k}. A basis H for C⊥ consists of n − k linearly independent n-tuples in C⊥, and is usually written as an (n − k) × n matrix H; then C⊥ = {bH : b ∈ (F_2)^{n−k}}.

According to property (b) above, C and C⊥ are dual codes if and only if their generator matrices satisfy GH^T = 0. The transpose H^T of a generator matrix H for C⊥ is called a parity-check matrix for C; it has the property that a vector x ∈ (F_2)^n is in C if and only if xH^T = 0, since x is in the dual code to C⊥ if and only if it is orthogonal to all generators of C⊥.

Example 2 (cont.; duals of simple codes). In general, the (n, n) universe code and the (n, 0) trivial code are dual codes. The (n, 1) repetition code and the (n, n − 1) SPC code are dual codes. Note that the (2, 1) code {00, 11} is both a repetition code and an SPC code, and is its own dual; such a code is called self-dual. (Self-duality cannot occur in real or complex vector spaces.)

6.3 Euclidean-space images of binary linear block codes

In this section we derive the principal parameters of a binary signal constellation s(C) from the parameters of the binary linear block code C on which it is based, namely the parameters (n, k, d) and the number N_d of weight-d codewords in C.

The dimension of s(C) is N = n, and its size is M = 2^k. It thus supports k bits per block. The bit rate (nominal spectral efficiency) is ρ = 2k/n b/2D. Since k ≤ n, ρ ≤ 2 b/2D, and we are in the power-limited regime.

Every point in s(C) is of the form (±α, ±α, ..., ±α), and therefore every point has energy nα²; i.e., the signal points all lie on an n-sphere of squared radius nα². The average energy per block is thus E(s(C)) = nα², and the average energy per bit is E_b = nα²/k.

If two codewords x, y ∈ C have Hamming distance d_H(x, y), then their Euclidean images s(x), s(y) will be the same in n − d_H(x, y) places, and will differ by 2α in d_H(x, y) places, so their squared Euclidean distance will be

‖s(x) − s(y)‖² = 4α² d_H(x, y).

Therefore

d_min²(s(C)) = 4α² d_H(C) = 4α² d,

where d = d_H(C) is the minimum Hamming distance of C. It follows that the nominal coding gain of s(C) is

γ_c(s(C)) = d_min²(s(C))/(4E_b) = kd/n.   (6.1)

Thus the parameters (n, k, d) directly determine γ_c(s(C)) in this very simple way. (This gives another reason to prefer E_b/N_0 to SNR_norm in the power-limited regime.)

Moreover, every vector s(x) ∈ s(C) has the same number of nearest neighbors K_min(s(x)), namely the number N_d of nearest neighbors to x ∈ C. Thus K_min(s(C)) = N_d, and K_b(s(C)) = N_d/k. Consequently the union bound estimate of P_b(E) is

P_b(E) ≈ K_b(s(C)) Q√(γ_c(s(C))(2E_b/N_0)) = (N_d/k) Q√((kd/n)(2E_b/N_0)).   (6.2)

In summary, the parameters and performance of the binary signal constellation s(C) may be simply determined from the parameters (n, k, d) and N_d of C.

Exercise 1. Let C be an (n, k, d) binary linear code with d odd. Show that if we append an overall parity check p = ∑_i x_i to each codeword x, then we obtain an (n + 1, k, d + 1) binary linear code C′ with d even. Show that the nominal coding gain γ_c(C′) is always greater than γ_c(C) if k > 1. Conclude that we can focus primarily on linear codes with d even.

Exercise 2. Show that if C is a binary linear block code, then in every coordinate position either all codeword components are 0 or half are 0 and half are 1. Show that a coordinate in which all codeword components are 0 may be deleted ("punctured") without any loss in performance, but with savings in energy and in dimension. Show that if C has no such all-zero coordinates, then s(C) has zero mean: m(s(C)) = 0.

6.4 Reed-Muller codes

The Reed-Muller (RM) codes are an infinite family of binary linear codes that were among the first to be discovered (1954). For block lengths n ≤ 32, they are the best codes known with minimum distances d equal to powers of 2. For greater block lengths, they are not in general the best codes known, but in terms of performance vs. decoding complexity they are still quite good, since they admit relatively simple ML decoding algorithms.²

² Moreover, the Euclidean-space inner product of s(x) and s(y) is

⟨s(x), s(y)⟩ = (n − d_H(x, y))α² + d_H(x, y)(−α²) = (n − 2d_H(x, y))α².

Therefore s(x) and s(y) are orthogonal if and only if d_H(x, y) = n/2. Also, s(x) and s(y) are antipodal (s(x) = −s(y)) if and only if d_H(x, y) = n.
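Before turning to specific codes, here is a small numerical sketch (mine) of (6.1) and (6.2); the operating point E_b/N_0 = 4 dB is an arbitrary illustration.

```python
# A sketch of equations (6.1)-(6.2): nominal coding gain kd/n and the union bound estimate
# Pb(E) ~ (N_d/k) * Q(sqrt((kd/n) * 2*Eb/N0)), with Q written via erfc.

import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def ube_pb(n, k, d, Nd, EbN0_dB):
    gamma_c = k * d / n
    EbN0 = 10 ** (EbN0_dB / 10)
    return (Nd / k) * Q(math.sqrt(gamma_c * 2 * EbN0))

# (32,16,8) code with N_d = 620 (see Example 4 below), evaluated at Eb/N0 = 4 dB
print(10 * math.log10(16 * 8 / 32))      # nominal coding gain: about 6.02 dB
print(ube_pb(32, 16, 8, 620, 4.0))       # rough Pb(E) estimate at this operating point
```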

For any integers m ≥ 0 and 0 ≤ r ≤ m, there exists an RM code, denoted by RM(r, m), that has length n = 2^m and minimum Hamming distance d = 2^{m−r}, 0 ≤ r ≤ m. For r = m, RM(m, m) is defined as the universe (2^m, 2^m, 1) code. It is helpful also to define RM codes for r = −1 by RM(−1, m) = (2^m, 0, ∞), the trivial code of length 2^m. Thus for m = 0, the two RM codes of length 1 are the (1, 1, 1) universe code RM(0, 0) and the (1, 0, ∞) trivial code RM(−1, 0).

The remaining RM codes for m ≥ 1 and 0 ≤ r < m may be constructed from these elementary codes by the following length-doubling construction, called the |u|u + v| construction (originally due to Plotkin). RM(r, m) is constructed from RM(r − 1, m − 1) and RM(r, m − 1) as

RM(r, m) = {(u, u + v) | u ∈ RM(r, m − 1), v ∈ RM(r − 1, m − 1)}.   (6.3)

From this construction, it is easy to prove the following facts by recursion:

(a) RM(r, m) is a binary linear block code with length n = 2^m and dimension
k(r, m) = k(r, m − 1) + k(r − 1, m − 1);
(b) The codes are nested, in the sense that RM(r − 1, m) ⊆ RM(r, m);
(c) The minimum distance of RM(r, m) is d = 2^{m−r} if r ≥ 0 (if r = −1, then d = ∞).

We verify that these assertions hold for RM(0, 0) and RM(−1, 0). For m ≥ 1, the linearity and length of RM(r, m) are obvious from the construction. The dimension (size) follows from the fact that (u, u + v) = 0 if and only if u = v = 0. Exercise 6 below shows that the recursion for k(r, m) leads to the explicit formula

k(r, m) = ∑_{0≤j≤r} (m choose j),   (6.4)

where (m choose j) = m!/(j!(m − j)!) denotes the combinatorial coefficient.

The nesting property for m follows from the nesting property for m − 1. Finally, we verify that the minimum nonzero weight of RM(r, m) is 2^{m−r} as follows:

(a) if u = 0, then w_H((0, v)) = w_H(v) ≥ 2^{m−r} if v ≠ 0, since v ∈ RM(r − 1, m − 1);
(b) if u + v = 0, then u = v ∈ RM(r − 1, m − 1) and w_H((v, 0)) ≥ 2^{m−r} if v ≠ 0;
(c) if u ≠ 0 and u + v ≠ 0, then both u and u + v are in RM(r, m − 1) (since RM(r − 1, m − 1) is a subcode of RM(r, m − 1)), so

w_H((u, u + v)) = w_H(u) + w_H(u + v) ≥ 2 · 2^{m−1−r} = 2^{m−r}.

Equality clearly holds for (0, v), (v, 0) or (u, u) if we choose v or u as a minimum-weight codeword from their respective codes.
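The |u|u + v| construction (6.3) translates directly into a short recursive program; the following sketch (mine) builds RM(1, 3) and checks its size and minimum nonzero weight. It enumerates all codewords, so it is suitable only for small lengths.

```python
# A recursive sketch of the |u|u+v| construction (6.3), with codewords as tuples of 0/1.
# Exponential in the block length; for illustration on short codes only.

from itertools import product

def rm_code(r, m):
    if r < 0:                             # RM(-1, m): the trivial code {0} of length 2^m
        return {tuple([0] * (2 ** m))}
    if r >= m:                            # RM(m, m): the universe code of length 2^m
        return set(product([0, 1], repeat=2 ** m))
    left = rm_code(r, m - 1)              # u ranges over RM(r, m-1)
    right = rm_code(r - 1, m - 1)         # v ranges over RM(r-1, m-1)
    return {u + tuple((ui + vi) % 2 for ui, vi in zip(u, v)) for u in left for v in right}

C = rm_code(1, 3)                                      # should be the (8, 4, 4) code
print(len(C), min(sum(c) for c in C if any(c)))        # 16 4
```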

The |u|u + v| construction suggests the following tableau of RM codes:

(1, 1, 1),  (1, 0, ∞)
(2, 2, 1),  (2, 1, 2),  (2, 0, ∞)
(4, 4, 1),  (4, 3, 2),  (4, 1, 4),  (4, 0, ∞)
(8, 8, 1),  (8, 7, 2),  (8, 4, 4),  (8, 1, 8),  (8, 0, ∞)
(16, 16, 1), (16, 15, 2), (16, 11, 4), (16, 5, 8), (16, 1, 16), (16, 0, ∞)
(32, 32, 1), (32, 31, 2), (32, 26, 4), (32, 16, 8), (32, 6, 16), (32, 1, 32), (32, 0, ∞)

Figure 2. Tableau of Reed-Muller codes. The diagonals correspond to r = m (d = 1, universe codes); r = m − 1 (d = 2, SPC codes); r = m − 2 (d = 4, extended Hamming codes); k = n/2 (self-dual codes); r = 1 (d = n/2, biorthogonal codes); r = 0 (d = n, repetition codes); and r = −1 (d = ∞, trivial codes).

In this tableau each RM code lies halfway between the two codes of half the length that are used to construct it in the |u|u + v| construction, from which we can immediately deduce its dimension k.

Exercise 3. Compute the parameters (k, d) of the RM codes of lengths n = 64 and 128.

There is a known closed-form formula for the number N_d of codewords of minimum weight d = 2^{m−r} in RM(r, m):

N_d = 2^r ∏_{0≤i≤m−r−1} (2^{m−i} − 1)/(2^{m−r−i} − 1).   (6.5)

Example 4. The number of weight-8 words in the (32, 16, 8) code RM(2, 5) is

N_8 = 4 · (31 · 15 · 7)/(7 · 3 · 1) = 620.

The nominal coding gain of RM(2, 5) is γ_c(C) = 4 (6.02 dB); however, since K_b = N_8/k = 38.75, the effective coding gain by our rule of thumb is only about γ_eff(C) ≈ 5.0 dB.
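The dimension formula (6.4) and the minimum-weight multiplicity (6.5) are easy to evaluate; a small sketch (mine) reproducing the numbers of Example 4 and of the (16, 5, 8) code:

```python
# A sketch of (6.4) and (6.5): the dimension k(r,m) as a sum of binomial coefficients and
# the number N_d of minimum-weight codewords of RM(r,m).

from math import comb

def rm_dimension(r, m):
    return sum(comb(m, j) for j in range(r + 1))

def rm_Nd(r, m):
    num, den = 1, 1
    for i in range(m - r):                    # 0 <= i <= m-r-1
        num *= 2 ** (m - i) - 1
        den *= 2 ** (m - r - i) - 1
    return 2 ** r * num // den

print(rm_dimension(2, 5), rm_Nd(2, 5))        # 16 620 -> the (32,16,8) code of Example 4
print(rm_dimension(1, 4), rm_Nd(1, 4))        # 5 30   -> the (16,5,8) biorthogonal code
```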

The codes with r = m − 1 are single-parity-check (SPC) codes with d = 2. These codes have nominal coding gain 2k/n, which goes to 2 (3.01 dB) as n → ∞; however, since N_d = 2^m(2^m − 1)/2, we have K_b = 2^{m−1} → ∞, which ultimately limits the effective coding gain.

The codes with r = m − 2 are extended Hamming (EH) codes with d = 4. These codes have nominal coding gain 4k/n, which goes to 4 (6.02 dB) as n → ∞; however, since N_d = 2^m(2^m − 1)(2^m − 2)/24, we again have K_b → ∞.

Exercise 4 (optimizing SPC and EH codes). Using the rule of thumb that a factor of two increase in K_b costs 0.2 dB in effective coding gain, find the value of n for which an (n, n − 1, 2) SPC code has maximum effective coding gain, and compute this maximum in dB. Similarly, find m such that a (2^m, 2^m − m − 1, 4) extended Hamming code has maximum effective coding gain, using N_d = 2^m(2^m − 1)(2^m − 2)/24, and compute this maximum in dB.

The codes with r = 1 (first-order Reed-Muller codes) are interesting, because as shown in Exercise 5 they generate biorthogonal signal sets of dimension n = 2^m and size 2^{m+1}, with nominal coding gain (m + 1)/2 → ∞. It is known that as n → ∞ this sequence of codes can achieve arbitrarily small Pr(E) for any E_b/N_0 greater than the ultimate Shannon limit, namely E_b/N_0 > ln 2 (−1.59 dB).

Exercise 5 (biorthogonal codes). We have shown that the first-order Reed-Muller codes RM(1, m) have parameters (2^m, m + 1, 2^{m−1}), and that the (2^m, 1, 2^m) repetition code RM(0, m) is a subcode.

(a) Show that RM(1, m) has one word of weight 0, one word of weight 2^m, and 2^{m+1} − 2 words of weight 2^{m−1}. [Hint: first show that the RM(1, m) code consists of 2^m complementary codeword pairs {x, x + 1}.]

(b) Show that the Euclidean image of an RM(1, m) code is an M = 2^{m+1} biorthogonal signal set. [Hint: compute all inner products between code vectors.]

(c) Show that the code C′ consisting of all words in RM(1, m) with a 0 in any given coordinate position is a (2^m, m, 2^{m−1}) binary linear code, and that its Euclidean image is an M = 2^m orthogonal signal set. [Same hint as in part (a).]

(d) Show that the code C″ consisting of the code words of C′ with the given coordinate deleted ("punctured") is a binary linear (2^m − 1, m, 2^{m−1}) code, and that its Euclidean image is an M = 2^m simplex signal set. [Hint: use Exercise 7 of Chapter 5.]

In Exercise 2 of Chapter 1, it was shown how a 2^m-orthogonal signal set A can be constructed as the image of a 2^m × 2^m binary Hadamard matrix. The corresponding biorthogonal signal set ±A is identical to that constructed above from the (2^m, m + 1, 2^{m−1}) first-order RM code.

The code dual to RM(r, m) is RM(m − r − 1, m); this can be shown by recursion from the facts that the (1, 1) and (1, 0) codes are duals and that by bilinearity

⟨(u, u + v), (u′, u′ + v′)⟩ = ⟨u, u′⟩ + ⟨u + v, u′ + v′⟩ = ⟨u, v′⟩ + ⟨v, u′⟩ + ⟨v, v′⟩,

since ⟨u, u′⟩ + ⟨u, u′⟩ = 0. In particular, this confirms that the repetition and SPC codes are duals, and shows that the biorthogonal and extended Hamming codes are duals. This also shows that RM codes with k/n = 1/2 are self-dual.

The nominal coding gain of a rate-1/2 RM code of length 2^m (m odd) is 2^{(m−1)/2}, which goes to infinity as m → ∞. It seems likely that as n → ∞ this sequence of codes can achieve arbitrarily small Pr(E) for any E_b/N_0 greater than the Shannon limit for ρ = 1 b/2D, namely E_b/N_0 > 1 (0 dB).
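Exercise 5(b) can be checked by direct computation for m = 3; in the sketch below (mine), the generator matrix G is one assumed choice of basis for the (8, 4, 4) code RM(1, 3).

```python
# A sketch checking that the 2-PAM image of RM(1,3) is a biorthogonal signal set of size 16.
# G is an assumed basis (all-ones word plus the three coordinate functions); alpha = 1.

from itertools import product

G = [(1, 1, 1, 1, 1, 1, 1, 1),
     (0, 0, 0, 0, 1, 1, 1, 1),
     (0, 0, 1, 1, 0, 0, 1, 1),
     (0, 1, 0, 1, 0, 1, 0, 1)]

code = set()
for a in product([0, 1], repeat=4):
    w = tuple(sum(ai * gi for ai, gi in zip(a, col)) % 2 for col in zip(*G))
    code.add(w)

points = [[(-1) ** b for b in cw] for cw in code]             # Euclidean image with alpha = 1
inner = {sum(u[i] * v[i] for i in range(8)) for u in points for v in points if u != v}
print(len(points), inner)                                     # 16 points; inner products {0, -8}
```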

Exercise 6 (generator matrices for RM codes). Let square 2^m × 2^m matrices U_m, m ≥ 1, be specified recursively as follows. The matrix U_1 is the 2 × 2 matrix

U_1 = [ 1 0 ]
      [ 1 1 ].

The matrix U_m is the 2^m × 2^m matrix

U_m = [ U_{m−1}      0     ]
      [ U_{m−1}  U_{m−1}  ].

(In other words, U_m is the m-fold tensor product of U_1 with itself.)

(a) Show that RM(r, m) is generated by the rows of U_m of Hamming weight 2^{m−r} or greater. [Hint: observe that this holds for m = 1, and prove by recursion using the |u|u+v| construction.] For example, give a generator matrix for the (8, 4, 4) RM code.

(b) Show that the number of rows of U_m of weight 2^{m−r} is (m choose r). [Hint: use the fact that (m choose r) is the coefficient of z^r in the integer polynomial (1 + z)^m.]

(c) Conclude that the dimension of RM(r, m) is k(r, m) = ∑_{0≤j≤r} (m choose j).

6.4.1 Effective coding gains of RM codes

We provide below a table of the nominal spectral efficiency ρ, nominal coding gain γ_c, number of nearest neighbors N_d, error coefficient per bit K_b, and estimated effective coding gain γ_eff at P_b(E) ≈ 10^{-5} for various Reed-Muller codes, so that the student can consider these codes as components in system design exercises.

In later lectures, we will consider trellis representations and trellis decoding of RM codes. We give here two complexity parameters of the minimal trellises for these codes: the state complexity s (the binary logarithm of the maximum number of states in a minimal trellis), and the branch complexity t (the binary logarithm of the maximum number of branches per section in a minimal trellis). The latter parameter gives a more accurate estimate of decoding complexity.

code         ρ      γ_c (dB)   N_d     K_b      γ_eff (dB)   s    t
(8,7,2)      1.75   2.43       28      4        2.0
(8,4,4)      1.00   3.01       14      3.5      2.6
(16,15,2)    1.88   2.73       120     8        2.1
(16,11,4)    1.38   4.39       140     12.7     3.7
(16,5,8)     0.63   3.98       30      6        3.5
(32,31,2)    1.94   2.87       496     16       2.1
(32,26,4)    1.63   5.12       1240    47.7     4.0
(32,16,8)    1.00   6.02       620     38.75    5.0
(32,6,16)    0.38   4.77       62      10.3     4.1
(64,63,2)    1.97   2.94       2016    32       1.9
(64,57,4)    1.78   5.52       10416   182.7    4.0
(64,42,8)    1.31   7.20       11160   265.7    5.6
(64,22,16)   0.69   7.40       2604    118.4    6.0
(64,7,32)    0.22   5.44       126     18       4.6

Table 1. Parameters of RM codes with lengths n ≤ 64.
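The first five numeric columns of Table 1 follow from formulas already given (ρ = 2k/n, γ_c = kd/n, K_b = N_d/k, and the 0.2 dB-per-factor-of-two rule of thumb quoted in Exercise 4); the trellis complexities s and t do not. A small sketch (mine) of how such rows can be recomputed:

```python
# A sketch of how the derivable columns of Table 1 follow from earlier formulas; the rule
# of thumb subtracts about 0.2 dB per factor of two in K_b at Pb(E) ~ 1e-5.

import math

def table_row(n, k, d, Nd):
    rho = 2 * k / n
    gamma_c_dB = 10 * math.log10(k * d / n)
    Kb = Nd / k
    gamma_eff_dB = gamma_c_dB - 0.2 * math.log2(Kb)    # rule-of-thumb estimate only
    return rho, gamma_c_dB, Kb, gamma_eff_dB

print(table_row(32, 16, 8, 620))       # ~ (1.0, 6.02, 38.75, 5.0)
print(table_row(64, 22, 16, 2604))     # ~ (0.69, 7.40, 118.4, 6.0)
```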

6.5 Decoding of binary block codes

In this section we will first show that with binary codes MD decoding reduces to "maximum-reliability decoding." We will then discuss the penalty incurred by making hard decisions and then performing classical error-correction. We show how the penalty may be partially mitigated by using erasures, or rather completely by using generalized minimum distance (GMD) decoding.

6.5.1 Maximum-reliability decoding

All of our performance estimates assume minimum-distance (MD) decoding. In other words, given a received sequence r ∈ R^n, the receiver must find the signal s(x) for x ∈ C such that the squared distance ‖r − s(x)‖² is minimum. We will show that in the case of binary codes, MD decoding reduces to maximum-reliability (MR) decoding.

Since ‖s(x)‖² = nα² is independent of x with binary constellations s(C), MD decoding is equivalent to maximum-inner-product decoding: find the signal s(x) for x ∈ C such that the inner product

⟨r, s(x)⟩ = ∑_k r_k s(x_k)

is maximum. Since s(x_k) = (−1)^{x_k} α, the inner product may be expressed as

⟨r, s(x)⟩ = α ∑_k r_k (−1)^{x_k} = α ∑_k |r_k| sgn(r_k)(−1)^{x_k}.

The sign sgn(r_k) ∈ {±1} is often regarded as a "hard decision" based on r_k, indicating which of the two possible signals {±α} is more likely in that coordinate without taking into account the remaining coordinates. The magnitude |r_k| may be viewed as the reliability of the hard decision. This rule may thus be expressed as: find the codeword x ∈ C that maximizes the reliability

r(x | r) = ∑_k |r_k| (−1)^{e(x_k, r_k)},

where the "error" e(x_k, r_k) is 0 if the signs of s(x_k) and r_k agree, or 1 if they disagree. We call this rule maximum-reliability decoding.

Any of these optimum decision rules is easy to implement for small constellations s(C). However, without special tricks they require at least one computation for every codeword x ∈ C, and therefore become impractical when the number 2^k of codewords becomes large. Finding simpler decoding algorithms that give a good tradeoff of performance vs. complexity, perhaps only for special classes of codes, has therefore been the major theme of practical coding research. For example, the Wagner decoding rule, the earliest "soft-decision" decoding algorithm (circa 1955), is an optimum decoding rule for the special class of (n, n − 1, 2) SPC codes that requires many fewer than 2^{n−1} computations.

Exercise 7 ("Wagner decoding"). Let C be an (n, n − 1, 2) SPC code. The Wagner decoding rule is as follows. Make hard decisions on every symbol r_k, and check whether the resulting binary word is in C. If so, accept it. If not, change the hard decision in the symbol r_k for which the reliability metric |r_k| is minimum. Show that the Wagner decoding rule is an optimum decoding rule for SPC codes. [Hint: show that the Wagner rule finds the codeword x ∈ C that maximizes r(x | r).]
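The Wagner rule of Exercise 7 is only a few lines of code; a minimal sketch (mine), assuming the map s(0) = +α, s(1) = −α:

```python
# A sketch of the Wagner decoding rule for an (n, n-1, 2) SPC code: take hard decisions,
# and if their overall parity is odd, flip the least reliable symbol.

def wagner_decode(r):
    """r: received real n-tuple under s(0)=+alpha, s(1)=-alpha. Returns a binary n-tuple."""
    hard = [0 if rk >= 0 else 1 for rk in r]
    if sum(hard) % 2 == 1:                               # not an even-weight word
        worst = min(range(len(r)), key=lambda k: abs(r[k]))
        hard[worst] ^= 1                                 # flip the least reliable position
    return hard

print(wagner_decode([0.9, -1.1, 0.2, 0.8]))              # parity odd -> flip index 2: [0, 1, 1, 0]
```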

6.5.2 Hard decisions and error-correction

Early work on decoding of binary block codes assumed hard decisions on every symbol, yielding a hard-decision n-tuple y ∈ (F_2)^n. The main decoding step is then to find the codeword x ∈ C that is closest to y in Hamming space. This is called error-correction.

If C is a linear (n, k, d) code, then, since the Hamming metric is a true metric, no error can occur when a codeword x is sent unless the number of hard decision errors t = d_H(x, y) is at least as great as half the minimum Hamming distance, t ≥ d/2. For many classes of binary block codes, efficient algebraic error-correction algorithms exist that are guaranteed to decode correctly provided that 2t < d. This is called bounded-distance error-correction.

Example 5 (Hamming codes). The first binary error-correction codes were the Hamming codes (mentioned in Shannon's original paper). A Hamming code C is a (2^m − 1, 2^m − m − 1, 3) code that may be found by puncturing a (2^m, 2^m − m − 1, 4) extended Hamming RM(m − 2, m) code in any coordinate. Its dual C⊥ is a (2^m − 1, m, 2^{m−1}) code whose Euclidean image is a 2^m-simplex constellation. For example, the simplest Hamming code is the (3, 1, 3) repetition code; its dual is the (3, 2, 2) SPC code, whose image is the 4-simplex constellation of Figure 1. The generator matrix of C⊥ is an m × (2^m − 1) matrix H whose 2^m − 1 columns must run through the set of all nonzero binary m-tuples in some order (else C would not be guaranteed to correct any single error; see next paragraph).

Since d = 3, a Hamming code should be able to correct any single error. A simple method for doing so is to compute the "syndrome"

yH^T = (x + e)H^T = eH^T,

where e = x + y. If yH^T = 0, then y ∈ C and y is assumed to be correct. If yH^T ≠ 0, then the syndrome yH^T is equal to one of the rows in H^T, and a single error is assumed to have occurred in the corresponding position. Thus it is always possible to change any y ∈ (F_2)^n into a codeword by changing at most one bit. This implies that the 2^k "Hamming spheres" of radius 1 and size 2^m centered on the 2^k codewords x, which consist of x and the n = 2^m − 1 n-tuples y within Hamming distance 1 of x, form an exhaustive partition of the set of 2^n n-tuples that comprise Hamming n-space (F_2)^n. In summary, Hamming codes form a "perfect" Hamming sphere-packing of (F_2)^n, and have a simple single-error-correction algorithm.

We now show that even if an error-correcting decoder does optimal MD decoding in Hamming space, there is a loss in coding gain of the order of 3 dB relative to MD Euclidean-space decoding.

Assume an (n, k, d) binary linear code C with d odd (the situation is worse when d is even). Let x be the transmitted codeword; then there is at least one codeword at Hamming distance d from x, and thus at least one real n-tuple in s(C) at squared Euclidean distance 4α²d from s(x). For any ε > 0, a hard-decision decoding error will occur if the noise exceeds α + ε in any (d + 1)/2 of the places in which that word differs from x. Thus with hard decisions the minimum squared distance to the decision boundary in Euclidean space is α²(d + 1)/2. (For d even, it is α²d/2.) On the other hand, with "soft decisions" (reliability weights) and MD decoding, the minimum squared distance to any decision boundary in Euclidean space is α²d. To the accuracy of the union bound estimate, the argument of the Q√ function thus decreases with hard-decision decoding by a factor of (d + 1)/2d, or approximately 1/2 (−3 dB) when d is large. (When d is even, this factor is exactly 1/2.)
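To make the syndrome method of Example 5 concrete, here is a sketch (mine) of single-error correction for the (7, 4, 3) Hamming code; the check matrix H below is one assumed choice whose columns run through all nonzero 3-tuples.

```python
# A sketch of single-error correction for the (7,4,3) Hamming code by syndrome look-up:
# a nonzero syndrome y H^T equals the column of H at the erroneous position.

H = [                               # 3 x 7 check matrix; column j is the binary expansion of j+1
    (0, 0, 0, 1, 1, 1, 1),
    (0, 1, 1, 0, 0, 1, 1),
    (1, 0, 1, 0, 1, 0, 1),
]

def syndrome(y):
    return tuple(sum(hij * yj for hij, yj in zip(row, y)) % 2 for row in H)

def correct_single_error(y):
    s = syndrome(y)
    if s == (0, 0, 0):
        return list(y)                                       # y is already a codeword
    pos = next(j for j in range(7) if tuple(row[j] for row in H) == s)
    y = list(y)
    y[pos] ^= 1                                              # flip the indicated bit
    return y

x = [0, 0, 0, 0, 0, 0, 0]                                    # the all-zero codeword
y = x[:]; y[4] ^= 1                                          # one bit flipped in transit
print(correct_single_error(y))                               # recovers the all-zero codeword
```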

Example 6 (Hard and soft decoding of antipodal codes). Let C be the (2, 1, 2) binary code; then the two signal points in s(C) are antipodal, as shown in Figure 3(a) below. With hard decisions, real 2-space R² is partitioned into four quadrants, which must then be assigned to one or the other of the two signal points. Of course, two of the quadrants are assigned to the signal points that they contain. However, no matter how the other two quadrants are assigned, there will be at least one decision boundary at squared distance α² from a signal point, whereas with MD decoding the decision boundary is at squared distance 2α² from both signal points. The loss in the error exponent of P_b(E) is therefore a factor of 2 (3 dB).

Figure 3. Decision regions in R^n with hard decisions. (a) (2, 1, 2) code; (b) (3, 1, 3) code.

Similarly, if C is the (3, 1, 3) code, then R³ is partitioned by hard decisions into 8 octants, as shown in Figure 3(b). In this case (the simplest example of a Hamming code), it is clear how best to assign four octants to each signal point. The squared distance from each signal point to the nearest decision boundary is now 2α², compared to 3α² with "soft decisions" and MD decoding in Euclidean space, for a loss of 2/3 (1.76 dB) in the error exponent.

6.5.3 Erasure-and-error-correction

A decoding method halfway between hard-decision and "soft-decision" (reliability-based) techniques involves the use of "erasures." With this method, the first step of the receiver is to map each received signal r_k into one of three values, say {0, 1, ?}, where for some threshold T,

r_k → 0 if r_k > T;
r_k → 1 if r_k < −T;
r_k → ? if −T ≤ r_k ≤ T.

The decoder subsequently tries to map the ternary-valued n-tuple into the closest codeword x ∈ C in Hamming space, where the erased positions are ignored in measuring Hamming distance. If there are s erased positions, then the minimum distance between codewords is at least d − s in the unerased positions, so correct decoding is guaranteed if the number t of errors in the unerased positions satisfies t < (d − s)/2, or equivalently if 2t + s < d. For many classes of binary block codes, efficient algebraic erasure-and-error-correcting algorithms exist that are guaranteed to decode correctly if 2t + s < d. This is called bounded-distance erasure-and-error-correction.
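The three-level quantization step described above is straightforward; a minimal sketch (mine), with '?' standing for an erasure:

```python
# A sketch of the three-level quantizer used for erasure decoding: each received value is
# mapped to 0, 1, or an erasure '?' according to a threshold T.

def quantize(r, T):
    out = []
    for rk in r:
        if rk > T:
            out.append(0)            # confident hard decision for s(0) = +alpha
        elif rk < -T:
            out.append(1)            # confident hard decision for s(1) = -alpha
        else:
            out.append('?')          # erase low-reliability symbols
    return out

print(quantize([0.9, -0.1, -1.2, 0.3], T=0.4))    # [0, '?', 1, '?']
```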

Erasure-and-error-correction may be viewed as a form of MR decoding in which all reliabilities |r_k| are made equal in the unerased positions, and are set to 0 in the erased positions. The ternary-valued output allows a closer approximation to the optimum decision regions in Euclidean space than with hard decisions, and therefore reduces the loss. With an optimized threshold T, the loss is typically only about half as much (in dB).

Figure 4. Decision regions with hard decisions and erasures for the (2, 1, 2) code.

Example 6 (cont.). Figure 4 shows the 9 decision regions for the (2, 1, 2) code that result from hard decisions and/or erasures on each symbol. Three of the resulting regions are ambiguous. The minimum squared distances to these regions are

a² = 2(α − T)²;
b² = (α + T)².

To maximize the minimum of a² and b², we make a² = b² by choosing T = ((√2 − 1)/(√2 + 1))α, which yields

a² = b² = (8/(√2 + 1)²)α² = 1.372α².

This is about 1.38 dB better than the squared Euclidean distance α² achieved with hard decisions only, but is still 1.63 dB worse than the 2α² achieved with MD decoding.

Exercise 8 (Optimum threshold T). Let C be a binary code with minimum distance d, and let received symbols be mapped into hard decisions or erasures as above. Show that:

(a) For any integers t and s such that 2t + s ≥ d and for any decoding rule, there exists some pattern of t errors and s erasures that will cause a decoding error;

(b) The minimum squared distance from any signal point to its decoding decision boundary is equal to at least min_{2t+s≥d} {s(α − T)² + t(α + T)²};

(c) The value of T that maximizes this minimum squared distance is T = ((√2 − 1)/(√2 + 1))α, in which case the minimum squared distance is equal to (4/(√2 + 1)²)α²d = 0.686α²d. Again, this is a loss of 1.63 dB relative to the squared distance α²d that is achieved with MD decoding.
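The threshold optimization in Example 6 (cont.) and Exercise 8(c) can be checked numerically; a small sketch (mine), again with an assumed α = 1:

```python
# A numeric check of the optimized erasure threshold: with T = (sqrt(2)-1)/(sqrt(2)+1)*alpha
# the two ambiguous-region distances a^2 and b^2 coincide.

import math

alpha = 1.0
T = (math.sqrt(2) - 1) / (math.sqrt(2) + 1) * alpha
a2 = 2 * (alpha - T) ** 2
b2 = (alpha + T) ** 2
print(a2, b2)                                     # both ~ 1.372 * alpha^2
print(10 * math.log10(a2 / alpha ** 2))           # ~ 1.37 dB better than hard decisions only
print(10 * math.log10(2 * alpha ** 2 / a2))       # ~ 1.63 dB short of MD decoding
```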

6.5.4 Generalized minimum distance decoding

A further step in this direction that achieves almost the same performance as MD decoding, to the accuracy of the union bound estimate, yet still permits algebraic decoding algorithms, is generalized minimum distance (GMD) decoding.

In GMD decoding, the decoder keeps both the hard decision sgn(r_k) and the reliability |r_k| of each received symbol, and orders them in order of their reliability. The GMD decoder then performs a series of erasure-and-error decoding trials in which the s = d − 1, d − 3, ... least reliable symbols are erased. (The intermediate trials are not necessary because if d − s is even and 2t < d − s, then also 2t < d − s − 1, so the trial with one additional erasure will find the same codeword.) The number of such trials is d/2 if d is even, or (d + 1)/2 if d is odd; i.e., the number of trials needed is ⌈d/2⌉.

Each trial may produce a candidate codeword. The set of ⌈d/2⌉ trials may thus produce up to ⌈d/2⌉ distinct candidate codewords. These words may finally be compared according to their reliability r(x | r) (or any equivalent optimum metric), and the best candidate chosen.

Example 7. For an (n, n − 1, 2) SPC code, GMD decoding performs just one trial with the least reliable symbol erased; the resulting candidate codeword is the unique codeword that agrees with all unerased symbols. Therefore in this case the GMD decoding rule is equivalent to the Wagner decoding rule (Exercise 7), which implies that it is optimum.

It can be shown that no error can occur with a GMD decoder provided that the squared norm ‖n‖² of the noise vector is less than α²d; i.e., the squared distance from any signal point to its decision boundary is α²d, just as for MD decoding. Thus there is no loss in coding gain or error exponent compared to MD decoding. It has been shown that for the most important classes of algebraic block codes, GMD decoding can be performed with little more complexity than ordinary hard-decision or erasures-and-errors decoding.

Furthermore, it has been shown that not only is the error exponent of GMD decoding equal to that of optimum MD decoding, but also the error coefficient and thus the union bound estimate are the same, provided that GMD decoding is augmented to include a d-erasure-correction trial (a purely algebraic solution of the n − k linear parity-check equations for the d unknown erased symbols).

However, GMD decoding is a bounded-distance decoding algorithm, so its decision regions are like spheres of squared radius α²d that lie within the MD decision regions R_j. For this reason GMD decoding is inferior to MD decoding, typically improving over erasure-and-error-correction by 1 dB or less. GMD decoding has rarely been used in practice.

6.5.5 Summary

In conclusion, hard decisions allow the use of efficient algebraic decoding algorithms, but incur a significant SNR penalty, of the order of 3 dB. By using erasures, about half of this penalty can be avoided. With GMD decoding, efficient algebraic decoding algorithms can in principle be used with no loss in performance, at least as estimated by the union bound estimate.
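To make the trial schedule of Section 6.5.4 concrete, the following sketch (mine) implements GMD decoding with a brute-force erasure-and-error trial over an explicit codebook in place of an algebraic decoder; it is illustrative only and exponential in the code size.

```python
# A sketch of GMD decoding: trials erase the d-1, d-3, ... least reliable symbols, each trial
# does bounded-distance decoding (here by exhaustive search over the codebook C), and the
# surviving candidates are compared by the reliability metric r(x|r).

def reliability(x, r):
    # r(x|r) = sum_k |r_k| * (-1)^{e(x_k, r_k)} under the map s(0)=+alpha, s(1)=-alpha
    return sum(abs(rk) if (rk >= 0) == (xk == 0) else -abs(rk) for xk, rk in zip(x, r))

def gmd_decode(C, d, r):
    hard = [0 if rk >= 0 else 1 for rk in r]
    order = sorted(range(len(r)), key=lambda k: abs(r[k]))     # least reliable first
    candidates = []
    for s in range(d - 1, -1, -2):                             # s = d-1, d-3, ...
        erased = set(order[:s])
        unerased = [k for k in range(len(r)) if k not in erased]
        for x in C:                                            # bounded-distance trial
            t = sum(x[k] != hard[k] for k in unerased)
            if 2 * t + s < d:
                candidates.append(x)
                break
    return max(candidates, key=lambda x: reliability(x, r)) if candidates else tuple(hard)

C = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}   # (3,2,2) SPC code: one trial with s = 1
print(gmd_decode(C, 2, [0.9, -0.2, -1.0]))         # (0, 1, 1), as the Wagner rule would give
```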


More information

A NOTE ON HILBERT SCHEMES OF NODAL CURVES. Ziv Ran

A NOTE ON HILBERT SCHEMES OF NODAL CURVES. Ziv Ran A NOTE ON HILBERT SCHEMES OF NODAL CURVES Ziv Ran Abstract. We study the Hilbert schee and punctual Hilbert schee of a nodal curve, and the relative Hilbert schee of a faily of curves acquiring a node.

More information

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition

Upper bound on false alarm rate for landmine detection and classification using syntactic pattern recognition Upper bound on false alar rate for landine detection and classification using syntactic pattern recognition Ahed O. Nasif, Brian L. Mark, Kenneth J. Hintz, and Nathalia Peixoto Dept. of Electrical and

More information

Order Recursion Introduction Order versus Time Updates Matrix Inversion by Partitioning Lemma Levinson Algorithm Interpretations Examples

Order Recursion Introduction Order versus Time Updates Matrix Inversion by Partitioning Lemma Levinson Algorithm Interpretations Examples Order Recursion Introduction Order versus Tie Updates Matrix Inversion by Partitioning Lea Levinson Algorith Interpretations Exaples Introduction Rc d There are any ways to solve the noral equations Solutions

More information

E0 370 Statistical Learning Theory Lecture 5 (Aug 25, 2011)

E0 370 Statistical Learning Theory Lecture 5 (Aug 25, 2011) E0 370 Statistical Learning Theory Lecture 5 Aug 5, 0 Covering Nubers, Pseudo-Diension, and Fat-Shattering Diension Lecturer: Shivani Agarwal Scribe: Shivani Agarwal Introduction So far we have seen how

More information

Ch 12: Variations on Backpropagation

Ch 12: Variations on Backpropagation Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith

More information

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition

Pattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition

More information

A Bernstein-Markov Theorem for Normed Spaces

A Bernstein-Markov Theorem for Normed Spaces A Bernstein-Markov Theore for Nored Spaces Lawrence A. Harris Departent of Matheatics, University of Kentucky Lexington, Kentucky 40506-0027 Abstract Let X and Y be real nored linear spaces and let φ :

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I Contents 1. Preliinaries 2. The ain result 3. The Rieann integral 4. The integral of a nonnegative

More information

VC Dimension and Sauer s Lemma

VC Dimension and Sauer s Lemma CMSC 35900 (Spring 2008) Learning Theory Lecture: VC Diension and Sauer s Lea Instructors: Sha Kakade and Abuj Tewari Radeacher Averages and Growth Function Theore Let F be a class of ±-valued functions

More information

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

a a a a a a a m a b a b

a a a a a a a m a b a b Algebra / Trig Final Exa Study Guide (Fall Seester) Moncada/Dunphy Inforation About the Final Exa The final exa is cuulative, covering Appendix A (A.1-A.5) and Chapter 1. All probles will be ultiple choice

More information

CS Lecture 13. More Maximum Likelihood

CS Lecture 13. More Maximum Likelihood CS 6347 Lecture 13 More Maxiu Likelihood Recap Last tie: Introduction to axiu likelihood estiation MLE for Bayesian networks Optial CPTs correspond to epirical counts Today: MLE for CRFs 2 Maxiu Likelihood

More information

Descent polynomials. Mohamed Omar Department of Mathematics, Harvey Mudd College, 301 Platt Boulevard, Claremont, CA , USA,

Descent polynomials. Mohamed Omar Department of Mathematics, Harvey Mudd College, 301 Platt Boulevard, Claremont, CA , USA, Descent polynoials arxiv:1710.11033v2 [ath.co] 13 Nov 2017 Alexander Diaz-Lopez Departent of Matheatics and Statistics, Villanova University, 800 Lancaster Avenue, Villanova, PA 19085, USA, alexander.diaz-lopez@villanova.edu

More information

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES

Proc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Proc. of the IEEE/OES Seventh Working Conference on Current Measureent Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Belinda Lipa Codar Ocean Sensors 15 La Sandra Way, Portola Valley, CA 98 blipa@pogo.co

More information

Support Vector Machines MIT Course Notes Cynthia Rudin

Support Vector Machines MIT Course Notes Cynthia Rudin Support Vector Machines MIT 5.097 Course Notes Cynthia Rudin Credit: Ng, Hastie, Tibshirani, Friedan Thanks: Şeyda Ertekin Let s start with soe intuition about argins. The argin of an exaple x i = distance

More information

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,

More information

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm

Symbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm Acta Polytechnica Hungarica Vol., No., 04 Sybolic Analysis as Universal Tool for Deriving Properties of Non-linear Algoriths Case study of EM Algorith Vladiir Mladenović, Miroslav Lutovac, Dana Porrat

More information

AN ALGEBRAIC AND COMBINATORIAL APPROACH TO THE CONSTRUCTION OF EXPERIMENTAL DESIGNS

AN ALGEBRAIC AND COMBINATORIAL APPROACH TO THE CONSTRUCTION OF EXPERIMENTAL DESIGNS Geotec., Const. Mat. & Env., ISSN: 2186-2982(P), 2186-2990(O), Japan AN ALGEBRAIC AND COMBINATORIAL APPROACH TO THE CONSTRUCTION OF EXPERIMENTAL DESIGNS Julio Roero 1 and Scott H. Murray 2 1 Departent

More information

lecture 36: Linear Multistep Mehods: Zero Stability

lecture 36: Linear Multistep Mehods: Zero Stability 95 lecture 36: Linear Multistep Mehods: Zero Stability 5.6 Linear ultistep ethods: zero stability Does consistency iply convergence for linear ultistep ethods? This is always the case for one-step ethods,

More information

Bipartite subgraphs and the smallest eigenvalue

Bipartite subgraphs and the smallest eigenvalue Bipartite subgraphs and the sallest eigenvalue Noga Alon Benny Sudaov Abstract Two results dealing with the relation between the sallest eigenvalue of a graph and its bipartite subgraphs are obtained.

More information

(t, m, s)-nets and Maximized Minimum Distance, Part II

(t, m, s)-nets and Maximized Minimum Distance, Part II (t,, s)-nets and Maxiized Miniu Distance, Part II Leonhard Grünschloß and Alexander Keller Abstract The quality paraeter t of (t,, s)-nets controls extensive stratification properties of the generated

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket.

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket. Generalized eigenfunctions and a Borel Theore on the Sierpinski Gasket. Kasso A. Okoudjou, Luke G. Rogers, and Robert S. Strichartz May 26, 2006 1 Introduction There is a well developed theory (see [5,

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance

More information

Error Exponents in Asynchronous Communication

Error Exponents in Asynchronous Communication IEEE International Syposiu on Inforation Theory Proceedings Error Exponents in Asynchronous Counication Da Wang EECS Dept., MIT Cabridge, MA, USA Eail: dawang@it.edu Venkat Chandar Lincoln Laboratory,

More information

Page 1 Lab 1 Elementary Matrix and Linear Algebra Spring 2011

Page 1 Lab 1 Elementary Matrix and Linear Algebra Spring 2011 Page Lab Eleentary Matri and Linear Algebra Spring 0 Nae Due /03/0 Score /5 Probles through 4 are each worth 4 points.. Go to the Linear Algebra oolkit site ransforing a atri to reduced row echelon for

More information

About the definition of parameters and regimes of active two-port networks with variable loads on the basis of projective geometry

About the definition of parameters and regimes of active two-port networks with variable loads on the basis of projective geometry About the definition of paraeters and regies of active two-port networks with variable loads on the basis of projective geoetry PENN ALEXANDR nstitute of Electronic Engineering and Nanotechnologies "D

More information

Kinetic Theory of Gases: Elementary Ideas

Kinetic Theory of Gases: Elementary Ideas Kinetic Theory of Gases: Eleentary Ideas 17th February 2010 1 Kinetic Theory: A Discussion Based on a Siplified iew of the Motion of Gases 1.1 Pressure: Consul Engel and Reid Ch. 33.1) for a discussion

More information

An Algorithm for Posynomial Geometric Programming, Based on Generalized Linear Programming

An Algorithm for Posynomial Geometric Programming, Based on Generalized Linear Programming An Algorith for Posynoial Geoetric Prograing, Based on Generalized Linear Prograing Jayant Rajgopal Departent of Industrial Engineering University of Pittsburgh, Pittsburgh, PA 526 Dennis L. Bricer Departent

More information

The Fundamental Basis Theorem of Geometry from an algebraic point of view

The Fundamental Basis Theorem of Geometry from an algebraic point of view Journal of Physics: Conference Series PAPER OPEN ACCESS The Fundaental Basis Theore of Geoetry fro an algebraic point of view To cite this article: U Bekbaev 2017 J Phys: Conf Ser 819 012013 View the article

More information

Lower Bounds for Quantized Matrix Completion

Lower Bounds for Quantized Matrix Completion Lower Bounds for Quantized Matrix Copletion Mary Wootters and Yaniv Plan Departent of Matheatics University of Michigan Ann Arbor, MI Eail: wootters, yplan}@uich.edu Mark A. Davenport School of Elec. &

More information

Topic 5a Introduction to Curve Fitting & Linear Regression

Topic 5a Introduction to Curve Fitting & Linear Regression /7/08 Course Instructor Dr. Rayond C. Rup Oice: A 337 Phone: (95) 747 6958 E ail: rcrup@utep.edu opic 5a Introduction to Curve Fitting & Linear Regression EE 4386/530 Coputational ethods in EE Outline

More information

Optimal Jamming Over Additive Noise: Vector Source-Channel Case

Optimal Jamming Over Additive Noise: Vector Source-Channel Case Fifty-first Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 2-3, 2013 Optial Jaing Over Additive Noise: Vector Source-Channel Case Erah Akyol and Kenneth Rose Abstract This paper

More information

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians

Using EM To Estimate A Probablity Density With A Mixture Of Gaussians Using EM To Estiate A Probablity Density With A Mixture Of Gaussians Aaron A. D Souza adsouza@usc.edu Introduction The proble we are trying to address in this note is siple. Given a set of data points

More information

Non-Parametric Non-Line-of-Sight Identification 1

Non-Parametric Non-Line-of-Sight Identification 1 Non-Paraetric Non-Line-of-Sight Identification Sinan Gezici, Hisashi Kobayashi and H. Vincent Poor Departent of Electrical Engineering School of Engineering and Applied Science Princeton University, Princeton,

More information

Learnability and Stability in the General Learning Setting

Learnability and Stability in the General Learning Setting Learnability and Stability in the General Learning Setting Shai Shalev-Shwartz TTI-Chicago shai@tti-c.org Ohad Shair The Hebrew University ohadsh@cs.huji.ac.il Nathan Srebro TTI-Chicago nati@uchicago.edu

More information

Kinetic Theory of Gases: Elementary Ideas

Kinetic Theory of Gases: Elementary Ideas Kinetic Theory of Gases: Eleentary Ideas 9th February 011 1 Kinetic Theory: A Discussion Based on a Siplified iew of the Motion of Gases 1.1 Pressure: Consul Engel and Reid Ch. 33.1) for a discussion of

More information

Birthday Paradox Calculations and Approximation

Birthday Paradox Calculations and Approximation Birthday Paradox Calculations and Approxiation Joshua E. Hill InfoGard Laboratories -March- v. Birthday Proble In the birthday proble, we have a group of n randoly selected people. If we assue that birthdays

More information

(a) As a reminder, the classical definition of angular momentum is: l = r p

(a) As a reminder, the classical definition of angular momentum is: l = r p PHYSICS T8: Standard Model Midter Exa Solution Key (216) 1. [2 points] Short Answer ( points each) (a) As a reinder, the classical definition of angular oentu is: l r p Based on this, what are the units

More information

arxiv: v1 [math.na] 10 Oct 2016

arxiv: v1 [math.na] 10 Oct 2016 GREEDY GAUSS-NEWTON ALGORITHM FOR FINDING SPARSE SOLUTIONS TO NONLINEAR UNDERDETERMINED SYSTEMS OF EQUATIONS MÅRTEN GULLIKSSON AND ANNA OLEYNIK arxiv:6.395v [ath.na] Oct 26 Abstract. We consider the proble

More information

Multi-Scale/Multi-Resolution: Wavelet Transform

Multi-Scale/Multi-Resolution: Wavelet Transform Multi-Scale/Multi-Resolution: Wavelet Transfor Proble with Fourier Fourier analysis -- breaks down a signal into constituent sinusoids of different frequencies. A serious drawback in transforing to the

More information

On the Inapproximability of Vertex Cover on k-partite k-uniform Hypergraphs

On the Inapproximability of Vertex Cover on k-partite k-uniform Hypergraphs On the Inapproxiability of Vertex Cover on k-partite k-unifor Hypergraphs Venkatesan Guruswai and Rishi Saket Coputer Science Departent Carnegie Mellon University Pittsburgh, PA 1513. Abstract. Coputing

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

Projectile Motion with Air Resistance (Numerical Modeling, Euler s Method)

Projectile Motion with Air Resistance (Numerical Modeling, Euler s Method) Projectile Motion with Air Resistance (Nuerical Modeling, Euler s Method) Theory Euler s ethod is a siple way to approxiate the solution of ordinary differential equations (ode s) nuerically. Specifically,

More information

Hybrid System Identification: An SDP Approach

Hybrid System Identification: An SDP Approach 49th IEEE Conference on Decision and Control Deceber 15-17, 2010 Hilton Atlanta Hotel, Atlanta, GA, USA Hybrid Syste Identification: An SDP Approach C Feng, C M Lagoa, N Ozay and M Sznaier Abstract The

More information

A := A i : {A i } S. is an algebra. The same object is obtained when the union in required to be disjoint.

A := A i : {A i } S. is an algebra. The same object is obtained when the union in required to be disjoint. 59 6. ABSTRACT MEASURE THEORY Having developed the Lebesgue integral with respect to the general easures, we now have a general concept with few specific exaples to actually test it on. Indeed, so far

More information