Decoding Linear Block Codes Using a Priority-First Search: Performance Analysis and Suboptimal Version


IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 44, NO. 3, MAY 1998

Decoding Linear Block Codes Using a Priority-First Search: Performance Analysis and Suboptimal Version

Yunghsiang S. Han, Member, IEEE, Carlos R. P. Hartmann, Fellow, IEEE, and Kishan G. Mehrotra

Abstract—An efficient maximum-likelihood soft-decision decoding algorithm for linear block codes using a generalized Dijkstra's algorithm was proposed by Han, Hartmann, and Chen. In this correspondence we prove that this algorithm is efficient for most practical communication systems, where the probability of error is less than 10^-3, by finding an upper bound on the computational effort of the algorithm. A suboptimal decoding algorithm is also proposed. The performance of this suboptimal decoding algorithm is within 0.25 dB of the performance of an optimal decoding algorithm for the (104, 52) binary extended quadratic residue code, and within 0.5 dB of the optimal performance for the (128, 64) binary BCH code, respectively.

Index Terms—Block codes, decoding, Dijkstra's algorithm, maximum-likelihood, soft-decision, suboptimal.

I. INTRODUCTION

The use of block codes is a well-known error-control technique for reliable transmission of digital information over noisy communication channels. Linear block codes with good coding gains have been known for many years; however, these block codes have not been used in practice for lack of an efficient soft-decision decoding algorithm. Several researchers [2], [5], [22], [25] have presented techniques for decoding linear block codes that convert the decoding problem into a graph-search problem on a trellis derived from a parity-check matrix of the code. Thus the maximum-likelihood decoding (MLD) rule can be implemented by applying the Viterbi algorithm [24] to this trellis. In practice, however, this breadth-first search scheme can be applied only to codes with small redundancy or to codes with a small number of codewords [16].
Some coset decoding schemes have been proposed [1], [18], [19], [3]; however, they depend on the selection of a specific subcode. An efficient algorithm has also been proposed for long high-rate codes and for short- and moderate-length codes [4]. We recently proposed a novel maximum-likelihood soft-decision decoding algorithm for linear block codes [11]-[13]. This algorithm uses a generalization of Dijkstra's algorithm (GDA) [17] to search through the trellis for a code equivalent to the transmitted code. The use of this priority-first search strategy drastically reduces the decoding search space and results in an efficient optimal soft-decision decoding algorithm for linear block codes. Furthermore, in contrast with Wolf's algorithm [25], the decoding effort of our algorithm is adaptable to the noise level. In Section II we review the MLD of linear block codes, describe the code tree for a linear code, and briefly state the decoding algorithm proposed in [13]. In Section III we give an upper bound on the computational effort of this algorithm, and in Section IV we present a suboptimal decoding algorithm.

Manuscript received April 11, 1994; revised September 4. This work was supported in part by the National Science Foundation under Grant NCR-954 and used the computational facilities of the Northeast Parallel Architectures Center (NPAC) at Syracuse University. The work of Y. S. Han was supported in part by the National Science Council R.O.C. under Grant NSC E. The material in this correspondence was presented in part at the IEEE International Symposium on Information Theory, Trondheim, Norway, June 1994. Y. S. Han is with the Department of Computer Science and Information Engineering, National Chi Nan University, Taiwan, R.O.C. C. R. P. Hartmann and K. G. Mehrotra are with the Department of Electrical Engineering and Computer Science, Syracuse University, Syracuse, NY USA. Publisher Item Identifier S (98)361-X.
Simulation results for the (48, 24) and the (104, 52) binary extended quadratic residue codes, and the (128, 64) binary extended Bose-Chaudhuri-Hocquenghem (BCH) code, are given in Section V, and concluding remarks in Section VI.

II. PRELIMINARIES

Let C be a binary (n, k) linear block code with generator matrix G, and let c = (c_0, c_1, ..., c_{n-1}) be a codeword of C transmitted over a time-discrete memoryless channel with output alphabet B. Furthermore, let r = (r_0, r_1, ..., r_{n-1}), r_j ∈ B, denote the received vector, and assume that Pr(r_j | c_i) > 0 for all r_j ∈ B and c_i ∈ GF(2). Let ĉ be an estimate of the transmitted codeword c. The maximum-likelihood decoding rule (MLD rule) for a time-discrete memoryless channel can be formulated as [3]: set ĉ = c_ℓ, where c_ℓ ∈ C and

    ∑_{j=0}^{n-1} (φ_j - (-1)^{c_{ℓj}})² ≤ ∑_{j=0}^{n-1} (φ_j - (-1)^{c_{ij}})²  for all c_i ∈ C    (1)

where

    φ_j = ln [Pr(r_j | 0) / Pr(r_j | 1)].    (2)

We, therefore, may consider that the received vector is φ = (φ_0, φ_1, ..., φ_{n-1}). In the special case where the codewords of C have equal probability of being transmitted, the MLD rule minimizes error probability.

Our decoding algorithm (presented in [13]) uses the priority-first search strategy, thus avoiding traversing the entire trellis. Guided by an evaluation function f, it searches through a graph that is a trellis for a code C*, which is equivalent to code C. C* is obtained from C by permuting the positions of codewords of C in such a way that the first k positions of codewords in C* correspond to the most reliable linearly independent positions in the received vector. Let G* be a generator matrix of C* whose first k columns form a k × k identity matrix. In this decoding algorithm, the vector φ* = (φ*_0, φ*_1, ..., φ*_{n-1}) is used as the received vector. It is obtained by permuting the positions of φ in the same manner in which the columns of G are permuted to obtain G*. Since the probability is very small that our decoding algorithm will revisit a node of the trellis, our implementation did not check for repeated nodes [13]. In this case, the graph where the search is performed is a code tree.
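For the AWGN channel described at the end of this section, the log-likelihood ratio (2) reduces to φ_j = 4√E r_j / N_0 (a standard computation from the channel density), and rule (1) picks the codeword whose bipolar image is closest to φ. The following is a minimal brute-force sketch of rule (1), using a toy (3, 1) repetition code chosen purely for illustration (it is not the code-tree search of the actual algorithm):

```python
import math

def phi(r, E=1.0, N0=1.0):
    # LLR (2) for the AWGN channel with antipodal signaling:
    # ln(Pr(r|0)/Pr(r|1)) = 4*sqrt(E)*r/N0.
    return 4.0 * math.sqrt(E) * r / N0

def mld_brute_force(phis, codewords):
    # MLD rule (1): choose the codeword minimizing
    # sum_j (phi_j - (-1)^{c_j})^2.
    def cost(c):
        return sum((p - (-1) ** b) ** 2 for p, b in zip(phis, c))
    return min(codewords, key=cost)

# Toy (3, 1) repetition code, for illustration only.
code = [(0, 0, 0), (1, 1, 1)]
received = (0.9, -0.2, 0.4)
best = mld_brute_force([phi(x) for x in received], code)
```

Exhaustive minimization is feasible only for tiny codes; the point of the priority-first search below is precisely to avoid it.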
A code tree is a way to represent every codeword of an (n, k) code C* as a path through a tree containing n + 1 levels. In the code tree, every path is totally distinct from every other path. The leftmost node is called the start node, which is at level 0. There are two branches, labeled 0 and 1, respectively, that leave each node at the first k levels. After the first k levels, there is only one branch leaving each node. The 2^k rightmost nodes are called goal nodes, which are at level n. To determine the sequence of labels encountered when traversing a path from a node at level k to a goal node, let G* be a generator matrix of C* whose first k columns form the k × k identity matrix. Furthermore, let c_0, c_1, ..., c_{k-1} be the sequence of labels encountered when traversing a path from the start node to a node m at level k. Then c_k, c_{k+1}, ..., c_{n-1}, the sequence of labels encountered when traversing a path from node m to a goal node, can be obtained as follows:

    (c_0, c_1, ..., c_{k-1}, c_k, c_{k+1}, ..., c_{n-1}) = (c_0, c_1, ..., c_{k-1}) G*.

The cost of the branch from a node at level t to a node at level t + 1 is assigned the value (φ*_t - (-1)^{c*_t})², where c*_t is the label of the branch. Thus the solution of the decoding problem is converted into finding a lowest cost path from the start node to a goal node. Such a path will be called an optimal path.

We define the evaluation function f for every node m in the code tree as f(m) = g(m) + h(m), where g(m) is the cost of the path from the start node to node m and h(m) is an estimate of the minimum cost among all the paths from node m to goal nodes. The cost of a path is obtained by summing all the branch costs encountered while constructing this path. The GDA requires that for all nodes m_i and m_j such that node m_j is an immediate successor of node m_i,

    h(m_i) ≤ h(m_j) + c(m_i, m_j)    (3)

where c(m_i, m_j) is the branch cost between node m_i and node m_j. This requirement guarantees that the GDA will find an optimal path. In the GDA, the next node to be expanded is the one with the smallest value of f on the list of all leaf nodes (list OPEN) of the subtree constructed so far by the algorithm. Thus list OPEN must be kept ordered according to the values f of its nodes. Every time the GDA expands a node, it calculates the values f of the two immediate successors of this node and then inserts these two successors into list OPEN. In this process the nodes the GDA visits are these two successors. All the nodes on list OPEN also keep the labels of the paths from the start node to them, which can be used to calculate function f.
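The unique continuation of a path from level k to a goal node is just the multiplication by G* described above. A small sketch over GF(2), with a hypothetical systematic (6, 3) generator matrix chosen only for illustration:

```python
# Hypothetical systematic generator matrix G* of a (6, 3) code; only its
# structure (first k columns form an identity matrix) matters here.
G_STAR = [
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

def complete_path(prefix, G=G_STAR):
    # The labels (c_0, ..., c_{k-1}) of a node at level k determine the
    # whole codeword: (c_0, ..., c_{n-1}) = (c_0, ..., c_{k-1}) G* over GF(2).
    n = len(G[0])
    return tuple(sum(prefix[i] * G[i][j] for i in range(len(G))) % 2
                 for j in range(n))

codeword = complete_path((1, 0, 1))
```

Because the first k columns of G* form an identity matrix, the first k labels of the returned codeword always reproduce the prefix.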
When the algorithm chooses to expand a goal node, it is time to stop because the algorithm has constructed a path with minimum cost. For practical applications there may exist many functions h that satisfy inequality (3). Following are some results presented in [13] that can be used to design a suitable function h to reduce the number of nodes visited.

Theorem 1: Let h*(m) be the minimum cost among all the paths from node m to goal nodes and f*(m) = g(m) + h*(m). For every node m, if h(m) satisfies inequality (3), then h(m) ≤ h*(m).

Theorem 2: Let two functions

    f_1(m) = g(m) + h_1(m) and f_2(m) = g(m) + h_2(m)

satisfy h_1(m) ≤ h_2(m) ≤ h*(m) for every non-goal node m. Furthermore, assume there exists a unique optimal path. Then the GDA, using evaluation function f_2, will never expand more nodes than the algorithm using evaluation function f_1.

Theorem 3: Assume that there exists a unique optimal path and that f*(m_start) (= h*(m_start)) is the cost of the optimal path, where m_start is the start node. Then, for any node m selected for expansion (opened) by the GDA,

    f(m) ≤ f*(m_start).

From the above theorems, we intend to design a function h such that the value h(m) for any non-goal node m is as large as possible; however, the computational effort of h(m) is usually higher when h(m) is larger. The best function h we may have is h*. Usually, the computation of h*(m) involves the search for a path from node m to a goal node with minimum cost, and such a search is intractable. Thus there is a tradeoff between the number of nodes visited and the computational complexity of function h. Normally, to define a good function h we need to have some knowledge of the structure of the graph where the search is performed. We use some properties of linear block codes to define our function h [13] that satisfies inequality (3).
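The priority-first search itself can be sketched as follows, assuming a binary heap for list OPEN and taking h = 0 (the trivial choice, which satisfies inequality (3) and reduces the GDA to Dijkstra's algorithm); the generator matrix and received values are illustrative only:

```python
import heapq

def gda_decode(phis, G, h):
    # Priority-first search over the code tree of a systematic k x n
    # generator matrix G.  h(labels) must satisfy inequality (3); the demo
    # below uses h = 0, which turns the GDA into Dijkstra's algorithm.
    k, n = len(G), len(G[0])

    def branch_cost(level, bit):
        return (phis[level] - (-1) ** bit) ** 2

    def tail(prefix):
        # Unique continuation from level k to the goal node (via G).
        return [sum(prefix[i] * G[i][j] for i in range(k)) % 2
                for j in range(k, n)]

    open_list = [(h(()), 0.0, (), False)]  # entries: (f, g, labels, complete)
    while open_list:
        f, g, labels, complete = heapq.heappop(open_list)
        if complete:
            return labels  # lowest-cost codeword: labels of an optimal path
        if len(labels) == k:
            g += sum(branch_cost(k + j, b) for j, b in enumerate(tail(labels)))
            heapq.heappush(open_list, (g, g, labels + tuple(tail(labels)), True))
        else:  # expand: the two immediate successors go onto list OPEN
            for bit in (0, 1):
                g2 = g + branch_cost(len(labels), bit)
                new = labels + (bit,)
                heapq.heappush(open_list, (g2 + h(new), g2, new, False))

G_DEMO = [[1, 0, 0, 1, 1, 0],
          [0, 1, 0, 0, 1, 1],
          [0, 0, 1, 1, 0, 1]]
phis_demo = [-3.6, 3.2, -4.4, 4.0, -2.8, -4.8]   # noisy version of (1,0,1,0,1,1)
decoded = gda_decode(phis_demo, G_DEMO, lambda labels: 0.0)
```

Since every branch cost is nonnegative, the first completed codeword popped from the heap is guaranteed to be a minimum-cost one; a stronger h prunes the search earlier, which is the subject of the next paragraphs.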
For every received vector, since we order the components of φ according to their reliability, the properties that we use to define function h must be invariant under any permutation of the positions of the codewords with which we obtain C* from C; otherwise, we would need to define a different function h for every received vector, which is impractical. Since the Hamming distance between any two codewords is invariant under the permutations with which we obtain C* from C, our heuristic function is designed to take into consideration the fact that the Hamming distance between any two codewords of C* must belong to HW, where HW = {w_i | i ∈ I} is the set of all distinct Hamming weights that codewords of C may have. Furthermore, assume w_0 < w_1 < ... < w_I. Let c* be a given codeword of C*. Our function h is defined with respect to c*, which is called the seed of the decoding algorithm.

1) For nodes at level ℓ, with 0 ≤ ℓ ≤ k - 1: Let m be a node at level ℓ, and let v_0, v_1, ..., v_{ℓ-1} be the labels of the path P_m from the start node to node m. Let the set T(m) contain all binary n-tuples v such that their first ℓ entries are the labels of P_m and d_H(v, c*) ∈ HW, where d_H(x, y) is the Hamming distance between x and y. That is,

    T(m) = {v | v = (v_0, v_1, ..., v_{ℓ-1}, v_ℓ, ..., v_{n-1}) and d_H(v, c*) ∈ HW}.

Note that T(m) ≠ ∅. This can easily be seen by considering the binary k-tuple u = (v_0, v_1, ..., v_{ℓ-1}, 0, ..., 0) and noting that u G* ∈ T(m). We now define function h as

    h(m) = min_{v ∈ T(m)} ∑_{i=ℓ}^{n-1} (φ*_i - (-1)^{v_i})².

2) For nodes at level ℓ, with k ≤ ℓ ≤ n: Let m be a node at level ℓ. We define function h as

    h(m) = ∑_{i=ℓ}^{n-1} (φ*_i - (-1)^{v*_i})²

where v*_ℓ, v*_{ℓ+1}, ..., v*_{n-1} are the labels of the only path P_m from node m to a goal node. Note that if node m is a goal node, then h(m) = 0. Since we can construct the only path from any node m at level ℓ, with ℓ ≥ k, to a goal node using G*, the estimate h(m) computed here is always exact. Furthermore, h(m) = h*(m), since there is only one path from node m to a goal node and h(m) is the cost of this path.
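The case 1) definition of h can be evaluated by brute force for tiny codes; the O(n) algorithm is the one cited below, and this exhaustive sketch only illustrates the role of T(m) and HW (the (6, 3) weight set is that of the illustrative generator {100110, 010011, 001101}):

```python
import itertools

def h_weight(phis, labels, seed, HW):
    # h(m) for a node at level l < k: minimize the remaining cost over all
    # n-tuples v that extend the path labels and satisfy d_H(v, seed) in HW.
    # Exhaustive search, usable only for tiny n.
    n, l = len(phis), len(labels)
    best = None
    for tail in itertools.product((0, 1), repeat=n - l):
        v = tuple(labels) + tail
        if sum(a != b for a, b in zip(v, seed)) in HW:
            cost = sum((phis[i] - (-1) ** v[i]) ** 2 for i in range(l, n))
            best = cost if best is None else min(best, cost)
    return best

HW = {0, 3, 4}                 # weight set of the illustrative (6, 3) code
seed = (0, 0, 0, 0, 0, 0)      # the all-zero codeword as seed
h_start = h_weight([1.0] * 6, (), seed, HW)
h_one = h_weight([1.0] * 6, (1,), seed, HW)
```

With all φ*_i = 1, the start node gets h = 0 (the all-zero tuple is in T(m)), while a path starting with label 1 is forced to pay for at least two more disagreeing positions to reach a distance in HW.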

An algorithm to calculate h(m) for a node m at level ℓ, with 0 ≤ ℓ ≤ k - 1, whose time complexity is O(n), is presented in [12] and [13]. In [13] it is shown that our decoding algorithm is of a depth-first search type that will search the code tree only up to level k. The labels of the combined paths from the start node to a node m at level k, and from node m to a goal node, correspond to a codeword. Therefore, the cost of this path, f(m), can be used as an upper bound (UB) on the cost of an optimal path, and we can use this UB to reduce the size of list OPEN. Furthermore, the algorithm will still find an optimal path even if, in the computation of function h, the algorithm considers all the Hamming weights of any superset of HW. More details about this decoding algorithm can be found in [12] and [13], where we also described other speedup techniques such as the stopping criterion and changing the seed during the decoding procedure. It is shown by the simulation results in [12] and [13] that these speedup techniques reduce the number of nodes visited by our decoding algorithm. Therefore, it is worthwhile to briefly describe the speedup techniques here.

The search procedure can be stopped at any time when we know that a generated codeword c*_ℓ = (c*_{ℓ0}, c*_{ℓ1}, ..., c*_{ℓ(n-1)}) satisfies inequality (1). The following criterion can be used to indicate this fact.

Criterion: Let m_start be the start node. If

    h(m_start) = ∑_{j=0}^{n-1} (φ*_j - (-1)^{c*_{ℓj}})²

where h(m_start) is calculated with respect to seed c*_ℓ, then c*_ℓ satisfies inequality (1).

Hence, during the decoding procedure, if the algorithm generates a new codeword and the cost of the path whose labels correspond to this codeword is the lowest found so far by the algorithm, then we may check whether the new codeword satisfies the criterion or not. If so, this codeword satisfies inequality (1) and the decoding procedure stops. Another speedup technique is not fixing the seed c* during the decoding of φ*.
When seed c* is allowed to change, we have an adaptive decoding procedure. In order to avoid increasing computational complexity when the seed is changed at some stage of the decoding procedure, we may not want to recalculate the values of function h with respect to this new seed for every node on list OPEN. Under these circumstances, nodes on list OPEN may have values of function h calculated with respect to different seeds.

Since all the theorems and simulation results are obtained when we assume that code C is transmitted over a memoryless, additive white Gaussian noise (AWGN) channel, we describe the channel here. We assume that antipodal signaling is used in the transmission, so that the jth components of the transmitted codeword c and received vector r are

    c_j = (-1)^{c_j} √E and r_j = (-1)^{c_j} √E + e_j

respectively, where E is the signal energy per channel bit and e_j is a noise sample of a Gaussian process with single-sided noise power per hertz N_0. The variance of e_j is N_0/2 and the signal-to-noise ratio (SNR) for the channel is γ = E/N_0. In order to account for the redundancy in codes of different rates, we use the SNR per transmitted information bit γ_b = E_b/N_0 = γn/k = γ/R, where R = k/n is the code rate.

III. ANALYSIS OF THE COMPUTATIONAL EFFORT OF THE ALGORITHM

Our decoding algorithm can be considered a branch-and-bound algorithm. In general, it is difficult to know how well a branch-and-bound algorithm will perform on a given problem [6]; however, we can derive an upper bound on the average number of nodes visited by our decoding algorithm, which shows that this decoding algorithm is very efficient for most practical communication systems, where the probability of error is less than 10^-3. One important measure of the computational effort of an algorithm is its time complexity [1]. If the complexity is taken as the average complexity over all inputs of fixed size, then the complexity is called the expected (average) complexity. In a decoding problem, the inputs are received vectors.
The time complexity of our decoding algorithm is the product of the number of nodes visited and the time complexity of calculating the function f(m) [11]. Since the time complexity of calculating the function f(m) in our decoding algorithm is O(n), the average complexity of our decoding algorithm is determined by the average number of nodes visited [12], [13].

In order to derive an upper bound on the average number of nodes visited by our decoding algorithm, we will define another heuristic function, h_s, that satisfies the condition h_s(m) ≤ h(m) for every node of the code tree, where h is the function defined in the preceding section. Thus, by Theorem 2, the decoding algorithm using the function h will never open more nodes than the decoding algorithm using the function h_s.

We now define the function h_s and the function f_s. Let m be a node at level ℓ, with 0 ≤ ℓ ≤ k - 1, and let v_0, v_1, ..., v_{ℓ-1} be the labels of the path P_m from the start node to node m. Define h_s and f_s as

    h_s(m) = ∑_{i=ℓ}^{n-1} (|φ_i| - 1)²

and

    f_s(m) = g(m) + h_s(m), where g(m) = ∑_{i=0}^{ℓ-1} (φ_i - (-1)^{v_i})².

For a node at a level greater than k - 1, the function h_s will be defined as in the previous section. It is easy to see that h_s(m) ≤ h(m) for every node of the code tree.

By the above definition, when the decoding algorithm is calculating f_s(m) and h_s(m), the φ_i, with 0 ≤ i ≤ n - 1, and the v_i, with 0 ≤ i ≤ ℓ - 1, must be known. Before the decoding procedure starts, the φ_i can be obtained by (2) and the received vector. Hence, the φ_i can be stored and used when the algorithm calculates f_s(m) and h_s(m) for any node m. When the decoding algorithm visits node m, the path from the start node to node m in the code tree is known, since all nodes on list OPEN keep the labels of the paths from the start node to them. Therefore, the labels on this path, the v_i, are known. Since the calculation of f_s(m) and h_s(m) involves at most n terms of known values, for which the algorithm does not need to calculate or search, the time complexities of calculating h_s(m) and f_s(m) are O(n).
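The definitions of g, h_s, and f_s transcribe directly (an illustrative sketch; the symbols follow the definitions above):

```python
def g_cost(phis, labels):
    # g(m): accumulated branch costs from the start node to node m.
    return sum((phis[i] - (-1) ** b) ** 2 for i, b in enumerate(labels))

def h_s(phis, level):
    # h_s(m) = sum_{i=level}^{n-1} (|phi_i| - 1)^2.  Each future branch
    # costs at least (|phi_i| - 1)^2 whichever label it carries, so h_s
    # never overestimates the cheapest completion.
    return sum((abs(p) - 1.0) ** 2 for p in phis[level:])

def f_s(phis, labels):
    return g_cost(phis, labels) + h_s(phis, len(labels))

value = f_s([2.0, -0.5], (0,))   # g = (2-1)^2 = 1, h_s = (0.5-1)^2 = 0.25
```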
However, for any node m at level ℓ, with 1 ≤ ℓ ≤ k - 1, if the value of function f_s at its immediate predecessor m' is known, we may obtain f_s(m) from f_s(m') by the simple computation given below. Assume that node m is not the start node m_start and that node m is at level ℓ, with 1 ≤ ℓ ≤ k - 1. Let node m' be the immediate predecessor of node m. Furthermore, let y = (y_0, y_1, ..., y_{n-1}) be the hard decision of φ. That is,

    y_i = 1, if φ_i < 0; y_i = 0, otherwise.

Let v_{ℓ-1} be the label of the branch between node m' and node m, and let ⊕ denote modulo-2 addition. Since

    h_s(m) = h_s(m') - (|φ_{ℓ-1}| - 1)²

then

    f_s(m) = g(m) + h_s(m) = g(m') + (φ_{ℓ-1} - (-1)^{v_{ℓ-1}})² + h_s(m') - (|φ_{ℓ-1}| - 1)².

Consequently,

    f_s(m) = f_s(m') + (y_{ℓ-1} ⊕ v_{ℓ-1}) · 4|φ_{ℓ-1}|.    (4)

Thus when f_s(m') has been calculated, f_s(m) can be obtained by (4). Since the start node m_start has no predecessor, we cannot use (4) to obtain the value f_s(m_start).

In order to obtain an upper bound on the computational effort of our decoding algorithm, we first derive an upper bound on the computational effort of a simplified version of it, which we denote by SDA. In this version 1) we do not order the positions of φ, and 2) we use function h_s as the heuristic function. For a given received vector, by Theorem 3, if node m is selected for expansion, then f(m) ≤ f*(m_start), where f*(m_start) is the cost of the optimal path. Since we do not order the positions of φ, the components of φ are independent random variables. Furthermore, since the cost of the path whose labels correspond to the transmitted codeword is greater than or equal to f*(m_start), by the central limit theorem we can calculate an upper bound on the probability of a node being expanded. Consequently, we may calculate an upper bound on the average number of nodes visited by the SDA.

We now state the main result on the computational effort of the SDA when code C is transmitted over the AWGN channel given in Section II. Since the result is derived by using the central limit theorem, it only holds when n is large.

Theorem 4: Let N_s be the average number of nodes visited by the SDA and let Q(·) be the tail probability of the standard normal distribution. Then, for large n, N_s ≤ N̄, where

    N̄ = 2k + ∑_{ℓ=0}^{k-1} ∑_{d=1}^{ℓ} C(ℓ, d) N̄_{ℓ,d}    (5)

    N̄_{ℓ,d} = Q( [√(2Rγ_b) d - (n - ℓ) μ_w] / √(d + (n - ℓ) σ_w²) )

with

    μ_w = (1/√(2π)) e^{-Rγ_b} - √(2Rγ_b) Q(√(2Rγ_b))

    σ_w² = (2Rγ_b + 1) Q(√(2Rγ_b)) - √(Rγ_b/π) e^{-Rγ_b} - μ_w².

The proof of Theorem 4 is given in Appendix A. The first term on the right-hand side of (5) is due to the assumption that the path whose labels correspond to the transmitted codeword will be expanded.
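The increment in (4) can be checked numerically: adding a branch with label v changes f_s by (φ - (-1)^v)² - (|φ| - 1)², which is 0 when v agrees with the hard decision y of φ and 4|φ| when it disagrees:

```python
def branch_cost(phi, bit):
    return (phi - (-1) ** bit) ** 2

def fs_increment(phi, bit):
    # Right-hand side of (4): 0 when the branch label matches the hard
    # decision y of phi (y = 1 iff phi < 0), and 4|phi| when it does not.
    y = 1 if phi < 0 else 0
    return (y ^ bit) * 4.0 * abs(phi)

# Identity behind (4): the change in f_s when one branch is added is
# (phi - (-1)^v)^2 - (|phi| - 1)^2, which equals (y XOR v) * 4|phi|.
checked = []
for phi in (-2.5, -0.3, 0.7, 1.9):
    for bit in (0, 1):
        delta = branch_cost(phi, bit) - (abs(phi) - 1.0) ** 2
        checked.append(abs(delta - fs_increment(phi, bit)) < 1e-9)
```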
In the second term, the Q(·) expression is an upper bound on the probability of being expanded for a node at level ℓ such that the sequence of labels of the path from the start node to this node has Hamming distance d to the transmitted codeword.

In the decoding algorithm proposed in [13] we ordered the positions of φ to obtain φ*, which is assumed to be the received vector. Next, we prove that if the k most reliable positions of φ are linearly independent, then the reordering will not increase the computational effort of the SDA.

Theorem 5: If the k most reliable positions of φ are linearly independent, then N_s(φ*) ≤ N_s(φ), where N_s(φ*) and N_s(φ) are the numbers of nodes visited by the SDA when φ* and φ are decoded, respectively.

The proof of Theorem 5 can be found in Appendix B.

Let N(φ*) be the number of nodes visited by the decoding algorithm proposed in [13] when it decodes φ*, and let N̄_a be the average number of nodes visited by this decoding algorithm. By Theorems 2, 4, and 5, as well as the Markov inequality [1], we have the following result.

Theorem 6: If the k most reliable positions of φ are linearly independent, then

    N(φ*) ≤ N_s(φ*) ≤ N_s(φ), N̄_a ≤ N̄

and

    Pr(N(φ*) ≥ L) ≤ N̄ / L

where L is any positive real number.

We remark here that it is not always true that the k most reliable positions of φ are linearly independent. In this case, we cannot guarantee that N(φ*) ≤ N_s(φ). However, in our simulations we have never encountered a case where N(φ*) > N_s(φ). Therefore, we can take N̄ to be a good estimator of an upper bound on N̄_a, the average number of nodes visited by our decoding algorithm in [13].

When the GDA (SDA) searches for an optimal path in a code tree of an (n, k) linear block code, the minimum number of nodes visited by the GDA (SDA) is 2k, which is the number of nodes visited while the GDA (SDA) searches along the optimal path only. Therefore, the average number of nodes visited by the SDA and N̄ are greater than or equal to 2k.
However, the average number of nodes visited by the decoding algorithm proposed in [13] may be less than 2k due to the effect of the stopping criterion. Since the computational complexity of the stopping criterion is of the same order as the computational complexity of the SDA that searches along the optimal path only [13], it is reasonable to compare N̄ with the average number of nodes visited by the decoding algorithm in [13] without using the stopping criterion when N̄ is close to 2k.

The values of N̄ for the (48, 24) code for γ_b equal to 2, 3, 4, 5, 6, 7, 8, 9, and 10 dB are given in Fig. 1. In this figure are also given the simulation results of the average number of nodes visited by the SDA and by the decoding algorithm proposed in [13], with and without using the stopping criterion. These averages were obtained by simulation; the samples were generated randomly by a codeword generator and then transmitted over the AWGN channel described in Section II.

For the AWGN channel, by the result given in Appendix A, we can substitute r (r*) for φ (φ*) in the decoding algorithm. Since

    Pr(r_i | 0) = (1/√(πN_0)) exp(-(r_i - √E)² / N_0)

for the AWGN channel, the bit error probability of uncoded data (P_e) is the probability that r_i < 0 [7]. It can be shown that this probability P_e is given by

    P_e = Q(√(2Rγ_b)).    (6)
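Equation (6) can be evaluated with the complementary error function; for uncoded data (R = 1), P_e = 10^-3 corresponds to γ_b of about 6.8 dB, the operating point referred to below:

```python
import math

def Q(x):
    # Standard normal tail probability.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bit_error_probability(gamma_b, R=1.0):
    # (6): P_e = Q(sqrt(2 * R * gamma_b)); R = 1 for uncoded data.
    return Q(math.sqrt(2.0 * R * gamma_b))

# Uncoded BPSK reaches P_e = 1e-3 near gamma_b = 6.8 dB.
pe = bit_error_probability(10 ** (6.8 / 10.0))
```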

Fig. 1. Average number of nodes visited for the (48, 24) code.

When the probability of error of uncoded data is less than 10^-3 (P_e ≤ 10^-3) and a rate-1/2 code is transmitted over the AWGN channel, by (6), γ_b ≥ 6.8 dB. By the results given in Fig. 1, the values of N̄ for the (48, 24) code are very close to the average numbers of nodes visited by the SDA that were obtained by computer simulations. Furthermore, the values of N̄ are very close to the average numbers of nodes visited by the decoding algorithm proposed in [13] without using the stopping criterion when the SNR is greater than 6 dB. However, because of the simplifying assumptions we had to make, the values of N̄ are not tight to the average numbers of nodes visited by the decoding algorithm proposed in [13], with and without using the stopping criterion, when the SNR is less than 6 dB.

In Figs. 2 and 3 we give the values of N̄ for the (104, 52) code and the (128, 64) code for γ_b from 2 to 10 dB, respectively. We also give the average numbers of nodes visited by the decoding algorithm proposed in [13], with and without using the stopping criterion, for γ_b from 5 to 10 dB. By the results presented in Figs. 2 and 3, the values of N̄ are very close to the average numbers of nodes visited by the decoding algorithm proposed in [13] without using the stopping criterion for SNR greater than 6.8 dB. Furthermore, the values of N̄ are close to 2k for SNR greater than 6.8 dB. Thus we may conclude that the decoding algorithms proposed in [13] are efficient for codes of moderate lengths and for most practical communication systems, where the probability of error is less than 10^-3, when we assume a rate-1/2 code is transmitted over the AWGN channel. Even though this upper bound is not tight for SNRs of less than 6.8 dB, we still obtain complexity gains of several orders of magnitude at SNR = 5 dB for the (104, 52) code and the (128, 64) code compared with Wolf's algorithm.
It should be mentioned here that we could not get simulation results when we applied the decoding algorithm given in [13] to the (104, 52) and the (128, 64) codes for γ_b under 5 dB. Due to the limitations of the memory of the computer, we encountered a received vector, generated by our simulation program, that could not be decoded before the computer crashed. However, to the authors' best knowledge, this algorithm is still the only feasible optimal decoding algorithm for these two codes, even for γ_b greater than 5 dB.

IV. SUBOPTIMAL DECODING ALGORITHM

In the previous section we showed that the GDA is quite efficient for codes of moderate lengths and for most practical communication systems, where the probability of error is less than 10^-3; however, by the results given in Figs. 2 and 3, for the (104, 52) and (128, 64) codes, the number of nodes on list OPEN in the decoding algorithm presented in [13] (GDA) is still too large for the algorithm to have practical applications at low SNRs. The results of our simulations have shown that the number of nodes that need to be stored on list OPEN before an optimal path is found is considerably smaller than the total number of nodes stored before the algorithm stops. Thus we may limit the search with small degradations in the performance of the algorithm. In this section we present a suboptimal soft-decision decoding algorithm in which we limit the size of list OPEN by using the following two criteria.

1) If a node m needs to be stored on list OPEN when the size of list OPEN has reached a given upper bound, then we discard the node with the larger f value between node m and the node on list OPEN with the maximum value of function f.

2) If the probability that an optimal path goes through a node is smaller than a given parameter, then we do not store this node. That is, we make sure that every node that stays on list OPEN has a high probability that an optimal path goes through it.

Next, we describe in detail how to use these criteria.
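Criterion 1) above can be sketched as a bounded insertion policy (only the discard rule is shown; a full implementation also needs minimum-f extraction for the search itself, e.g. via a double-ended priority queue):

```python
import heapq

class BoundedOpen:
    # Criterion 1): at most `mb` nodes are kept; when full, the larger-f of
    # the incoming node and the current worst resident node is discarded.
    # A max-heap (negated f) tracks the worst-f resident node.
    def __init__(self, mb):
        self.mb = mb
        self._heap = []                      # entries: (-f, node)

    def insert(self, f, node):
        if len(self._heap) < self.mb:
            heapq.heappush(self._heap, (-f, node))
        elif -self._heap[0][0] > f:          # resident worst is worse: swap
            heapq.heapreplace(self._heap, (-f, node))
        # otherwise the incoming node has the larger f and is dropped

    def f_values(self):
        return sorted(-nf for nf, _ in self._heap)

ol = BoundedOpen(mb=2)
for f, node in [(5.0, "a"), (3.0, "b"), (4.0, "c")]:
    ol.insert(f, node)
```

After the three insertions only the two smallest-f nodes survive, which is exactly the behavior criterion 1) asks for.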
Memory requirements are usually a crucial factor in the practical implementation of any decoding algorithm, especially in a VLSI implementation. Since, in the worst case, for any (n, k) code the maximum size of list OPEN is 2^{k-1}, the GDA is impractical even for the (48, 24) code at low SNRs. However, simulation results indicate that the required size of list OPEN may be much smaller than 2^{k-1} if a small degradation in the performance of the GDA is tolerated. Thus in the first criterion we limit the size of list OPEN by giving an upper bound on the maximum number of nodes that can be stored on list OPEN.

Fig. 2. Average number of nodes visited for the (104, 52) code.

Fig. 3. Average number of nodes visited for the (128, 64) code.

While the GDA searches for an optimal path in a code tree, it calculates f(m) for every node m visited. If f(m) is large, then node m has a low probability of being expanded before the optimal path is found. In other words, when f(m) is large, the probability that the optimal path goes through node m is low, and we can discard node m before the optimal path is found without degrading the performance of the GDA much. Therefore, to use the second criterion we need to calculate the probability that an optimal path goes through a node. We now demonstrate how to calculate this probability for an AWGN channel.

For any received vector φ*, if an optimal decoding algorithm decodes it to a nontransmitted codeword, then it is almost impossible for a suboptimal decoding algorithm to decode it to the transmitted codeword. Thus, when an optimal decoding algorithm decodes a received vector to a nontransmitted codeword, we do not care which codeword a suboptimal decoding algorithm decodes it to. Therefore, it is reasonable to consider only those received vectors that will be decoded to transmitted codewords by an optimal decoding algorithm.

That is, when we derive the probability that an optimal path goes through a node, we will assume that no decoding error will occur if we employ an optimal decoding algorithm. Under this assumption we have the following theorem.

Theorem 7: Let a codeword of an (n, k) code C be transmitted over an AWGN channel. Furthermore, assume the branch cost assigned to the branch from a node at level t to a node at level t + 1 in the code tree is replaced with the value

    ( N_0 φ*_t / (4√E) - √E (-1)^{c*_t} )²

where c*_t is the label of the branch. When no decoding error occurs, the probability distribution of the cost of an optimal path, h*(m_start), is approximately a normal distribution with mean μ and variance σ², where

    μ = n N_0 / 2 and σ² = n N_0² / 2.

The proof of Theorem 7 is given in Appendix C, and the theorem holds only for large n.

Let node m be a node in the code tree of the transmitted (n, k) code C, and let UB be the lowest upper bound on the cost of an optimal path found so far by the algorithm. By Theorem 1, for any node m of the code tree, h(m) ≤ h*(m). Thus, if an optimal path goes through node m, then h(m) ≤ h*(m) ≤ h*(m_start) ≤ UB. Thus the probability that an optimal path goes through node m is less than or equal to Pr(h(m) ≤ h*(m_start) ≤ UB). This leads us to the following theorem.

Theorem 8: Let T be the probability that an optimal path goes through node m. Furthermore, let UB be an upper bound on the cost of an optimal path. Then T ≤ T_UB, where

    T_UB = (1 / (√(2π) σ)) ∫_{h(m)}^{UB} exp( -(t - μ)² / (2σ²) ) dt

with μ = n N_0 / 2 and σ² = n N_0² / 2.

Thus, when a node is visited, the algorithm calculates T_UB for this node. If this value is less than a given threshold, then we discard this node. We remark here that to use the second criterion we need to replace the branch cost from a node at level t to a node at level t + 1 in the code tree with the value

    ( N_0 φ*_t / (4√E) - √E (-1)^{c*_t} )².

Now we describe the outline of our decoding algorithm. In our suboptimal decoding algorithm we fix the maximum number of nodes, MB, allowed on list OPEN.
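The bound T_UB of Theorem 8 is a normal-distribution integral and can be computed with the error function (μ and σ² as in Theorem 7; the numbers in the demo are arbitrary):

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def t_ub(h_m, ub, n, N0):
    # Theorem 8: Pr(h(m) <= h*(m_start) <= UB) under the Theorem 7
    # approximation h*(m_start) ~ Normal(n*N0/2, n*N0^2/2).
    mu = n * N0 / 2.0
    sigma = math.sqrt(n * N0 * N0 / 2.0)
    return max(0.0, normal_cdf(ub, mu, sigma) - normal_cdf(h_m, mu, sigma))

# A node whose h(m) already exceeds UB gets probability 0 and is discarded.
low_h = t_ub(10.0, 80.0, n=100, N0=1.0)
high_h = t_ub(70.0, 80.0, n=100, N0=1.0)
```

As expected, the larger h(m) is relative to UB, the smaller the probability that an optimal path passes through the node.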
As in an optimal decoding algorithm, list OPEN is always kept ordered. When a node m is visited, the algorithm calculates T_UB for this node. If T_UB is less than a given threshold, then we discard this node. Otherwise, we need to insert this node into list OPEN. If the number of nodes on list OPEN is equal to MB, then the algorithm discards the node with the larger f value between node m and the node with the largest f value on list OPEN, and inserts the remaining node into list OPEN. We remark here that all the speedup techniques described in Section II, such as the stopping criterion and changing the seed during the decoding procedure, can be applied to the suboptimal decoding algorithm.

V. SIMULATION RESULTS FOR THE AWGN CHANNEL

In order to verify the performance of our suboptimal decoding algorithm, we present simulation results for the (48, 24) and the (104, 52) binary extended quadratic residue codes, and the (128, 64) binary extended BCH code, when these codes are transmitted over the AWGN channel described in Section II. For the (48, 24) code, HW = {0, 12, 16, 20, 24, 28, 32, 36, 48}. We do not know HW for the (104, 52) and the (128, 64) codes, so we use a superset for them. For the (104, 52) code we know that d_min = 20 and that the Hamming weight of any codeword is divisible by 4 [15]. Thus for this code the superset used is {x | (x is divisible by 4 and 20 ≤ x ≤ 84) or (x = 0) or (x = 104)}; for the (128, 64) code, the superset used is {x | (x is even and 22 ≤ x ≤ 106) or (x = 0) or (x = 128)}, since this code has d_min = 22.

We have implemented a suboptimal version of the adaptive decoding algorithm presented in [13]. Let y = (y_0, y_1, ..., y_{n-1}) be the hard decision of r*. In the optimal version of the adaptive decoding algorithm presented in [13], the initial seed c* is constructed by c* = u G*, where u_i = y_i for 0 ≤ i ≤ k - 1. Although the received vector r was reordered to get r* according to the reliability of its components, some errors may occur in the first k components of y when the channel is noisy.
Thus the initial seed c^* is constructed by considering 16 codewords as follows. Let

S = \{ u \mid u = (u_0, u_1, ..., u_{k-1}), \text{ where } u_i = y_i \text{ for } 0 \le i \le k - 5 \text{ and } u_i = 0 \text{ or } 1 \text{ for } k - 4 \le i \le k - 1 \}.

For every element u in S, we get a codeword c^* = u \cdot G^*. Now we let c^* be the candidate for which the value of h(m_{start}), calculated with respect to it, is the largest among all 16 codewords. We remark here that the selection of these codewords is based on a simulation observation that y_{k-4}, y_{k-3}, y_{k-2}, and y_{k-1} are the four components among y_0, y_1, ..., y_{k-1} with the highest probability of containing errors. The rule for updating the seed is as follows [12], [13]. Let c^*_s be the seed constructed so far by the decoding algorithm. Whenever a codeword c^*_n is generated during the decoding procedure, the algorithm calculates h(m_{start}) with respect to c^*_n. If this value is greater than the h(m_{start}) calculated with respect to c^*_s, then c^*_n will be the new seed. The simulation results for the (48, 24) code for \gamma_b equal to 1, 2, 3, and 4 dB are given in Fig. 4 and in Table I for three MB's; the threshold is equal to 0. Bit error probability of the uncoded data (P_e) is also given. From the results given in Fig. 4, for the (48, 24) code the performance of the suboptimal decoding algorithm with the largest MB tried is the same as that of the optimal decoding algorithm, whose list OPEN is in effect unbounded. With smaller MB's, the performance of the suboptimal decoding algorithm is slightly worse than that of the optimal decoding algorithm. From the results given in Table I, the average number of nodes visited is smaller when MB is smaller. Furthermore, when we do not limit the size of list OPEN in the optimal decoding, the maximum number of nodes in list OPEN grows to 3961, which is far smaller than the largest possible size of list OPEN but still very large compared with the MB's used here. Therefore, for the (48, 24) code, limiting MB seems a feasible solution for practical application when the SNR is low.
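The 16-candidate seed construction can be sketched as follows. The `score` callback stands in for the paper's h(m_{start}) computed with respect to each candidate codeword, and the generator matrix used in any concrete run is the caller's; both are our own illustrative assumptions:

```python
from itertools import product

def encode(u, G):
    """c = u * G over GF(2); G is a list of k rows of length n."""
    n = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(u))) % 2 for j in range(n)]

def initial_seed(y, G, score):
    """Fix u_0..u_{k-5} to the hard decisions y_0..y_{k-5}, try all 16
    settings of the last four information bits, and keep the candidate
    codeword whose score is largest."""
    k = len(G)
    best = None
    for tail in product((0, 1), repeat=4):
        u = list(y[:k - 4]) + list(tail)
        c = encode(u, G)
        if best is None or score(c) > score(best):
            best = c
    return best
```

Because only the four least-reliable information positions are enumerated, the cost is a fixed 16 encodings regardless of k.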
Since the average number of nodes visited by the suboptimal decoding algorithm with the smallest MB tried is small (754), it is not necessary to use the second criterion given in Section IV. The simulation results for the (104, 52) code for \gamma_b equal to 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, and 3.5 dB are given in Fig. 5 and in Table II for three threshold values and a fixed MB. In Fig. 5 we also give a lower bound on the bit error probability of the maximum-likelihood decoding algorithm. This lower bound is obtained as follows [8]. For every sample, when the suboptimal

decoding algorithm terminates, we have a codeword that is obtained from the algorithm. If this codeword is closer, with respect to Euclidean distance, to the received vector than the transmitted codeword is, then any optimal decoding algorithm will also decode the received vector to a nontransmitted codeword. Thus we assume that the optimal decoding algorithm will decode to the codeword obtained from the suboptimal decoding algorithm and report a decoding error whenever this occurs. Bit error probability of the uncoded data is also given in Fig. 5.

Fig. 4. Performance of the suboptimal decoding algorithm for the (48, 24) code for several MB's.

TABLE I: THE AVERAGE NUMBER OF NODES VISITED AND THE MAXIMUM SIZE OF LIST OPEN REQUIRED DURING THE DECODING OF THE (48, 24) CODE

From Fig. 5, for the (104, 52) code the performance of the suboptimal decoding algorithm with the smallest threshold is within 0.25 dB of the performance of an optimal decoding algorithm; the performance with the intermediate threshold is within 0.65 dB of the performance of an optimal decoding algorithm; and the performance with the largest threshold is within 1.5 dB of the performance of an optimal decoding algorithm. Thus, for the samples tried, limiting the size of list OPEN introduced only a small degradation in the performance of the algorithm for the (104, 52) code. However, the average number of nodes visited for the samples tried is several orders of magnitude smaller than the upper bound given in Fig. 2. The simulation results for the (128, 64) code for \gamma_b equal to 1.0, 1.25, 1.5, 1.75, and 2.0 dB are given in Fig. 6 and Table III for three threshold values and a fixed MB. In Fig. 6 we also give a lower bound on the bit error probability of the maximum-likelihood decoding algorithm and the bit error probability of the uncoded data. From Fig.
6, for the (128, 64) code the performance of the suboptimal decoding algorithm with the smallest threshold is within 0.5 dB of the performance of an optimal decoding algorithm; the performance with the intermediate threshold is within 0.6 dB of the performance of an optimal decoding algorithm; and the performance with the largest threshold is within 0.75 dB of the performance of an optimal decoding algorithm. Thus, for the samples tried, limiting the size of list OPEN introduced only a small degradation in the performance of the algorithm for the (128, 64) code. However, the average number of nodes visited for the samples tried is several orders of magnitude smaller than the upper bound given in Fig. 3.

VI. CONCLUSIONS

In this correspondence we present an upper bound on the average number of nodes visited by the maximum-likelihood soft-decision decoding algorithm given in [13] (GDA). Since this upper bound is derived by applying the central limit theorem to a simplified version of the GDA, the results hold only for large code lengths. However, from the results presented in Section III, this upper bound shows that the GDA is efficient for codes of moderate lengths when the probability of error of the channel is less than 10^{-3}. For low SNR's,

the GDA becomes impractical for these codes. In order to solve this problem, we also give a suboptimal version of the GDA that reduces the decoding complexity at the cost of a loss in performance.

Fig. 5. Performance of the suboptimal decoding algorithm for the (104, 52) code.

Fig. 6. Performance of the suboptimal decoding algorithm for the (128, 64) code.

TABLE II: THE AVERAGE NUMBER OF NODES VISITED DURING THE DECODING OF THE (104, 52) CODE

The branch cost assigned to the branch from a node at level t to a node at level t + 1 in the code tree, presented in Section II, may be replaced with the value (-1)^{c_t^*} \phi_t^* to save computation power. However, the designed heuristic function cannot violate the requirement of inequality (3), which guarantees that the GDA will find an optimal path. For example, in the case that h(m) = 0 for any node m at level \ell, with 0 \le \ell \le k - 1, the branch cost cannot be changed

to the new value, since doing so violates the requirement of inequality (3). More discussion of the new branch costs can be found in Appendix D.

TABLE III: THE AVERAGE NUMBER OF NODES VISITED DURING THE DECODING OF THE (128, 64) CODE

It is interesting to verify the performance of the decoding algorithm given in [13] when all simulated samples contain at least one error in the hard decision of the received vector for high SNR's. We simulated a large number of samples for SNR equal to 7 and 8 dB and separated out the samples containing at least one error in the hard decision; at these SNR's only a small fraction of the simulated samples contained such errors. For both SNR's, the average number of nodes visited was essentially the same whether it was computed over all samples or over only the samples containing errors, and it remained very small. Therefore, for high SNR's, the decoding algorithm given in [13] is still very efficient when we apply it to those received vectors whose hard decisions contain at least one error.

APPENDIX A
PROOF OF THEOREM 4

Let an (n, k) code C be transmitted over an AWGN channel. It is easy to show that the MLD rule can be formulated as [7]: set c = c_\ell, where c_\ell \in C and

\sum_{j=0}^{n-1} \left( a\phi_j - b(-1)^{c_{\ell j}} \right)^2 \le \sum_{j=0}^{n-1} \left( a\phi_j - b(-1)^{c_{ij}} \right)^2    (7)

for all c_i \in C, where a and b are any positive nonzero real numbers. By the above MLD rule, the branch cost assigned to the branch from a node at level t to a node at level t + 1 in the code tree may be replaced with the value (a\phi_t - b(-1)^{c_t})^2, where c_t is the label of the branch. Furthermore, we may substitute (a\phi_i - b(-1)^{c_i})^2 for the value (\phi_i - (-1)^{c_i})^2 in the definition of h. The proof that h satisfies inequality (3) is very similar to that given in [13], where we used the old branch costs. Hence, we omit the proof here.
From [14],

\Pr(r_i \mid 0) = \frac{1}{\sqrt{\pi N_0}} \exp\!\left( -\frac{(r_i - \sqrt{E})^2}{N_0} \right)
\qquad \text{and} \qquad
\Pr(r_i \mid 1) = \frac{1}{\sqrt{\pi N_0}} \exp\!\left( -\frac{(r_i + \sqrt{E})^2}{N_0} \right).

Thus

\phi_i = \ln \frac{\Pr(r_i \mid 0)}{\Pr(r_i \mid 1)} = \frac{4\sqrt{E}}{N_0}\, r_i,

and we can substitute r for \phi in our decoding algorithm when C is transmitted over the AWGN channel [3], [7] if we set a = N_0 / (4\sqrt{E}) and b = 1. Furthermore, without loss of generality we can assume that the all-zero codeword is transmitted over the AWGN channel. Let P be the path from the start node m_{start} to a goal node whose labels are all zero, and define the cost of the path P as

g(P) = \sum_{i=0}^{n-1} (r_i - 1)^2.

From the definition of f_s^*(m_{start}), which is the cost of an optimal path, we have g(P) \ge f_s^*(m_{start}). Now let node m be a node at level \ell in the code tree and let the labels of path P_m, the path from node m_{start} to node m, be v_0, v_1, ..., v_{\ell-1}. Let

S = \{ i \mid v_i = 1,\; 0 \le i \le \ell - 1 \} \qquad \text{and} \qquad |S| = d.

From the definition of function f_s,

f_s(m) = g(m) + h_s(m) = \sum_{i=0}^{\ell-1} \left( r_i - (-1)^{v_i} \right)^2 + \sum_{i=\ell}^{n-1} \left( |r_i| - 1 \right)^2.

Now we want to calculate the probability that node m is expanded by the algorithm. By Theorem 3, if the GDA expands the node m, then f_s(m) \le f_s^*(m_{start}). Thus this probability will be less than or equal to \Pr(f_s(m) \le f_s^*(m_{start})). Since g(P) \ge f_s^*(m_{start}), then \Pr(f_s(m) \le f_s^*(m_{start})) \le \Pr(f_s(m) \le g(P)). Furthermore, f_s(m) \le g(P)

iff \sum_{i=0}^{\ell-1} (r_i - (-1)^{v_i})^2 + \sum_{i=\ell}^{n-1} (|r_i| - 1)^2 \le \sum_{i=0}^{n-1} (r_i - 1)^2
iff \sum_{i \in S} 4 r_i + \sum_{i=\ell}^{n-1} 2 (r_i - |r_i|) \le 0
iff \sum_{i \in S} 2 r_i + \sum_{i=\ell}^{n-1} (r_i - |r_i|) \le 0.

Now let us define two new random variables, Z_i and Z'_i, as

Z_i = 2 r_i \qquad \text{and} \qquad Z'_i = r_i - |r_i|.

Since 0 is transmitted,

\Pr(r_i) = \frac{1}{\sqrt{\pi N_0}} \exp\!\left( -\frac{(r_i - \sqrt{E})^2}{N_0} \right).

Thus E(r_i) = \sqrt{E} and Var(r_i) = N_0 / 2. Then

E(Z_i) = 2\sqrt{E} \qquad \text{and} \qquad Var(Z_i) = 4\, Var(r_i) = 2 N_0.

Now let us calculate E(Z'_i) and Var(Z'_i). We first note that Z'_i = 2 r_i if r_i < 0 and Z'_i = 0 if r_i \ge 0, where r_i is normally distributed with mean \sqrt{E} and variance N_0 / 2. Thus

E(Z'_i) = \frac{2}{\sqrt{\pi N_0}} \int_{-\infty}^{0} t \exp\!\left( -\frac{(t - \sqrt{E})^2}{N_0} \right) dt.
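The chain of equivalences above can be checked numerically: the difference f_s(m) - g(P) computed straight from the cost definitions agrees with the reduced sum over the flipped positions and the tail. A small sketch under the cost definitions as reconstructed here (function name is ours):

```python
def f_minus_g(r, v):
    """Return f_s(m) - g(P) computed two ways: directly from the cost
    definitions, and in the reduced form
    sum_{i in S} 4*r_i + sum_{i >= l} 2*(r_i - |r_i|)."""
    l = len(v)  # level of node m; v holds the branch labels of P_m
    f = sum((r[i] - (-1) ** v[i]) ** 2 for i in range(l)) \
        + sum((abs(r[i]) - 1.0) ** 2 for i in range(l, len(r)))
    g = sum((ri - 1.0) ** 2 for ri in r)
    reduced = sum(4.0 * r[i] for i in range(l) if v[i] == 1) \
        + sum(2.0 * (r[i] - abs(r[i])) for i in range(l, len(r)))
    return f - g, reduced
```

The two return values agree for any received vector r and any prefix of branch labels v, which is exactly the algebra behind the expansion condition.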

Let x = t - \sqrt{E}; then, evaluating the integral for E(Z'_i) yields

E(Z'_i) = 2\sqrt{E}\, Q\!\left( \sqrt{2E/N_0} \right) - \sqrt{N_0/\pi}\; e^{-E/N_0},

where Q(\cdot) is the tail function of the standard normal distribution,

Q(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty} e^{-t^2/2}\, dt.

Similarly,

E(Z'^2_i) = (4E + 2N_0)\, Q\!\left( \sqrt{2E/N_0} \right) - 2\sqrt{E N_0 / \pi}\; e^{-E/N_0}

and

Var(Z'_i) = E(Z'^2_i) - E^2(Z'_i).

Now let us define a new random variable X as

X = \sum_{i \in S} Z_i + \sum_{i=\ell}^{n-1} Z'_i.

By Lindeberg's central limit theorem [9], the probability distribution of X is approximately a normal distribution with mean \mu and variance \sigma^2, where

\mu = d\, E(Z_i) + (n - \ell)\, E(Z'_i) = 2d\sqrt{E} + (n - \ell) \left( 2\sqrt{E}\, Q\!\left( \sqrt{2E/N_0} \right) - \sqrt{N_0/\pi}\; e^{-E/N_0} \right)

and

\sigma^2 = d\, Var(Z_i) + (n - \ell)\, Var(Z'_i) = 2 d N_0 + (n - \ell)\, Var(Z'_i).

Thus

\Pr(f_s(m) \le f_s^*(m_{start})) \le \Pr(X \le 0) = Q(\mu / \sigma).

Since f_s(m') \le g(P) for any node m' on path P, we can assume that node m' will be expanded. There are k nodes on this path that will be expanded. We now consider those nodes that are not on this path. It is easy to see that, for any node that is not on path P, the labels of the path from node m_{start} to it will contain at least one 1. Consider those nodes at level \ell whose paths contain d ones, where 1 \le d \le \ell and 0 \le \ell \le k - 1. From the above argument, the probability of each of these nodes being expanded is at most Q(\mu/\sigma), and the total number of these nodes is \binom{\ell}{d}. Since the first k positions of any codeword are information bits, the average number of nodes expanded by the algorithm is less than or equal to

k + \sum_{\ell=0}^{k-1} \sum_{d=1}^{\ell} \binom{\ell}{d}\, Q(\mu/\sigma).

Since, when a node is expanded by the algorithm, the algorithm will visit two nodes, the average number of nodes visited is less than or equal to

2 \left( k + \sum_{\ell=0}^{k-1} \sum_{d=1}^{\ell} \binom{\ell}{d}\, Q(\mu/\sigma) \right).

APPENDIX B
PROOF OF THEOREM 5

Let \phi = (\phi_0, \phi_1, ..., \phi_{n-1}) be the received vector and let \phi^* = (\phi^*_0, \phi^*_1, ..., \phi^*_{n-1}) be obtained by permuting the positions of \phi such that the first k positions of \phi^* are the most reliable linearly independent positions in \phi. Furthermore, let \phi^*_0 = \phi_{\sigma(0)}, \phi^*_1 = \phi_{\sigma(1)}, ..., \phi^*_{n-1} = \phi_{\sigma(n-1)}, where \sigma is a position permutation.
We now prove that N_s(\phi^*) \le N_s(\phi) by proving that, for every node m_1 in the search tree generated by the decoding algorithm when it decodes \phi, we can find a one-to-one corresponding node m_2 in the search tree generated by the decoding algorithm when it decodes \phi^* such that f_s(m_1) \le f_s(m_2). Let the labels of the path from the start node to node m_1 at level \ell be c_0, c_1, ..., c_{\ell-1}.

Let us define S_s(\ell), S_a(\ell), S_b(\ell), S_c(\ell), and S_d(\ell), which are subsets of \{0, 1, 2, ..., n-1\}, as follows:

S_s(\ell) = \{ x \mid x \le \ell - 1 \text{ and } \sigma(x) \le \ell - 1 \}
S_a(\ell) = \{ x \mid x \le \ell - 1 \} \setminus \{ \sigma(x) \mid x \in S_s(\ell) \}
S_b(\ell) = \{ x \mid x \le \ell - 1 \} \setminus \{ x \mid x \in S_s(\ell) \}
S_c(\ell) = \{ \sigma(x) \mid x \in S_b(\ell) \}
S_d(\ell) = \{ x \mid \sigma(x) \in S_a(\ell) \}.

It is clear that |S_a(\ell)| = |S_b(\ell)| = |S_c(\ell)| = |S_d(\ell)|. Now let us define the labels c^*_0, c^*_1, ..., c^*_{\ell-1} of the path from the start node to node m_2 as follows:

c^*_x = c_{\sigma(x)}, \quad \text{for } x \in S_s(\ell)
c^*_x = y^*_x \oplus y_{q(\sigma,\ell)(x)} \oplus c_{q(\sigma,\ell)(x)}, \quad \text{for } x \in S_b(\ell),

where y_i = 1 when \phi_i < 0 and y_i = 0 otherwise, y^*_i = 1 when \phi^*_i < 0 and y^*_i = 0 otherwise, and q(\sigma, \ell) is a bijection from S_b(\ell) to S_a(\ell). It is easy to see that, for any node m_1, node m_2 is in one-to-one correspondence with node m_1. We next prove that f_s(m_1) \le f_s(m_2). In the difference f_s(m_2) - f_s(m_1), the terms indexed by S_s(\ell) (and by their images under \sigma) cancel, and, using (|\phi| + 1)^2 - (|\phi| - 1)^2 = 4|\phi|, the remaining terms reduce to

f_s(m_2) - f_s(m_1) = 4 \left( \sum_{i \in S_f(\ell)} |\phi^*_i| - \sum_{i \in S_e(\ell)} |\phi_i| \right),

where

S_f(\ell) = \{ x \mid x \in S_b(\ell) \text{ and } c^*_x \oplus y^*_x = 1 \}
S_e(\ell) = \{ x \mid x \in S_a(\ell) \text{ and } c_x \oplus y_x = 1 \}.

Since c^*_x = y^*_x \oplus y_{q(\sigma,\ell)(x)} \oplus c_{q(\sigma,\ell)(x)} for x \in S_b(\ell) and q(\sigma, \ell) is a bijection from S_b(\ell) to S_a(\ell), then |S_f(\ell)| = |S_e(\ell)|. Furthermore, since the first k positions of \phi^* are the most reliable linearly independent positions, it follows that |\phi^*_i| \ge |\phi_j| for any i \in S_f(\ell) and j \in S_e(\ell). Thus f_s(m_2) \ge f_s(m_1). By Theorem 3, the GDA will expand a node m only when f_s(m) \le f_s^*(m_{start}). Furthermore, the cost of the optimal path is the same whether the GDA decodes \phi or \phi^*. Therefore, we have N_s(\phi^*) \le N_s(\phi).

APPENDIX C
PROOF OF THEOREM 7

Let an (n, k) code C be transmitted over an AWGN channel. If we assume that no decoding error occurs, then the decoded codeword that forms an optimal path is the transmitted codeword. Assume the transmitted codeword is (c_0, c_1, ..., c_{n-1}).
By inequality (7), given in Appendix A, if we set a = N_0 / (4\sqrt{E}) and b = \sqrt{E}, then

h^*(m_{start}) = f^*(m_{start}) = \sum_{i=0}^{n-1} \left( a\phi_i - b(-1)^{c_i} \right)^2 = \sum_{i=0}^{n-1} \left( r_i - \sqrt{E}\,(-1)^{c_i} \right)^2 = \sum_{i=0}^{n-1} e_i^2,

where e_i = r_i - \sqrt{E}\,(-1)^{c_i} for each 0 \le i \le n - 1; the e_i's are independent and identically distributed (i.i.d.), and e_i is a normal random variable with mean 0 and variance N_0 / 2. Consequently,

\frac{2}{N_0} \sum_{i=0}^{n-1} e_i^2 = \frac{2}{N_0}\, h^*(m_{start})

will be distributed as a chi-square random variable with n degrees of freedom. From [1] it follows that

\mu = E(h^*(m_{start})) = \frac{n N_0}{2} \qquad \text{and} \qquad \sigma^2 = Var(h^*(m_{start})) = \frac{n N_0^2}{2}.

However, by the central limit theorem, for large values of n, the probability distribution of h^*(m_{start}) is approximately a normal distribution with the mean \mu and variance \sigma^2 given above.

APPENDIX D

In this appendix we show that the branch cost assigned to the branch from a node at level t to a node at level t + 1 in the code tree may be replaced with the value (-1)^{c^*_t} \phi^*_t. First, we derive another form of the MLD rule which contains the value (-1)^{c^*_t} \phi^*_t. Next, we prove that the function h defined in this correspondence still satisfies inequality (3). Another form of the MLD rule can be formulated as follows [3], [14]: set c = c_\ell, where c_\ell \in C and

\sum_{j=0}^{n-1} (-1)^{c_{\ell j}} \phi_j \ge \sum_{j=0}^{n-1} (-1)^{c_{ij}} \phi_j, \quad \text{for all } c_i \in C.    (8)
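Rule (8) is the familiar correlation form of the MLD rule: expanding the squares in rule (7) (with a = b = 1) leaves only the cross term depending on the codeword, so minimizing the squared distance is the same as maximizing the correlation. A quick numerical check with a toy codebook of our own:

```python
def mld_squared_distance(phi, codebook):
    """Minimize sum_j (phi_j - (-1)^{c_j})^2 over the codebook."""
    return min(codebook,
               key=lambda c: sum((p - (-1) ** b) ** 2 for p, b in zip(phi, c)))

def mld_correlation(phi, codebook):
    """Maximize sum_j (-1)^{c_j} * phi_j over the codebook (rule (8))."""
    return max(codebook,
               key=lambda c: sum((-1) ** b * p for p, b in zip(phi, c)))
```

For any received \phi the two rules select the same codeword (up to ties), which is why the cheaper correlation-style branch cost is usable at all.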

Proof: Since \phi_j = \ln \frac{\Pr(r_j \mid 0)}{\Pr(r_j \mid 1)},

\sum_{j=0}^{n-1} \left( \phi_j - (-1)^{c_{\ell j}} \right)^2 \le \sum_{j=0}^{n-1} \left( \phi_j - (-1)^{c_{ij}} \right)^2

iff

\sum_{j=0}^{n-1} \phi_j^2 - 2 \sum_{j=0}^{n-1} (-1)^{c_{\ell j}} \phi_j + n \le \sum_{j=0}^{n-1} \phi_j^2 - 2 \sum_{j=0}^{n-1} (-1)^{c_{ij}} \phi_j + n

iff

\sum_{j=0}^{n-1} (-1)^{c_{\ell j}} \phi_j \ge \sum_{j=0}^{n-1} (-1)^{c_{ij}} \phi_j;

then the result holds directly from the MLD rule given in Section II.

By the above MLD rule we may substitute (-1)^{v_i} \phi^*_i (and |\phi_i|) for the value (\phi^*_i - (-1)^{v_i})^2 (and (|\phi_i| - 1)^2) in the definition of h (and h_s). Next, we prove that h satisfies inequality (3). Let node m_2 at level \ell be an immediate successor of node m_1. Furthermore, let v_{\ell-1} be the label of the branch from node m_1 to node m_2 and c(m_1, m_2) = (-1)^{v_{\ell-1}} \phi^*_{\ell-1}. We now prove that

h(m_1) \ge h(m_2) + c(m_1, m_2).

Case 1: \ell \le k - 1. Let v = (v_0, v_1, ..., v_{\ell-1}, v_\ell, v_{\ell+1}, ..., v_{n-1}) \in T(m_2) be such that

h(m_2) = \sum_{i=\ell}^{n-1} (-1)^{v_i} \phi^*_i.

Since v \in T(m_2) implies v \in T(m_1), then

h(m_1) \ge \sum_{i=\ell-1}^{n-1} (-1)^{v_i} \phi^*_i = h(m_2) + c(m_1, m_2).

Case 2: \ell = k. Since h(m_1) \ge h^*(m_1), h(m_2) = h^*(m_2), and h^*(m_1) \ge h^*(m_2) + c(m_1, m_2), then

h(m_1) \ge h^*(m_2) + c(m_1, m_2) = h(m_2) + c(m_1, m_2).

Case 3: \ell > k. Since h(m_1) = h^*(m_1), h(m_2) = h^*(m_2), and h^*(m_1) - c(m_1, m_2) = h^*(m_2), then

h(m_1) = h(m_2) + c(m_1, m_2).

Since the proof that h_s satisfies inequality (3) is easy, we omit it here.

ACKNOWLEDGMENT

The authors wish to thank Elaine Weinman for her invaluable help in the preparation of this manuscript and its language check. In addition, the authors wish to thank the reviewers for their invaluable suggestions, which helped to improve the presentation of the correspondence.

REFERENCES

[1] A. V. Aho, J. E. Hopcroft, and J. D. Ullman, The Design and Analysis of Computer Algorithms. Reading, MA: Addison-Wesley, 1974.
[2] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, Mar. 1974.
[3] Y. Be'ery and J. Snyders, "Optimal soft decision block decoders based on fast Hadamard transform," IEEE Trans. Inform. Theory, vol. IT-32, 1986.
[4] Y. Berger and Y. Be'ery, "Soft trellis-based decoder for linear block codes," IEEE Trans. Inform. Theory, vol. 40, May 1994.
[5] R. W. D. Booth, M. A. Herro, and G.
Solomon, "Convolutional coding techniques for certain quadratic residue codes," in Proc. Int. Telemetering Conf., 1975.
[6] G. Brassard and P. Bratley, Algorithmics: Theory and Practice. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[7] G. C. Clark, Jr., and J. B. Cain, Error-Correction Coding for Digital Communications. New York: Plenum, 1981.
[8] B. G. Dorsch, "A decoding algorithm for binary block codes and J-ary output channels," IEEE Trans. Inform. Theory, vol. IT-20, May 1974.
[9] W. Feller, An Introduction to Probability Theory and Its Applications. New York: Wiley, 1968.
[10] G. D. Forney, Jr., "Coset codes, Part II: Binary lattices and related codes," IEEE Trans. Inform. Theory, vol. 34, Sept. 1988.
[11] Y. S. Han, "Efficient soft-decision decoding algorithms for linear block codes using algorithm A*," Ph.D. dissertation, School of Computer and Information Science, Syracuse Univ., Syracuse, NY 13244, 1993.
[12] Y. S. Han and C. R. P. Hartmann, "Designing efficient maximum-likelihood soft-decision decoding algorithms for linear block codes using algorithm A*," School of Computer and Information Science, Syracuse Univ., Syracuse, NY 13244, Tech. Rep. SU-CIS-92-10, June 1992.
[13] Y. S. Han, C. R. P. Hartmann, and C.-C. Chen, "Efficient priority-first search maximum-likelihood soft-decision decoding of linear block codes," IEEE Trans. Inform. Theory, vol. 39, Sept. 1993.
[14] T.-Y. Hwang, "Decoding linear block codes for minimizing word error rate," IEEE Trans. Inform. Theory, vol. IT-25, Nov. 1979.
[15] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes. New York: Elsevier, 1977.
[16] K. R. Matis and J. W. Modestino, "Reduced-search soft-decision trellis decoding of linear block codes," IEEE Trans. Inform. Theory, vol. IT-28, Mar. 1982.
[17] N. J. Nilsson, Principles of Artificial Intelligence. Palo Alto, CA: Tioga, 1980.
[18] J. Snyders, "Reduced lists of error patterns for maximum likelihood soft decoding," IEEE Trans. Inform. Theory, vol. 37, July 1991.


More information

Analyses of Orthogonal and Non-Orthogonal Steering Vectors at Millimeter Wave Systems

Analyses of Orthogonal and Non-Orthogonal Steering Vectors at Millimeter Wave Systems Analyses of Orthogonal and Non-Orthogonal Steering Vectors at Millimeter Wave Systems Hsiao-Lan Chiang, Tobias Kadur, and Gerhard Fettweis Vodafone Chair for Mobile Communications Technische Universität

More information

arxiv: v1 [physics.data-an] 26 Oct 2012

arxiv: v1 [physics.data-an] 26 Oct 2012 Constraints on Yield Parameters in Extended Maximum Likelihood Fits Till Moritz Karbach a, Maximilian Schlu b a TU Dortmund, Germany, moritz.karbach@cern.ch b TU Dortmund, Germany, maximilian.schlu@cern.ch

More information

s v 0 q 0 v 1 q 1 v 2 (q 2) v 3 q 3 v 4

s v 0 q 0 v 1 q 1 v 2 (q 2) v 3 q 3 v 4 Discrete Adative Transmission for Fading Channels Lang Lin Λ, Roy D. Yates, Predrag Sasojevic WINLAB, Rutgers University 7 Brett Rd., NJ- fllin, ryates, sasojevg@winlab.rutgers.edu Abstract In this work

More information

Sums of independent random variables

Sums of independent random variables 3 Sums of indeendent random variables This lecture collects a number of estimates for sums of indeendent random variables with values in a Banach sace E. We concentrate on sums of the form N γ nx n, where

More information

On the capacity of the general trapdoor channel with feedback

On the capacity of the general trapdoor channel with feedback On the caacity of the general tradoor channel with feedback Jui Wu and Achilleas Anastasooulos Electrical Engineering and Comuter Science Deartment University of Michigan Ann Arbor, MI, 48109-1 email:

More information

Interactive Hypothesis Testing Against Independence

Interactive Hypothesis Testing Against Independence 013 IEEE International Symosium on Information Theory Interactive Hyothesis Testing Against Indeendence Yu Xiang and Young-Han Kim Deartment of Electrical and Comuter Engineering University of California,

More information

Beyond Worst-Case Reconstruction in Deterministic Compressed Sensing

Beyond Worst-Case Reconstruction in Deterministic Compressed Sensing Beyond Worst-Case Reconstruction in Deterministic Comressed Sensing Sina Jafarour, ember, IEEE, arco F Duarte, ember, IEEE, and Robert Calderbank, Fellow, IEEE Abstract The role of random measurement in

More information

44 CHAPTER 5. PERFORMACE OF SMALL SIGAL SETS In digital communications, we usually focus entirely on the code, and do not care what encoding ma is use

44 CHAPTER 5. PERFORMACE OF SMALL SIGAL SETS In digital communications, we usually focus entirely on the code, and do not care what encoding ma is use Performance of small signal sets Chater 5 In this chater, we show how to estimate the erformance of small-to-moderate-sized signal constellations on the discrete-time AWG channel. With equirobable signal

More information

On Code Design for Simultaneous Energy and Information Transfer

On Code Design for Simultaneous Energy and Information Transfer On Code Design for Simultaneous Energy and Information Transfer Anshoo Tandon Electrical and Comuter Engineering National University of Singaore Email: anshoo@nus.edu.sg Mehul Motani Electrical and Comuter

More information

Amin, Osama; Abediseid, Walid; Alouini, Mohamed-Slim. Institute of Electrical and Electronics Engineers (IEEE)

Amin, Osama; Abediseid, Walid; Alouini, Mohamed-Slim. Institute of Electrical and Electronics Engineers (IEEE) KAUST Reository Outage erformance of cognitive radio systems with Imroer Gaussian signaling Item tye Authors Erint version DOI Publisher Journal Rights Conference Paer Amin Osama; Abediseid Walid; Alouini

More information

Soft-Output Trellis Waveform Coding

Soft-Output Trellis Waveform Coding Soft-Output Trellis Waveform Coding Tariq Haddad and Abbas Yongaçoḡlu School of Information Technology and Engineering, University of Ottawa Ottawa, Ontario, K1N 6N5, Canada Fax: +1 (613) 562 5175 thaddad@site.uottawa.ca

More information

Hotelling s Two- Sample T 2

Hotelling s Two- Sample T 2 Chater 600 Hotelling s Two- Samle T Introduction This module calculates ower for the Hotelling s two-grou, T-squared (T) test statistic. Hotelling s T is an extension of the univariate two-samle t-test

More information

On split sample and randomized confidence intervals for binomial proportions

On split sample and randomized confidence intervals for binomial proportions On slit samle and randomized confidence intervals for binomial roortions Måns Thulin Deartment of Mathematics, Usala University arxiv:1402.6536v1 [stat.me] 26 Feb 2014 Abstract Slit samle methods have

More information

Analysis of M/M/n/K Queue with Multiple Priorities

Analysis of M/M/n/K Queue with Multiple Priorities Analysis of M/M/n/K Queue with Multile Priorities Coyright, Sanjay K. Bose For a P-riority system, class P of highest riority Indeendent, Poisson arrival rocesses for each class with i as average arrival

More information

ON THE LEAST SIGNIFICANT p ADIC DIGITS OF CERTAIN LUCAS NUMBERS

ON THE LEAST SIGNIFICANT p ADIC DIGITS OF CERTAIN LUCAS NUMBERS #A13 INTEGERS 14 (014) ON THE LEAST SIGNIFICANT ADIC DIGITS OF CERTAIN LUCAS NUMBERS Tamás Lengyel Deartment of Mathematics, Occidental College, Los Angeles, California lengyel@oxy.edu Received: 6/13/13,

More information

GIVEN an input sequence x 0,..., x n 1 and the

GIVEN an input sequence x 0,..., x n 1 and the 1 Running Max/Min Filters using 1 + o(1) Comarisons er Samle Hao Yuan, Member, IEEE, and Mikhail J. Atallah, Fellow, IEEE Abstract A running max (or min) filter asks for the maximum or (minimum) elements

More information

Robustness of classifiers to uniform l p and Gaussian noise Supplementary material

Robustness of classifiers to uniform l p and Gaussian noise Supplementary material Robustness of classifiers to uniform l and Gaussian noise Sulementary material Jean-Yves Franceschi Ecole Normale Suérieure de Lyon LIP UMR 5668 Omar Fawzi Ecole Normale Suérieure de Lyon LIP UMR 5668

More information

LDPC codes for the Cascaded BSC-BAWGN channel

LDPC codes for the Cascaded BSC-BAWGN channel LDPC codes for the Cascaded BSC-BAWGN channel Aravind R. Iyengar, Paul H. Siegel, and Jack K. Wolf University of California, San Diego 9500 Gilman Dr. La Jolla CA 9093 email:aravind,siegel,jwolf@ucsd.edu

More information

Lower Confidence Bound for Process-Yield Index S pk with Autocorrelated Process Data

Lower Confidence Bound for Process-Yield Index S pk with Autocorrelated Process Data Quality Technology & Quantitative Management Vol. 1, No.,. 51-65, 15 QTQM IAQM 15 Lower onfidence Bound for Process-Yield Index with Autocorrelated Process Data Fu-Kwun Wang * and Yeneneh Tamirat Deartment

More information

Evaluating Circuit Reliability Under Probabilistic Gate-Level Fault Models

Evaluating Circuit Reliability Under Probabilistic Gate-Level Fault Models Evaluating Circuit Reliability Under Probabilistic Gate-Level Fault Models Ketan N. Patel, Igor L. Markov and John P. Hayes University of Michigan, Ann Arbor 48109-2122 {knatel,imarkov,jhayes}@eecs.umich.edu

More information

Linear diophantine equations for discrete tomography

Linear diophantine equations for discrete tomography Journal of X-Ray Science and Technology 10 001 59 66 59 IOS Press Linear diohantine euations for discrete tomograhy Yangbo Ye a,gewang b and Jiehua Zhu a a Deartment of Mathematics, The University of Iowa,

More information

Multi-Operation Multi-Machine Scheduling

Multi-Operation Multi-Machine Scheduling Multi-Oeration Multi-Machine Scheduling Weizhen Mao he College of William and Mary, Williamsburg VA 3185, USA Abstract. In the multi-oeration scheduling that arises in industrial engineering, each job

More information

Periodic scheduling 05/06/

Periodic scheduling 05/06/ Periodic scheduling T T or eriodic scheduling, the best that we can do is to design an algorithm which will always find a schedule if one exists. A scheduler is defined to be otimal iff it will find a

More information

An Introduction To Range Searching

An Introduction To Range Searching An Introduction To Range Searching Jan Vahrenhold eartment of Comuter Science Westfälische Wilhelms-Universität Münster, Germany. Overview 1. Introduction: Problem Statement, Lower Bounds 2. Range Searching

More information

CHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules

CHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules CHAPTER-II Control Charts for Fraction Nonconforming using m-of-m Runs Rules. Introduction: The is widely used in industry to monitor the number of fraction nonconforming units. A nonconforming unit is

More information

SOFT DECISION FANO DECODING OF BLOCK CODES OVER DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM

SOFT DECISION FANO DECODING OF BLOCK CODES OVER DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM Journal of ELECTRICAL ENGINEERING, VOL. 63, NO. 1, 2012, 59 64 SOFT DECISION FANO DECODING OF BLOCK CODES OVER DISCRETE MEMORYLESS CHANNEL USING TREE DIAGRAM H. Prashantha Kumar Udupi Sripati K. Rajesh

More information

MATH 2710: NOTES FOR ANALYSIS

MATH 2710: NOTES FOR ANALYSIS MATH 270: NOTES FOR ANALYSIS The main ideas we will learn from analysis center around the idea of a limit. Limits occurs in several settings. We will start with finite limits of sequences, then cover infinite

More information

Feedback-error control

Feedback-error control Chater 4 Feedback-error control 4.1 Introduction This chater exlains the feedback-error (FBE) control scheme originally described by Kawato [, 87, 8]. FBE is a widely used neural network based controller

More information

Uniform Law on the Unit Sphere of a Banach Space

Uniform Law on the Unit Sphere of a Banach Space Uniform Law on the Unit Shere of a Banach Sace by Bernard Beauzamy Société de Calcul Mathématique SA Faubourg Saint Honoré 75008 Paris France Setember 008 Abstract We investigate the construction of a

More information

On Using FASTEM2 for the Special Sensor Microwave Imager (SSM/I) March 15, Godelieve Deblonde Meteorological Service of Canada

On Using FASTEM2 for the Special Sensor Microwave Imager (SSM/I) March 15, Godelieve Deblonde Meteorological Service of Canada On Using FASTEM2 for the Secial Sensor Microwave Imager (SSM/I) March 15, 2001 Godelieve Deblonde Meteorological Service of Canada 1 1. Introduction Fastem2 is a fast model (multile-linear regression model)

More information

Finite-State Verification or Model Checking. Finite State Verification (FSV) or Model Checking

Finite-State Verification or Model Checking. Finite State Verification (FSV) or Model Checking Finite-State Verification or Model Checking Finite State Verification (FSV) or Model Checking Holds the romise of roviding a cost effective way of verifying imortant roerties about a system Not all faults

More information

Information collection on a graph

Information collection on a graph Information collection on a grah Ilya O. Ryzhov Warren Powell February 10, 2010 Abstract We derive a knowledge gradient olicy for an otimal learning roblem on a grah, in which we use sequential measurements

More information

GOOD MODELS FOR CUBIC SURFACES. 1. Introduction

GOOD MODELS FOR CUBIC SURFACES. 1. Introduction GOOD MODELS FOR CUBIC SURFACES ANDREAS-STEPHAN ELSENHANS Abstract. This article describes an algorithm for finding a model of a hyersurface with small coefficients. It is shown that the aroach works in

More information

Solved Problems. (a) (b) (c) Figure P4.1 Simple Classification Problems First we draw a line between each set of dark and light data points.

Solved Problems. (a) (b) (c) Figure P4.1 Simple Classification Problems First we draw a line between each set of dark and light data points. Solved Problems Solved Problems P Solve the three simle classification roblems shown in Figure P by drawing a decision boundary Find weight and bias values that result in single-neuron ercetrons with the

More information

Robust Predictive Control of Input Constraints and Interference Suppression for Semi-Trailer System

Robust Predictive Control of Input Constraints and Interference Suppression for Semi-Trailer System Vol.7, No.7 (4),.37-38 htt://dx.doi.org/.457/ica.4.7.7.3 Robust Predictive Control of Inut Constraints and Interference Suression for Semi-Trailer System Zhao, Yang Electronic and Information Technology

More information

The decision-feedback equalizer optimization for Gaussian noise

The decision-feedback equalizer optimization for Gaussian noise Journal of Theoretical and Alied Comuter Science Vol. 8 No. 4 4. 5- ISSN 99-634 (rinted 3-5653 (online htt://www.jtacs.org The decision-feedback eualizer otimization for Gaussian noise Arkadiusz Grzbowski

More information

The Knuth-Yao Quadrangle-Inequality Speedup is a Consequence of Total-Monotonicity

The Knuth-Yao Quadrangle-Inequality Speedup is a Consequence of Total-Monotonicity The Knuth-Yao Quadrangle-Ineuality Seedu is a Conseuence of Total-Monotonicity Wolfgang W. Bein Mordecai J. Golin Lawrence L. Larmore Yan Zhang Abstract There exist several general techniues in the literature

More information

The E8 Lattice and Error Correction in Multi-Level Flash Memory

The E8 Lattice and Error Correction in Multi-Level Flash Memory The E8 Lattice and Error Correction in Multi-Level Flash Memory Brian M Kurkoski University of Electro-Communications Tokyo, Japan kurkoski@iceuecacjp Abstract A construction using the E8 lattice and Reed-Solomon

More information

Correspondence Between Fractal-Wavelet. Transforms and Iterated Function Systems. With Grey Level Maps. F. Mendivil and E.R.

Correspondence Between Fractal-Wavelet. Transforms and Iterated Function Systems. With Grey Level Maps. F. Mendivil and E.R. 1 Corresondence Between Fractal-Wavelet Transforms and Iterated Function Systems With Grey Level Mas F. Mendivil and E.R. Vrscay Deartment of Alied Mathematics Faculty of Mathematics University of Waterloo

More information

Shadow Computing: An Energy-Aware Fault Tolerant Computing Model

Shadow Computing: An Energy-Aware Fault Tolerant Computing Model Shadow Comuting: An Energy-Aware Fault Tolerant Comuting Model Bryan Mills, Taieb Znati, Rami Melhem Deartment of Comuter Science University of Pittsburgh (bmills, znati, melhem)@cs.itt.edu Index Terms

More information

Uncorrelated Multilinear Principal Component Analysis for Unsupervised Multilinear Subspace Learning

Uncorrelated Multilinear Principal Component Analysis for Unsupervised Multilinear Subspace Learning TNN-2009-P-1186.R2 1 Uncorrelated Multilinear Princial Comonent Analysis for Unsuervised Multilinear Subsace Learning Haiing Lu, K. N. Plataniotis and A. N. Venetsanooulos The Edward S. Rogers Sr. Deartment

More information

1. INTRODUCTION. Fn 2 = F j F j+1 (1.1)

1. INTRODUCTION. Fn 2 = F j F j+1 (1.1) CERTAIN CLASSES OF FINITE SUMS THAT INVOLVE GENERALIZED FIBONACCI AND LUCAS NUMBERS The beautiful identity R.S. Melham Deartment of Mathematical Sciences, University of Technology, Sydney PO Box 23, Broadway,

More information

Information collection on a graph

Information collection on a graph Information collection on a grah Ilya O. Ryzhov Warren Powell October 25, 2009 Abstract We derive a knowledge gradient olicy for an otimal learning roblem on a grah, in which we use sequential measurements

More information

Principal Components Analysis and Unsupervised Hebbian Learning

Principal Components Analysis and Unsupervised Hebbian Learning Princial Comonents Analysis and Unsuervised Hebbian Learning Robert Jacobs Deartment of Brain & Cognitive Sciences University of Rochester Rochester, NY 1467, USA August 8, 008 Reference: Much of the material

More information

2 K. ENTACHER 2 Generalized Haar function systems In the following we x an arbitrary integer base b 2. For the notations and denitions of generalized

2 K. ENTACHER 2 Generalized Haar function systems In the following we x an arbitrary integer base b 2. For the notations and denitions of generalized BIT 38 :2 (998), 283{292. QUASI-MONTE CARLO METHODS FOR NUMERICAL INTEGRATION OF MULTIVARIATE HAAR SERIES II KARL ENTACHER y Deartment of Mathematics, University of Salzburg, Hellbrunnerstr. 34 A-52 Salzburg,

More information

Supplementary Materials for Robust Estimation of the False Discovery Rate

Supplementary Materials for Robust Estimation of the False Discovery Rate Sulementary Materials for Robust Estimation of the False Discovery Rate Stan Pounds and Cheng Cheng This sulemental contains roofs regarding theoretical roerties of the roosed method (Section S1), rovides

More information

Matching Partition a Linked List and Its Optimization

Matching Partition a Linked List and Its Optimization Matching Partition a Linked List and Its Otimization Yijie Han Deartment of Comuter Science University of Kentucky Lexington, KY 40506 ABSTRACT We show the curve O( n log i + log (i) n + log i) for the

More information

A randomized sorting algorithm on the BSP model

A randomized sorting algorithm on the BSP model A randomized sorting algorithm on the BSP model Alexandros V. Gerbessiotis a, Constantinos J. Siniolakis b a CS Deartment, New Jersey Institute of Technology, Newark, NJ 07102, USA b The American College

More information

Homework Set #3 Rates definitions, Channel Coding, Source-Channel coding

Homework Set #3 Rates definitions, Channel Coding, Source-Channel coding Homework Set # Rates definitions, Channel Coding, Source-Channel coding. Rates (a) Channels coding Rate: Assuming you are sending 4 different messages using usages of a channel. What is the rate (in bits

More information

EXACTLY PERIODIC SUBSPACE DECOMPOSITION BASED APPROACH FOR IDENTIFYING TANDEM REPEATS IN DNA SEQUENCES

EXACTLY PERIODIC SUBSPACE DECOMPOSITION BASED APPROACH FOR IDENTIFYING TANDEM REPEATS IN DNA SEQUENCES EXACTLY ERIODIC SUBSACE DECOMOSITION BASED AROACH FOR IDENTIFYING TANDEM REEATS IN DNA SEUENCES Ravi Guta, Divya Sarthi, Ankush Mittal, and Kuldi Singh Deartment of Electronics & Comuter Engineering, Indian

More information

An Investigation on the Numerical Ill-conditioning of Hybrid State Estimators

An Investigation on the Numerical Ill-conditioning of Hybrid State Estimators An Investigation on the Numerical Ill-conditioning of Hybrid State Estimators S. K. Mallik, Student Member, IEEE, S. Chakrabarti, Senior Member, IEEE, S. N. Singh, Senior Member, IEEE Deartment of Electrical

More information

COMPARISON OF VARIOUS OPTIMIZATION TECHNIQUES FOR DESIGN FIR DIGITAL FILTERS

COMPARISON OF VARIOUS OPTIMIZATION TECHNIQUES FOR DESIGN FIR DIGITAL FILTERS NCCI 1 -National Conference on Comutational Instrumentation CSIO Chandigarh, INDIA, 19- March 1 COMPARISON OF VARIOUS OPIMIZAION ECHNIQUES FOR DESIGN FIR DIGIAL FILERS Amanjeet Panghal 1, Nitin Mittal,Devender

More information

Finding Shortest Hamiltonian Path is in P. Abstract

Finding Shortest Hamiltonian Path is in P. Abstract Finding Shortest Hamiltonian Path is in P Dhananay P. Mehendale Sir Parashurambhau College, Tilak Road, Pune, India bstract The roblem of finding shortest Hamiltonian ath in a eighted comlete grah belongs

More information

arxiv:cond-mat/ v2 25 Sep 2002

arxiv:cond-mat/ v2 25 Sep 2002 Energy fluctuations at the multicritical oint in two-dimensional sin glasses arxiv:cond-mat/0207694 v2 25 Se 2002 1. Introduction Hidetoshi Nishimori, Cyril Falvo and Yukiyasu Ozeki Deartment of Physics,

More information

Analysis of execution time for parallel algorithm to dertmine if it is worth the effort to code and debug in parallel

Analysis of execution time for parallel algorithm to dertmine if it is worth the effort to code and debug in parallel Performance Analysis Introduction Analysis of execution time for arallel algorithm to dertmine if it is worth the effort to code and debug in arallel Understanding barriers to high erformance and redict

More information

Recent Developments in Multilayer Perceptron Neural Networks

Recent Developments in Multilayer Perceptron Neural Networks Recent Develoments in Multilayer Percetron eural etworks Walter H. Delashmit Lockheed Martin Missiles and Fire Control Dallas, Texas 75265 walter.delashmit@lmco.com walter.delashmit@verizon.net Michael

More information

Minimax Design of Nonnegative Finite Impulse Response Filters

Minimax Design of Nonnegative Finite Impulse Response Filters Minimax Design of Nonnegative Finite Imulse Resonse Filters Xiaoing Lai, Anke Xue Institute of Information and Control Hangzhou Dianzi University Hangzhou, 3118 China e-mail: laix@hdu.edu.cn; akxue@hdu.edu.cn

More information

Positive decomposition of transfer functions with multiple poles

Positive decomposition of transfer functions with multiple poles Positive decomosition of transfer functions with multile oles Béla Nagy 1, Máté Matolcsi 2, and Márta Szilvási 1 Deartment of Analysis, Technical University of Budaest (BME), H-1111, Budaest, Egry J. u.

More information

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley

Elements of Asymptotic Theory. James L. Powell Department of Economics University of California, Berkeley Elements of Asymtotic Theory James L. Powell Deartment of Economics University of California, Berkeley Objectives of Asymtotic Theory While exact results are available for, say, the distribution of the

More information

Parallelism and Locality in Priority Queues. A. Ranade S. Cheng E. Deprit J. Jones S. Shih. University of California. Berkeley, CA 94720

Parallelism and Locality in Priority Queues. A. Ranade S. Cheng E. Deprit J. Jones S. Shih. University of California. Berkeley, CA 94720 Parallelism and Locality in Priority Queues A. Ranade S. Cheng E. Derit J. Jones S. Shih Comuter Science Division University of California Berkeley, CA 94720 Abstract We exlore two ways of incororating

More information