Convolutional Codes. Lecture Notes 8: Trellis Codes. Example: K=3,M=2, rate 1/2 code. Figure 95: Convolutional Encoder


Convolutional Codes. Lecture Notes 8: Trellis Codes. In this lecture we discuss the construction of signals via a trellis. That is, signals are constructed by labeling the branches of an infinite trellis with signals from a small set. Because the trellis is of infinite length, this is conceptually different from the signals created in the previous chapter. When the codes generated are linear (the sum of any two code sequences is also a valid code sequence) the codes are known as convolutional codes. We first discuss convolutional codes, then optimum decoding of convolutional codes, then ways to evaluate the performance of convolutional codes. Finally we discuss the more general trellis codes for QAM and PSK types of modulation. Unlike block codes, convolutional codes are not of fixed length. The encoder instead processes the information bit sequence using a sliding window to produce a channel bit sequence. The window operates on a number of information bits at a time to produce a number of channel bits. For example, the encoder shown below examines three consecutive information bits and produces two channel bits. The encoder then shifts in a new information bit and produces another set of two channel bits based on the new information bit and the previous two information bits. In general the encoder stores M information bits. Based on these bits and the current set of k input bits it produces n channel bits. The memory of the encoder is M. The constraint length K is the largest number of consecutive input bits on which any particular output depends. In the above example the outputs depend on a maximum of K = 3 consecutive input bits. The rate is k/n. The operation producing the channel bits is a linear combination of the information bits in the encoder. Because of this linearity, each output of the encoder is a convolution of the input information stream with some impulse response of the encoder, and hence the name convolutional codes.
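The sliding-window operation can be sketched in code. A minimal Python sketch, assuming the standard K=3, rate-1/2 generators g0 = (1,1,1) and g1 = (1,0,1) (7 and 5 in octal, the usual textbook choice; the taps in the notes' figure are assumed to match):

```python
def conv_encode(bits, g=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2, K=3 feedforward convolutional encoder.

    g holds the two generator tap vectors (assumed 7 and 5 octal).
    Each input bit produces two channel bits from the window of the
    current bit plus the M = 2 memory bits.
    """
    state = [0, 0]                # the M = 2 memory bits
    out = []
    for b in bits:
        window = [b] + state      # current bit plus memory
        for taps in g:
            out.append(sum(t * w for t, w in zip(taps, window)) % 2)
        state = [b] + state[:-1]  # shift the new bit in
    return out
```

For example, `conv_encode([1, 0, 1, 1])` produces the channel bits 11 10 00 01, one pair per input bit.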
Example: K=3, M=2, rate 1/2 code (Figure 95: Convolutional Encoder). In this example, the input to the encoder is the sequence of information symbols i_0, i_1, i_2, .... The output of the top part of the encoder is c_{2l} = sum_{m=0}^{M} i_{l-m} g_{0,m}, and the output of the bottom part of the encoder is c_{2l+1} = sum_{m=0}^{M} i_{l-m} g_{1,m}, where g_0 = (g_{0,0}, g_{0,1}, g_{0,2}) = (1,1,1) and g_1 = (1,0,1). The above relations are convolutions of the input with the vectors g_0 and g_1, known as the generators of the code. From the above equations it is easy to check that the sum of any two codewords generated by two information sequences corresponds to

the codeword generated from the sum of the two information sequences. Thus the code is linear. Because of this we can assume in our analysis, without loss of generality, that the all-zeros information sequence (and codeword) is the transmitted sequence. The operation of the encoder can be determined completely by way of a state transition diagram. The state transition diagram is a directed graph with a node for each possible encoder content and transitions between nodes corresponding to the results of different input bits to the encoder. The transitions are labeled with the output bits of the code. This is shown for the previous example (Figure 96: State Transition Diagram of Encoder), along with one trellis section. Maximum Likelihood Sequence Detection of States of a Markov Chain. Consider a finite state Markov chain. Let x_k be the sequence of random variables representing the state at time k. Let x_0 be the initial state of the process with probability p(x_0). Later on we will denote the states by the integers 0, 1, ..., N-1. Since this is a Markov process we have that p(x_{k+1} | x_k, x_{k-1}, ..., x_0) = p(x_{k+1} | x_k). Also p(x_0, x_1, ..., x_M) = p(x_0) prod_{k=1}^{M} p(x_k | x_{k-1}). Let w_k = (x_k, x_{k+1}) be the state transition at time k. There is a one-to-one correspondence between state sequences x_0, x_1, ..., x_M and transition sequences w_0 = (x_0, x_1), ..., w_{M-1} = (x_{M-1}, x_M).
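The state transition diagram can be tabulated programmatically. A sketch for the K=3 example, again assuming the (7,5) octal generators; the state is the pair of memory bits:

```python
def next_state_and_output(state, bit, g=((1, 1, 1), (1, 0, 1))):
    """Return (next_state, output_pair) for one encoder transition."""
    window = (bit,) + state
    out = tuple(sum(t * w for t, w in zip(taps, window)) % 2 for taps in g)
    return (bit, state[0]), out

# Enumerate every edge of the state transition diagram.
table = {}
for s0 in (0, 1):
    for s1 in (0, 1):
        for b in (0, 1):
            table[((s0, s1), b)] = next_state_and_output((s0, s1), b)
```

Each entry maps (current state, input bit) to (next state, two output bits), which is exactly the labeling of Figure 96.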

Observation: By some mechanism (e.g. a noisy channel) a noisy version z of the state transition sequence is observed. Based on this noisy version of w we wish to estimate the state sequence x or the transition sequence w. Since w and x contain the same information we have p(x | z) = p(w | z), where z = (z_0, ..., z_{M-1}), x = (x_0, ..., x_M) and w = (w_0, ..., w_{M-1}). We say a channel is memoryless if p(z | w) = prod_{k=0}^{M-1} p(z_k | w_k). Likelihood Calculation: Given an observation z of a memoryless channel, our goal is to find the state sequence x for which the a posteriori probability p(x | z) is largest. This minimizes the probability that we choose the wrong sequence. Thus the optimum (minimum sequence error probability) decoder chooses the x which maximizes p(x | z):

x-hat = argmax_x p(x | z) = argmax_x p(z | x) p(x) = argmin_x [ -log p(z | x) - log p(x) ].

Markov State and Memoryless Channel; the Viterbi algorithm (dynamic programming). Using the memoryless property of the channel we obtain p(z | x) = prod_{k=0}^{M-1} p(z_k | w_k). Using the Markov property of the state sequence (with given initial state) yields p(x) = prod_{k=0}^{M-1} p(x_{k+1} | x_k). Define the branch metric

lambda_k(w_k) = -ln p(z_k | w_k) - ln p(x_{k+1} | x_k).

Then x-hat = argmin_x sum_{k=0}^{M-1} lambda_k(w_k). This problem formulation leads to a recursive solution. The recursive solution is called the Viterbi Algorithm by communication engineers and is a form of dynamic programming as studied by control engineers; they are really the same. Let Gamma_k(x) be the length (optimization criterion) of the shortest (optimum) path to state x at time k. Let x-hat_k(x) be the shortest path to state x at time k. Let Gamma-hat_{k+1}(x', x) be the length of the path to state x at time k+1 that goes through state x' at time k. The algorithm works as follows. Storage: the time index k, and for each state x the survivor x-hat_k(x) and its metric Gamma_k(x). Initialization: k = 0; Gamma_0(x_0) = 0 for the known initial state x_0 and Gamma_0(x) = infinity for all other states (arbitrary if the initial state is unknown).

Recursion: Gamma-hat_{k+1}(x', x) = Gamma_k(x') + lambda_k(x', x); Gamma_{k+1}(x) = min_{x'} Gamma-hat_{k+1}(x', x) for each state x. Let x'(x) = argmin_{x'} Gamma-hat_{k+1}(x', x); then the survivor is x-hat_{k+1}(x) = (x-hat_k(x'(x)), x). Justification: Basically we are interested in finding the shortest length path through the trellis. At time k+1 we find the shortest length paths to each of the possible states by computing all possible ways of getting to state u at time k+1 from a state at time k. If the shortest path (denoted x-hat(u)) to get to state u at time k+1 goes through state v at time k, then the corresponding path to state v must be the shortest path to state v at time k: if there were a shorter path to state v, then the path to state u that used this shorter path would be shorter than what we assumed was the shortest path. Stated another way, if the shortest way of getting to state u at time k+1 is by going through state v at time k, then the path used to get to state v must be the shortest of all paths to state v at time k. We identify a state transition with the pair of bits transmitted. The received pair of decision statistics is our noisy information about the transition. Thus p(z_k | w_k) in this case is just the transition probability from the input of the channel to the output of the channel, because knowing the state transition determines the channel input. Example 1: Binary Symmetric Channel (BSC). Here p(z_k | w_k) = p^{d_H(z_k, c_k)} (1-p)^{2 - d_H(z_k, c_k)}, where c_k is the pair of code bits on the transition and d_H is Hamming distance, so -ln p(z_k | w_k) = d_H(z_k, c_k) ln((1-p)/p) - 2 ln(1-p). The second term is common to all branches, so minimizing the metric is the same as choosing the sequence with the closest Hamming distance. Example 2: Additive white Gaussian noise channel (AWGN). Here z_i = u_i + n_i, where the noise n_i is Gaussian with mean 0 and variance N_0/2. The possible inputs to the channel are a finite set of real numbers, obtained from the code bits by the simple mapping u_i = sqrt(E)(2 c_i - 1), i.e. u_i = plus or minus sqrt(E).
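The recursion above can be sketched in Python for the K=3, rate-1/2 code (generators 7,5 octal assumed, as elsewhere in these notes). The branch metric is Hamming distance, i.e. the BSC metric of Example 1, and this sketch assumes a tail-terminated input so the encoder starts and ends in state 00:

```python
import math

G = ((1, 1, 1), (1, 0, 1))          # assumed (7,5) octal generators

def branch(state, bit):
    window = (bit,) + state
    out = tuple(sum(t * w for t, w in zip(g, window)) % 2 for g in G)
    return (bit, state[0]), out

def viterbi(received):
    """Hard-decision Viterbi decoder; received is a list of bit pairs."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    Gamma = {s: (0 if s == (0, 0) else math.inf) for s in states}
    paths = {s: [] for s in states}
    for z in received:
        Gamma2, paths2 = {}, {}
        for s in states:
            cand = []
            for sp in states:            # predecessors sp reaching s
                for bit in (0, 1):
                    ns, out = branch(sp, bit)
                    if ns == s and Gamma[sp] < math.inf:
                        m = Gamma[sp] + sum(a != b for a, b in zip(out, z))
                        cand.append((m, sp, bit))
            if cand:
                m, sp, bit = min(cand)   # keep the survivor only
                Gamma2[s], paths2[s] = m, paths[sp] + [bit]
            else:
                Gamma2[s], paths2[s] = math.inf, []
        Gamma, paths = Gamma2, paths2
    return paths[(0, 0)]                 # survivor ending in the zero state
```

Since the free distance of this code is 5, the decoder recovers a short terminated message even when one channel bit is flipped.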

The transition probability is

p(z | u) = prod_i (1/sqrt(pi N_0)) exp( -(z_i - u_i)^2 / N_0 ) = (pi N_0)^{-n/2} exp( -d_E^2(z, u) / N_0 ),

where d_E^2(z, u) = sum_i (z_i - u_i)^2 is the squared Euclidean distance between the input and the output of the channel. Thus finding u to maximize p(z | u) is equivalent to finding u to minimize d_E^2(z, u). Thus in these two cases we can equivalently use a distance function (between what is received and what would have been transmitted along a given path) as the function we minimize. This should be obvious from the case of block codes. Weight Enumerator for Convolutional Codes. In this section we show how to determine the weight enumerator polynomials for convolutional codes. The weight enumerator polynomial is a method for counting the number of codewords of a given Hamming weight with a certain number of input ones and a certain length. It will be useful when error probability bounds are discussed. Consider the earlier example with four states. We would like to determine the number of paths that start in the zero state (say), diverge from the zero state for some time, and then remerge with the zero state, such that there are a certain number of input ones, a certain number of output ones, and a certain length. To do this, let us split the zero state into a beginning state and an ending state. In addition we label each path with three variables (x, y, z): the power on x is the number of input ones, the power on y is the number of output ones, and the power on z is the length of that path (namely one branch). To get the parameters for two consecutive branches we multiply these variables (Figure 97: State transition diagram of encoder with branches labeled by x, y, z monomials).

Let T_{10} represent the generating function of paths starting in the (split) all-zero beginning state and ending in state 10; this includes paths that go through state 10 any number of times. Similarly let T_{11} be the generating function of paths ending in state 11, and T_{01} that of paths ending in state 01. Then we can write the following equations:

T_{10} = x y^2 z + x z T_{01}
T_{11} = x y z T_{10} + x y z T_{11}
T_{01} = y z T_{10} + y z T_{11}

The generating function of paths starting at the beginning state and ending in the all-zero ending state is then A(x, y, z) = y^2 z T_{01}. From these equations we can solve for A(x, y, z) = x y^5 z^3 / (1 - x y z (1 + z)). The following Maple code solves this problem (reconstructed to match the equations above; T[3] is T_{01}):

with(linalg):
A := matrix(3, 3, [[1, 0, -x*z], [-x*y*z, 1-x*y*z, 0], [-y*z, -y*z, 1]]):
b := vector([x*y^2*z, 0, 0]):
T := linsolve(A, b):
fxx := T[3]*y^2*z:        # A(x,y,z)
fxx2 := diff(fxx, x):
fxx3 := simplify(fxx2):
fxx4 := eval(fxx3, z=1):
fxx5 := eval(fxx4, x=1):
wy := taylor(fxx5, y=0, 10);

Expanding the transfer function,

A(x, y, z) = x y^5 z^3 + x^2 y^6 z^4 + x^2 y^6 z^5 + x^3 y^7 z^5 + 2 x^3 y^7 z^6 + x^3 y^7 z^7 + x^4 y^8 z^6 + 3 x^4 y^8 z^7 + ...

Thus there is one path through the trellis with one input one, 5 output ones and length 3. There is one path diverging and remerging with 2 input ones, length 4 and 6 output ones. The minimum (or free) distance of a convolutional code is the minimum number of output ones on any path that diverges from the all-zero state and then remerges. This code has d_free = 5. To calculate the A_l used in the first event error probability bound we calculate A(1, y, 1) = y^5/(1 - 2y) = y^5 + 2y^6 + 4y^7 + 8y^8 + ...; the coefficient on y^l is A_l. In order to determine the bit error probability, the next section shows that the polynomial given by

w(y) = dA(x, y, z)/dx evaluated at x = 1, z = 1

is needed. For this example the polynomial is w(y) = y^5/(1 - 2y)^2 = y^5 + 4y^6 + 12y^7 + 32y^8 + .... Example: K=7, M=6, rate 1/2 code. This code has d_free = 10. Its weight enumerator is given as a ratio of polynomials: the first event error probability is determined by a(y)/b(y), and the bit error probability is determined by c(y)/b(y).
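As a cross-check on the transfer function, the coefficients A_{i,d,l} can be enumerated directly by walking detours through the trellis. A sketch in Python, again assuming the (7,5) generators of the example:

```python
from collections import defaultdict

G = ((1, 1, 1), (1, 0, 1))          # assumed (7,5) octal generators

def detour_spectrum(max_len):
    """Count A[i, d, l]: detours from the zero state with i input ones,
    output weight d, and length l branches (l <= max_len)."""
    A = defaultdict(int)
    # frontier maps (state, input ones so far, output ones so far) -> count
    # after the mandatory diverging branch 00 -> 10 (input 1, output 11)
    frontier = defaultdict(int)
    frontier[((1, 0), 1, 2)] = 1
    for l in range(2, max_len + 1):
        nxt = defaultdict(int)
        for (state, i, d), cnt in frontier.items():
            for bit in (0, 1):
                window = (bit,) + state
                w = sum(sum(t * v for t, v in zip(g, window)) % 2 for g in G)
                ns = (bit, state[0])
                if ns == (0, 0):
                    A[(i, d + w, l)] += cnt      # remerged: record detour
                else:
                    nxt[(ns, i + bit, d + w)] += cnt
        frontier = nxt
    return A
```

The counts agree with the series expansion above, e.g. one detour with i=1, d=5, l=3 and four detours of output weight 7.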

(The polynomials a(y), b(y) and c(y) for the K=7 code are high-degree polynomials obtained by symbolically solving the 64-state transfer function; their full coefficient lists do not survive this transcription. The leading terms of the resulting bit-error weight enumerator are c(y)/b(y) = 36 y^10 + 211 y^12 + 1404 y^14 + 11633 y^16 + ....)

Error Bounds for Convolutional Codes. We are interested in determining the probability that the decoder makes an error. We will define several types of errors. Without loss of generality we will assume that the information sequence is the all-zeros sequence, so the transmitted codeword is the all-zeros codeword. Furthermore we will assume the trellis starts at time unit 0. Normally a code is truncated (forced back to the all-zero state) after a large number of code symbols have been transmitted, but we do not require this.

First Event Error. A first event error is said to occur at time j if the all-zeros path (the correct path) is eliminated for the first time at time j; that is, if the path with the best metric to the zero state at time j is not the all-zeros path, and this is the first time that the all-zeros path has been eliminated. At time j an incorrect path will be chosen over the correct path if the likelihood of the received sequence given the incorrect path is greater than the likelihood given the correct path. If the incorrect path has (output) weight d, then the probability that the incorrect path is more likely than the all-zeros path is denoted P_2(d). This is easily calculated for most channels since it is just the error probability of a repetition code of length d. For an additive white Gaussian noise channel it is given by

P_2(d) = Q( sqrt(2 d E / N_0) ).

For a binary symmetric channel with crossover probability p it is given by

P_2(d) = sum_{n=(d+1)/2}^{d} C(d,n) p^n (1-p)^{d-n}   for d odd,
P_2(d) = (1/2) C(d, d/2) [p(1-p)]^{d/2} + sum_{n=d/2+1}^{d} C(d,n) p^n (1-p)^{d-n}   for d even.

The first event error probability at time j, P_f, can then be bounded (using the union bound) as

P_f <= sum_l A_l P_2(l),

where A_l is the number of paths through the trellis with output weight l. This is a union type bound. However, it is also an upper bound since at any finite time there will only be a finite number of incorrect paths that merge with the correct path at time j; we have included in the upper bound paths of all lengths, as if the trellis started at time minus infinity. This makes the bound independent of the time index. (We will show later how to calculate A_l for all l via a generating function.) Since each term in the infinite sum is nonnegative, the sum is either a finite positive number or infinite. For example, the standard code of constraint length 3 has A_l = 2^{l-5}, so unless the pairwise error probability decreases faster than 2^{-l} the above bound will be infinite. The pairwise error probability will decrease fast enough for reasonably large signal-to-noise ratio or reasonably small crossover probability.
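The two pairwise error probability formulas above translate directly into code; a small sketch (E/N_0 is the energy-to-noise ratio per coded symbol, in linear units):

```python
import math
from math import comb

def Q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def p2_awgn(d, EsN0):
    """P2(d) for soft decisions on the AWGN channel."""
    return Q(math.sqrt(2 * d * EsN0))

def p2_bsc(d, p):
    """P2(d) for hard decisions on a BSC with crossover probability p."""
    if d % 2:   # d odd
        return sum(comb(d, n) * p**n * (1 - p)**(d - n)
                   for n in range((d + 1) // 2, d + 1))
    half = 0.5 * comb(d, d // 2) * (p * (1 - p))**(d // 2)
    return half + sum(comb(d, n) * p**n * (1 - p)**(d - n)
                      for n in range(d // 2 + 1, d + 1))
```

With these, a truncated union bound on P_f is simply `sum(2**(l-5) * p2_bsc(l, p) for l in range(5, L))` for the constraint length 3 code.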
In general the sequence A_l may have a periodic component that is zero but otherwise is a positive increasing sequence, while P_2(l) for reasonable channels is a decreasing sequence. If the channel is fairly noisy then the above upper bound on first event error probability may converge to something larger than 1, or even diverge. In this case, since 1 is clearly an upper bound on any probability we are interested in, the bound can be improved to

P_f <= min{ 1, sum_l A_l P_2(l) }.

For example, the well known constraint length 7, rate 1/2 convolutional code has coefficients A_l that grow geometrically at a rate strictly less than 2^l, so that provided P_2(l) decreases faster than the coefficients grow the bound above will converge. Since P_2(l) <= D^l, where (for hard decisions) D = 2 sqrt(p(1-p)), the above bound converges for sufficiently small crossover probability p. For soft decisions (and additive white Gaussian noise) D = e^{-E/N_0}, and thus the bound converges provided E/N_0 exceeds a modest threshold. Bit error probability. Below we find an upper bound on the bit error probability for binary convolutional codes; the generalization to nonbinary codes is straightforward. In order to calculate the bit error probability in decoding between time j and j+1 we need to consider all paths through the trellis that are not in the all-zeros state at either time unit j or j+1. We also need to realize that not all incorrect paths will cause a bit error. First consider a rate 1/n code (i.e. k = 1), so each transition from one state to the next is determined by a single bit. To compute an upper bound on the bit error probability we will use a union bound over all paths that cause a bit error. We assume that the trellis started at time minus infinity. We do this in two steps. First we look at each path diverging from and then remerging to the all-zero state; this is called an error event (an error event can only diverge once from the all-zero state). Then we sum over all possible starting times (times when the path diverges from the all-zero state) for each of these error events.
So take a particular error event of length l corresponding to a state sequence with i input ones and d output ones, and let A_{i,d,l} be the number of such paths. If the error event started at time unit j then it would clearly cause a bit error, since the input bit must be one upon starting an error event (diverging from the all-zero state). However, if the event ended at time unit j+1 in the all-zero state then no bit error would be made, since remerging to the all-zero state corresponds to an input bit of 0. Of the l phases of the error event that

overlap with the transition from time j to j+1, there are exactly i that will cause a bit error. So for each error event we need to weight its probability by the number of input ones on that path. Thus the bit error probability (for k = 1) can be upper bounded by

P_b <= sum_{i,d,l} i A_{i,d,l} P_2(d),

where P_2(d) is the probability of error between two codewords that differ in d positions. As before, this bound is independent of the time index since we have included all paths as if the trellis ran from time minus infinity to plus infinity. If we define the weight enumerator polynomial for a convolutional code as

A(x, y, z) = sum_{i,d,l} A_{i,d,l} x^i y^d z^l,

and upper bound P_2(d) using the Chernoff or Bhattacharyya bound P_2(d) <= D^d, then the upper bound on first event error probability is just

P_f <= A(x, y, z) evaluated at x = 1, y = D, z = 1.

Similarly the bit error probability can be further upper bounded by

P_b <= dA(x, y, z)/dx evaluated at x = 1, y = D, z = 1.

As mentioned previously, the above bounds may be larger than one (for small signal-to-noise ratio or high crossover probability). This will happen for a larger range of parameters when we use the generating function with the Bhattacharyya bound than with just the union bound. There is a way, for certain codes, to use just the union bound for the first, say, N terms and the Bhattacharyya bound for the remaining terms, obtaining a tighter bound than the bound based on the generating function but without the infinite computation required for the pure union bound. (See the problems.) The above bounds are finite only for a certain range of the parameter D, depending on the specific code. However, for practical codes and reasonable signal-to-noise ratios or crossover probabilities the above bounds are finite. Rate k/n codes. Now consider a rate k/n convolutional code. The trellis for such a code has 2^k branches emerging from each state. We will consider the bit error probability for the r-th bit in each k-bit input sequence.
Let A_{i_1,...,i_k; d; l} be the number of paths through the trellis with output weight d, length l, and i_r input ones in the r-th input bit position (1 <= r <= k) of the sequence of k-bit inputs. The bit error probability for the r-th bit is then bounded by

P_b(r) <= sum_{d,l} sum_{(i_1,...,i_k)} i_r A_{i_1,...,i_k; d; l} P_2(d).

The average bit error probability is P_b = (1/k) sum_{r=1}^{k} P_b(r). Interchanging the sum over r with the sums over the path parameters, the inner quantity becomes the total input weight i_1 + ... + i_k of each path:

P_b <= (1/k) sum_{d,l} [ sum_{(i_1,...,i_k)} (i_1 + ... + i_k) A_{i_1,...,i_k; d; l} ] P_2(d).

Defining the total input weight of paths with output weight d as

w_d = sum_l sum_{(i_1,...,i_k)} (i_1 + ... + i_k) A_{i_1,...,i_k; d; l},

we can write this as P_b <= (1/k) sum_d w_d P_2(d).

Improved Union-Bhattacharyya Bound. With w_l = sum_i i A_{i,l} as above, we can upper bound the bit error probability by

P_b <= sum_l w_l P_2(l) <= sum_l w_l D^l.

The first bound is the union bound. It is impossible to exactly evaluate this bound because there are an infinite number of terms in the summation. Dropping all but the first N terms gives an approximation, but it may no longer be an upper bound. If the weight enumerator is known we can get arbitrarily close to the union bound and still have a true bound as follows:

P_b <= sum_{l=d_f}^{d_f+N-1} w_l [ P_2(l) - D^l ] + sum_{l=d_f}^{infinity} w_l D^l.

The second term is the Union-Bhattacharyya (U-B) bound, which can be computed from the generating function. The first term is clearly less than zero (since P_2(l) <= D^l), so we get something that is tighter than the U-B bound. By choosing N sufficiently large we can sometimes get significant improvements over the U-B bound. Below we show the error probability bounds and simulation for the constraint length 3 (memory 2) convolutional code. Note that the upper bound is fairly tight when the bit error probability is below about 10^-2. (Figure 98: Error probability of the constraint length 3 convolutional code on an additive white Gaussian noise channel with soft decision decoding: upper bound, simulation and lower bound; bit error rate versus Eb/N0.)
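This improved bound can be sketched numerically for the constraint length 3 code on a BSC. For that code w(y) = y^5/(1-2y)^2, so w_l = (l-4) 2^{l-5} and the full U-B sum has the closed form D^5/(1-2D)^2 (valid for D < 1/2):

```python
import math
from math import comb

def p2_bsc(d, p):
    """Pairwise error probability for hard decisions on a BSC."""
    if d % 2:
        return sum(comb(d, n) * p**n * (1 - p)**(d - n)
                   for n in range((d + 1) // 2, d + 1))
    return (0.5 * comb(d, d // 2) * (p * (1 - p))**(d // 2)
            + sum(comb(d, n) * p**n * (1 - p)**(d - n)
                  for n in range(d // 2 + 1, d + 1)))

def improved_ub(p, N=20):
    """Improved union-Bhattacharyya bound on P_b for the K=3 code."""
    D = 2 * math.sqrt(p * (1 - p))
    ub = D**5 / (1 - 2 * D)**2              # closed-form U-B tail
    corr = sum((l - 4) * 2**(l - 5) * (p2_bsc(l, p) - D**l)
               for l in range(5, 5 + N))    # negative correction terms
    return ub + corr
```

Since every correction term is nonpositive, `improved_ub(p)` always lies between zero and the plain U-B bound.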

Example: K=7, M=6, rate 1/2 code. This code is used in many commercial systems, including the IS-95 standard for digital cellular. This is also a NASA standard code.

(Figure 99: Error probability of convolutional codes on an additive white Gaussian noise channel with soft decision decoding: upper bound and simulation, bit error rate versus Eb/N0.)

(Figure 100: Upper bounds on bit error probability for the constraint length 7, rate 1/2 convolutional code: union bound, simulation and uncoded BPSK, with hard and soft decisions, bit error rate versus Eb/N0 in dB.)

(Figure 101: Error probability of the constraint length 7, rate 1/2 convolutional code on an additive white Gaussian noise channel with soft decision decoding: upper bound, simulation and uncoded.)

(Figure 102: Bit error probability (bound) for the constraint length 9, rate 1/2 convolutional code on an additive white Gaussian noise channel, hard and soft decisions, versus Eb/N0 in dB.)

Table: rate 1/2 maximum free distance codes, listing for each memory the generators in octal, d_free and A_{d_free}. (The table entries do not survive this transcription.)

Table: rate 1/3 maximum free distance codes, listing for each memory the generators in octal, d_free and A_{d_free}. (The table entries do not survive this transcription.)

Note that low rate convolutional codes do not perform any better than a rate 1/2 convolutional code concatenated with a repetition code. There are better approaches to achieving high distance at low rates. This usually involves concatenating a convolutional code with an orthogonal code, as described below (to be added).


Error Bounds for Convolutional Codes. The performance of convolutional codes can be upper bounded by

P_b <= sum_{l = d_free}^{infinity} w_l D^l,

where w_l is the average number of nonzero information bits on paths with Hamming distance l, and D is a parameter that depends only on the channel. Usually the summation in the upper bound is truncated to some finite number of terms. Example 1: Binary Symmetric Channel with crossover probability p: D = 2 sqrt(p(1-p)). Example 2: Additive White Gaussian Noise channel: D = e^{-E/N_0}. Standard codes: Example Convolutional Code 1: constraint length 7, memory 6, 64-state decoder, rate 1/2, with the upper bound P_b <= 36 D^10 + 211 D^12 + 1404 D^14 + .... There is a chip made by Qualcomm and Stanford Telecommunications operating at data rates on the order of megabits per second that will do the encoding and decoding. Example Convolutional Code 2: constraint length 9, memory 8, 256-state decoder, rate 1/2, with a bound of the same form whose leading coefficients follow from its weight spectrum. Example Convolutional Code 3: constraint length 9, memory 8, 256-state decoder, rate 1/3, similarly. Performance examples: generally hard decisions require about 2 dB more signal energy than soft decisions for the same bit error probability. Also, soft decisions are only about 0.25 dB better than 8-level quantization.

(Trellis diagrams: examples of decoded paths causing no bit error and paths causing a bit error.)

Codes for Multiamplitude Signals. In this section we consider coding for multiamplitude signals. The application of this is to bandwidth constrained channels. We can consider as a baseline a two dimensional modulation system transmitting 2400 symbols per second. If each symbol represents 4 bits of information then the data rate is 9600 bits per second. We would like to have more signals per dimension in order to increase the data rate. However, we must try to keep the signals as far apart from each other as possible (in order to keep the error rate low), so an increase of the size of the signal constellation for fixed minimum distance will increase the total signal energy transmitted. The codes (signal sets) constructed here are not linear in nature, so the application of linear block codes is not very productive. 32-ary signal sets: Consider the 32-ary QAM (cross) signal set shown below. The average energy is 20, the minimum distance is 2, and the rate is 2.5 bits/dimension. Clearly this is a nonlinear code in that the sum of two codewords is not a codeword. There are several possible ways to improve the performance of the constellation. First, one could add redundancy (e.g. use a binary code, make hard decisions, and use the code to correct errors). However, this improved performance is at the expense of a lower data rate (we must transmit the redundant bits). A second possible way of improving the performance (adding redundancy) is to increase the alphabet size of the signal set but then only allow certain subsets of all possible signal sequences. We will show that considerable gains are possible with this approach. So first we consider expanding the constellation size. 64-ary signal sets: Consider the 64-ary QAM signal set shown below, with points at odd-integer coordinates from -7 to 7. The average energy is 42.

Modified 64-QAM (used in a Paradyne 14.4 kbit/s modem). This has a slightly smaller average energy than the rectangular 64-QAM constellation. The following hexagonal constellation has smaller energy still, but each interior point now has 6 nearest neighbors compared to the four neighbors of the rectangular structures.

Consider now coding for 64-QAM (comparing it to an uncoded 32-QAM signal set). Consider the following block code. Divide the points in the constellation into two subsets called A and B as shown below.

A B A B A B A B
B A B A B A B A
A B A B A B A B
B A B A B A B A
A B A B A B A B
B A B A B A B A
A B A B A B A B
B A B A B A B A

The code is then described by vectors of length two where it is required that the components come from the same set: either two signals from subset A or two signals from subset B. The Euclidean distance is calculated as follows. Consider two codewords c_1 = (a_1, a_2) and c_2 = (a_1', a_2') where the a's are in A and (a_1, a_2) differs from (a_1', a_2'). The minimum squared distance between distinct points of subset A is 8 (diagonal neighbors), so d_E^2(c_1, c_2) >= 8. A similar calculation holds for points in subset B. Also consider c_1 = (a_1, a_2) and c_2 = (b_1, b_2) where the a's are in A and the b's are in B. The codewords differ in both components, each by a squared distance of at least 4, so d_E^2(c_1, c_2) >= 4 + 4 = 8. Thus the minimum squared distance between codewords is 8, twice the squared distance of the original signal set (whose minimum squared distance is 4). The original signal set transmitted 6 bits/2 dimensions, or 3 bits/dimension. The new signal set transmits 11 bits/4 dimensions (one bit selects the subset and 5 bits select each of the two points within the 32-point subset), or 2.75 bits/dimension. The original signal set has on the average 3.5 nearest neighbors per signal point (36 interior points with 4 neighbors, 24 edge points with 3, and 4 corner points with 2). We calculate the number of nearest neighbors for the code as follows. Consider the nearest neighbors of the codeword (a_1, a_2), where a_1 is an interior point of the constellation and is in subset A. A nearest neighbor within the subset is of the form (a_1', a_2), and there are four choices for a_1'; this is the same as in the original constellation. Now consider a_1 to be one of the exterior points (but not a corner point): there are only two nearest neighbors within the subset (as opposed to three in the original constellation). Corner points have only one nearest neighbor. Averaging these counts over the constellation gives the average number of nearest neighbors of the code. Thus we have gained a factor of two in squared Euclidean distance compared to 64-QAM and have reduced the average number of nearest neighbors. Consider now further dividing the constellation.
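The constellation figures quoted above are easy to verify numerically. A sketch for the 64-QAM grid at odd-integer coordinates, with A the checkerboard subset where (i + q) is divisible by 4:

```python
vals = (-7, -5, -3, -1, 1, 3, 5, 7)
points = [(i, q) for i in vals for q in vals]
A = [p for p in points if (p[0] + p[1]) % 4 == 0]
B = [p for p in points if p not in A]

def d2(p, q):
    """Squared Euclidean distance between two constellation points."""
    return (p[0] - q[0])**2 + (p[1] - q[1])**2

avg_energy  = sum(i*i + q*q for i, q in points) / len(points)      # 42
d2_min      = min(d2(p, q) for p in points for q in points if p != q)
d2_within_A = min(d2(p, q) for p in A for q in A if p != q)        # 8
d2_cross    = min(d2(a, b) for a in A for b in B)                  # 4
```

The within-subset squared distance of 8 against the uncoded minimum of 4 is exactly the factor-of-two gain claimed for the length-two block code.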

A B A B A B A B
C D C D C D C D
A B A B A B A B
C D C D C D C D
A B A B A B A B
C D C D C D C D
A B A B A B A B
C D C D C D C D

The minimum distance between points in subset A is now 4 (or a minimum distance squared of 16). A block code for this signal partition consists of codewords of the form

(A,A,A,A) (B,B,B,B)
(A,A,D,D) (B,B,C,C)
(A,D,A,D) (B,C,B,C)
(A,D,D,A) (B,C,C,B)
(D,A,A,D) (C,B,B,C)
(D,A,D,A) (C,B,C,B)
(D,D,A,A) (C,C,B,B)
(D,D,D,D) (C,C,C,C)

That is, the components are either all from the sets A and D or all from the sets B and C, and the number of times any set appears is even. The minimum distance of this code/modulation is determined as follows. Two codewords of the form (A,A,A,A) differing in exactly one position have squared Euclidean distance 16. Two codewords of the form (A,A,A,A) and (A,A,D,D) have squared Euclidean distance 8 + 8 = 16. Two codewords of the form (A,A,A,A) and (B,B,B,B) have squared Euclidean distance 4 + 4 + 4 + 4 = 16. Thus it is easy to verify that the minimum squared Euclidean distance of this code is 16, or 4 times larger than that of 64-QAM. The number of bits per dimension is calculated as follows: 4 bits determine a codeword (one of the 16 subset patterns above), and since each subset contains 16 points, 4 bits determine which point of a subset to take in each of the four positions. Thus we have 4 + 16 = 20 bits in 8 dimensions, or a rate of 2.5 bits per dimension. We can compare this to a 32-QAM system, which also has 2.5 bits/dimension. The minimum squared distance of 32-QAM is 4 and its signal power is 20 (compared to 42 for 64-QAM). Thus we have increased the signal power by a factor of 42/20 = 2.1 but have increased the squared Euclidean distance by a factor of 4. The net coding gain is 4/2.1, or about 2.8 dB. (Can you calculate the number of nearest neighbors?) Thus when comparing a coded system with a certain constellation and an uncoded system with some other constellation, the coding gain is defined as

Coding Gain = ( d_{E,c}^2 / P_c ) / ( d_{E,u}^2 / P_u ).

Trellis Codes for 8-PSK. Suppose we want to transmit 2 bits per 2 dimensions (one I-Q symbol). This is easy with 4-PSK: the modulation uses one of 4 signals at four different phases.
The constellation is shown below. In the coding gain defined above, P_c (P_u) is the power (or energy) of the coded (uncoded) signal set and d_{E,c} (d_{E,u}) is the corresponding minimum Euclidean distance. The probability that we confuse signal s_i with signal s_j is Q(d / (2 sigma)), where d is the distance between the two signals and sigma^2 = N_0/2.

8-PSK Constellation: eight points spaced at multiples of pi/4 around the circle; the minimum distance of uncoded 8-PSK is 2 sin(pi/8). One way to improve the performance is to use some sort of coding: add redundant bits and use the distance of the code to protect against errors. However, for a fixed bandwidth (modulation symbol duration), adding an error control code will decrease the information rate. We would like to keep the information rate constant but decrease the required energy. Suppose we add more signal points but then only allow certain sequences of points to be transmitted. So, for example, consider doubling the number of points to 8 from 4 and then using a trellis to decide which points to transmit, as shown below. 8-PSK Distance: The minimum distance can be calculated as follows. Clearly the distance can be no more than the distance between two identical nodes via parallel paths, and the distance between two identical nodes on parallel paths is always 2 (antipodal signals on the unit circle). The distance between two paths that diverge at some point and then remerge later, as shown in the previous figures, is calculated as d = sqrt(2 + 0.586 + 2), which exceeds 2. Thus the minimum distance is 2. This is a factor of sqrt(2) larger than the case of 4-PSK, but we have transmission at the same information rate. This is essentially a 3 dB gain (reduction of energy) at the same information rate but with higher receiver complexity.
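The distance bookkeeping above can be checked numerically. A sketch with unit-energy 8-PSK points; the particular diverge/remerge branch distances (2, 0.586, 2) follow the standard four-state Ungerboeck assignment, which is assumed to match the notes' trellis:

```python
import cmath
import math

def pt(k):
    """8-PSK point k = 0..7 on the unit circle."""
    return cmath.exp(1j * math.pi * k / 4)

def d2(a, b):
    """Squared Euclidean distance between 8-PSK points a and b."""
    return abs(pt(a) - pt(b)) ** 2

d2_qpsk     = d2(0, 2)                 # 4-PSK neighbor distance^2 = 2
d2_parallel = d2(0, 4)                 # antipodal parallel branches = 4
d2_8psk_min = d2(0, 1)                 # 4 sin^2(pi/8) ~ 0.586
d2_detour   = d2(0, 2) + d2(0, 1) + d2(0, 2)   # diverge/remerge ~ 4.586
gain_db = 10 * math.log10(min(d2_parallel, d2_detour) / d2_qpsk)
```

`gain_db` comes out to about 3 dB, the asymptotic coding gain over 4-PSK quoted above.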

Minimum Bit Error Probability Decoding for Convolutional Codes. Previously we derived the optimal decoder for minimizing the codeword error probability of a convolutional code, that is, minimizing the probability that the decoder chooses the wrong information sequence. In this section we derive an algorithm for minimizing the probability of bit error. Consider a finite state Markov chain with state space {0, 1, ..., N-1}. Let x_k be the sequence of random variables representing the state at time k. Let x_0 = 0 be the initial state of the process and let x_J = 0 be the final state. Since this is a Markov process we have p(x_{k+1} | x_k, ..., x_0) = p(x_{k+1} | x_k). Let w_k = (x_k, x_{k+1}) be the state transition at time k. There is a one-to-one correspondence between state sequences and transition sequences: the two sequences contain the same information. Let z_k^l = (z_k, ..., z_l) denote a subsequence. By some mechanism (e.g. a noisy channel) a noisy version z_0^{J-1} of the state transition sequence is observed. Based on this noisy version we wish to determine the probabilities

p(x_k = m | z_0^{J-1}) and p(w_k = (m', m) | z_0^{J-1}).

These two quantities can be calculated, by appropriate normalization, from

lambda_k(m) = p(x_k = m, z_0^{J-1}) and sigma_k(m', m) = p(x_k = m', x_{k+1} = m, z_0^{J-1}).

Now define

alpha_k(m) = p(x_k = m, z_0^{k-1}),
beta_k(m) = p(z_k^{J-1} | x_k = m),
gamma_k(m', m) = p(x_{k+1} = m, z_k | x_k = m').

We can calculate lambda as lambda_k(m) = alpha_k(m) beta_k(m), and sigma as sigma_k(m', m) = alpha_k(m') gamma_k(m', m) beta_{k+1}(m). We now develop recursions for alpha and beta. For alpha we have the forward recursion

alpha_{k+1}(m) = sum_{m'=0}^{N-1} alpha_k(m') gamma_k(m', m),

with boundary conditions alpha_0(0) = 1 and alpha_0(m) = 0 for m not equal to 0. Here we are assuming the Markov chain starts in state 0 and ends in state 0 at time J (x_J = 0).

The recursion for β_k is given as follows:

β_k(m) = Σ_{m'=0}^{M-1} β_{k+1}(m') γ_{k+1}(m, m').

The boundary conditions are β_J(0) = 1 and β_J(m) = 0 for m ≠ 0.

Finally we can calculate γ_k as follows:

γ_k(m', m) = Σ_c p(z_k | c) q(c | m', m) p(x_k = m | x_{k-1} = m').

The first term is the transition probability of the channel. The second term selects the output of the encoder when transitioning from state m' to state m. The last term is the probability of going to state m from state m'; this will be either a nonzero constant (e.g. 1/2) or zero.

The algorithm works as follows. First initialize α_0 and β_J. After receiving the vector z_1^J, perform the recursions on α and β. Then combine α and β to determine λ and σ. Normalize to determine the desired probabilities.

Now consider a convolutional code which is used to transmit information. The input sequence to the encoder is u_1, ..., u_J, where zeros have been appended to the input sequence to force the encoder to the all-zero state at time J. We wish to determine the minimum bit error probability decision rule for bit u_k. The input sequence determines a state transition sequence x_0^J. The state sequence determines the output code symbols c_1, ..., c_N. The output symbols are modulated and transmitted, and a decision statistic is derived for each coded symbol via a channel with transition probability p(z | c). Based on observing z_1^J we wish to determine the optimum rule for deciding between u_k = 1 and u_k = 0. The optimal decision rule is to compute the log-likelihood ratio and compare it to zero:

log [ p(u_k = 1 | z_1^J) / p(u_k = 0 | z_1^J) ] = log [ Σ_{(m',m): u_k = 1} σ_k(m', m) / Σ_{(m',m): u_k = 0} σ_k(m', m) ],

where the sums run over the state transitions (m', m) caused by u_k = 1 and by u_k = 0, respectively.

Turbo Codes

Figure: Turbo Code Encoder (the information bits feed one recursive systematic convolutional (RSC) encoder directly and a second RSC encoder through an interleaver).
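The complete forward and backward recursions can be sketched in a few lines. This is an illustrative implementation, not the notes' own code: it assumes the γ_k values have already been computed and are supplied as a list of M x M matrices (entry (m', m) holding γ_k(m', m)), and each pass is normalized purely for numerical stability, which cancels when σ_k is renormalized at the end:

```python
def bcjr_posteriors(gammas):
    """Normalized transition posteriors sigma_k(m', m) for k = 1..J, given
    gamma matrices (lists of lists); the chain starts and ends in state 0."""
    J, M = len(gammas), len(gammas[0])
    alpha = [[0.0] * M for _ in range(J + 1)]
    beta = [[0.0] * M for _ in range(J + 1)]
    alpha[0][0] = 1.0                      # boundary condition: x_0 = 0
    beta[J][0] = 1.0                       # boundary condition: x_J = 0
    for k in range(1, J + 1):              # forward recursion for alpha
        for m in range(M):
            alpha[k][m] = sum(alpha[k-1][mp] * gammas[k-1][mp][m]
                              for mp in range(M))
        t = sum(alpha[k])                  # normalize for numerical stability
        alpha[k] = [a / t for a in alpha[k]]
    for k in range(J - 1, -1, -1):         # backward recursion for beta
        for m in range(M):
            beta[k][m] = sum(beta[k+1][mp] * gammas[k][m][mp]
                             for mp in range(M))
        t = sum(beta[k])
        beta[k] = [b / t for b in beta[k]]
    sigmas = []
    for k in range(J):   # sigma_k(m', m) = alpha_{k-1}(m') gamma_k(m', m) beta_k(m)
        s = [[alpha[k][mp] * gammas[k][mp][m] * beta[k+1][m] for m in range(M)]
             for mp in range(M)]
        t = sum(sum(row) for row in s)
        sigmas.append([[v / t for v in row] for row in s])
    return sigmas

# Toy 2-state example with arbitrary illustrative gamma values.
g = [[[0.6, 0.3], [0.1, 0.5]], [[0.4, 0.0], [0.7, 0.0]]]
post = bcjr_posteriors(g)
```

The bit LLR is then obtained by summing each entry of post[k-1] into the u_k = 1 or u_k = 0 bucket according to which input bit drives that transition, and taking the log of the ratio.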

Figure: Recursive Systematic Encoder.

Figure 5: Decoding Architecture (two component decoders exchange information through an interleaver and a deinterleaver).



Handout 7. and Pr [M(x) = χ L (x) M(x) =? ] = 1. Notes on Coplexity Theory Last updated: October, 2005 Jonathan Katz Handout 7 1 More on Randoized Coplexity Classes Reinder: so far we have seen RP,coRP, and BPP. We introduce two ore tie-bounded randoized

More information

A Generalized Permanent Estimator and its Application in Computing Multi- Homogeneous Bézout Number

A Generalized Permanent Estimator and its Application in Computing Multi- Homogeneous Bézout Number Research Journal of Applied Sciences, Engineering and Technology 4(23): 5206-52, 202 ISSN: 2040-7467 Maxwell Scientific Organization, 202 Subitted: April 25, 202 Accepted: May 3, 202 Published: Deceber

More information

A Simple Regression Problem

A Simple Regression Problem A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where

More information

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words)

A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words) 1 A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine (1900 words) Contact: Jerry Farlow Dept of Matheatics Univeristy of Maine Orono, ME 04469 Tel (07) 866-3540 Eail: farlow@ath.uaine.edu

More information

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials

Fast Montgomery-like Square Root Computation over GF(2 m ) for All Trinomials Fast Montgoery-like Square Root Coputation over GF( ) for All Trinoials Yin Li a, Yu Zhang a, a Departent of Coputer Science and Technology, Xinyang Noral University, Henan, P.R.China Abstract This letter

More information

A Note on the Applied Use of MDL Approximations

A Note on the Applied Use of MDL Approximations A Note on the Applied Use of MDL Approxiations Daniel J. Navarro Departent of Psychology Ohio State University Abstract An applied proble is discussed in which two nested psychological odels of retention

More information

Error Exponents in Asynchronous Communication

Error Exponents in Asynchronous Communication IEEE International Syposiu on Inforation Theory Proceedings Error Exponents in Asynchronous Counication Da Wang EECS Dept., MIT Cabridge, MA, USA Eail: dawang@it.edu Venkat Chandar Lincoln Laboratory,

More information

A remark on a success rate model for DPA and CPA

A remark on a success rate model for DPA and CPA A reark on a success rate odel for DPA and CPA A. Wieers, BSI Version 0.5 andreas.wieers@bsi.bund.de Septeber 5, 2018 Abstract The success rate is the ost coon evaluation etric for easuring the perforance

More information

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

Figure 1: Equivalent electric (RC) circuit of a neurons membrane

Figure 1: Equivalent electric (RC) circuit of a neurons membrane Exercise: Leaky integrate and fire odel of neural spike generation This exercise investigates a siplified odel of how neurons spike in response to current inputs, one of the ost fundaental properties of

More information