Generative Model for Burst Error Characterization in Narrowband Indoor Powerline Channel


Emilio Balda, Xabier Insausti, Josu Bilbao and Pedro M. Crespo

Abstract: This paper presents a generative model for burst characterization of the underlying error profiles obtained from the narrowband indoor powerline channel. Using error sequences measured from the transmission link, a generative model that produces error sequences of any length, with similar relevant statistics, is obtained.

I. INTRODUCTION

Powerline networks present an interesting alternative for no-new-wires scenarios where electrical power distribution wiring is already available [1]. Unlike other communication channels, the powerline channel (PLC) cannot be modeled as an additive white Gaussian noise channel [2], [4], [5], [6]. The types of noise within this channel include periodic impulsive noise generated by the zero crossings at 50/60 Hz and asynchronous impulsive noise due to switching transients in the network [2], [6]. This noise has a time-varying behavior that produces noise bursts, on a scale from µs to ms, with significant implications for powerline communications. From recorded error sequences, generative models capable of generating binary error sequences with a similar statistical distribution can be developed. These models allow studies of the channel to be performed without having the channel available. Obtaining such models involves two steps [3]: 1) guessing a model with enough degrees of freedom to be able to resemble the channel behavior, and 2) developing methods to parametrize the model from the recorded error sequences. A generative model provides a method for generating long error sequences with a reduced computational load when compared with the standard method of obtaining them by computer simulation of the overall communication link [3].
This advantage is of great importance for testing error-correcting codes on video and/or audio applications transmitted through the channel. Since long transmitted sequences are required for a good quality of the received signal, a generative model substantially reduces the simulation time for the performance evaluation of different channel coding schemes. The model should reproduce the statistics that are relevant for the development of channel coding schemes. In this paper we design a generative model, called hidden Markov generative model (HGM), for the narrowband indoor powerline channel. It is particularly suitable for the characterization of channels with error bursts and is based on hidden Markov models (HMMs). This HGM is inspired by the one designed in [3] for the indoor radio channel. Finally, we use error profiles measured in [1] to train our model.

II. PREVIOUS DEFINITIONS

An HMM is an extension of the classical Markov model in the sense that the observation symbols generated are probabilistic functions of the current state rather than deterministic ones¹. This means that the resulting HMM is a doubly embedded stochastic process with an underlying stochastic process that is not observable [3]. An HMM is characterized by the following parameters:

1) K, the number of states of the model. We denote the set of all states as S = {S_1, S_2, ..., S_K}. Let q_t ∈ S be a random variable that indicates the state of the HMM at time t ∈ N.

2) D, the number of distinct observation symbols that the model can generate. The set of all symbols is denoted by V = {v_1, v_2, ..., v_D}. Let r_t ∈ V be a random variable that indicates the observation symbol generated at time t ∈ N.

3) Let a_ij, 1 ≤ i, j ≤ K, be the probability of transition from state S_i to state S_j, that is,

a_ij ≜ Pr(q_{t+1} = S_j | q_t = S_i), 1 ≤ i, j ≤ K. (1)

We define the state transition probability matrix A ∈ R^{K×K} as the matrix whose element at row i and column j is a_ij.
4) Let b_j(k), 1 ≤ j ≤ K, 1 ≤ k ≤ D, be the probability of generating the observation symbol v_k in state S_j, that is,

b_j(k) ≜ Pr(r_t = v_k | q_t = S_j), 1 ≤ j ≤ K, 1 ≤ k ≤ D. (2)

We define the observation symbol probability distribution matrix B ∈ R^{K×D} as the matrix whose element at row j and column k is b_j(k).

5) Let π_i be the probability that the initial state is S_i, that is,

π_i ≜ Pr(q_1 = S_i), 1 ≤ i ≤ K. (3)

We define the initial state distribution vector π ∈ R^{1×K} as the row vector whose element at column i is π_i. In the sequel, we will denote an HMM by the triple λ = (A, B, π).

¹ In the classical Markov model, each state is associated with a single observation symbol.
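As an illustration of these definitions, the following sketch samples an observation sequence from a toy λ = (A, B, π). The numerical values are illustrative only; they are not parameters estimated from the powerline channel.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hmm(A, B, pi, n_steps):
    """Generate n_steps observation symbols from the HMM lambda = (A, B, pi).

    A  : (K, K) state transition probability matrix, rows sum to 1
    B  : (K, D) observation symbol probability matrix, rows sum to 1
    pi : (K,)   initial state distribution
    Returns the indices k of the emitted symbols v_k.
    """
    K, D = B.shape
    obs = np.empty(n_steps, dtype=int)
    q = rng.choice(K, p=pi)             # q_1 ~ pi
    for t in range(n_steps):
        obs[t] = rng.choice(D, p=B[q])  # r_t depends on the hidden state q_t
        q = rng.choice(K, p=A[q])       # q_{t+1} ~ a_{q_t, .}
    return obs

# Toy two-state model with binary observation symbols (0 = no error, 1 = error).
A  = np.array([[0.9, 0.1],
               [0.3, 0.7]])
B  = np.array([[0.99, 0.01],
               [0.40, 0.60]])
pi = np.array([1.0, 0.0])
seq = sample_hmm(A, B, pi, 20)
```

The doubly embedded structure is visible in the loop: the state sequence q_t evolves according to A, but only the symbols r_t drawn from B are observed.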

A. Burst Definition

A sequence of n bits a = (a_1, ..., a_n) ∈ {0,1}^n is sent through the PL channel, and another sequence a' = (a'_1, ..., a'_n) ∈ {0,1}^n is received at the output. Let the error profile sequence x = (x_1, ..., x_n) ∈ {0,1}^n of the channel be

x = a ⊕ a' = (a_1 ⊕ a'_1, ..., a_n ⊕ a'_n), (4)

where ⊕ denotes mod-2 addition (i.e., the xor operator)². Let T ∈ N be a parameter. For the sake of notation, we will write [x]_p^q ≜ (x_p, ..., x_q), 1 ≤ p < q ≤ n. A burst is defined as a sub-sequence of x in which there is at least one error every T bits. Let b_k^(T) be the k-th burst of the sequence x, defined as

b_k^(T) ≜ [x]_{i_k^(T)}^{f_k^(T)}, (5)

where

i_1^(T) = argmin_{i ∈ N : i ≥ 1} { [x]_i = 1, [x]_{i+1}^{i+T-1} ≠ 0 }, (6)

f_1^(T) = argmin_{i ∈ N : i > i_1^(T)} { [x]_i = 1, [x]_{i+1}^{i+T-1} = 0 }, (7)

i_k^(T) = argmin_{i ∈ N : i > f_{k-1}^(T)} { [x]_i = 1, [x]_{i+1}^{i+T-1} ≠ 0 }, k > 1, (8)

f_k^(T) = argmin_{i ∈ N : i > i_k^(T)} { [x]_i = 1, [x]_{i+1}^{i+T-1} = 0 }, k > 1. (9)

That is, a burst starts at an error followed by another error within the next T − 1 bits, and ends at the last error before T − 1 consecutive error-free bits. Consequently, let

{b_k^(T) : 1 ≤ k ≤ N} (10)

be the set of all the bursts within x. In Figure 1 we can see what the k-th burst within x looks like.

B. Partition of the Burst Set

Let

D_k^(T) ≜ f_k^(T) − i_k^(T) + 1 (13)

be the length in bits of the burst b_k^(T), and let L ∈ N be a parameter. For each b_k^(T), 1 ≤ k ≤ N, we define a sequence

c_k^(L,T) ≜ (c_{k,1}, ..., c_{k,ℓ_k}), (14)

whose length is

ℓ_k ≜ ⌈D_k^(T)/L⌉, (15)

and whose elements are

c_{k,l} ≜ Σ_{i=L(l−1)+1}^{L·l} [b_k^(T)]_i, 1 ≤ l ≤ ℓ_k, (16)

where the notation [b_k^(T)]_i stands for the value at position i of the vector b_k^(T)³. In words, c_k^(L,T) indicates the number of errors in each block of length L of the burst b_k^(T), as shown in Figure 2. This sequence is also known as the compact format version of b_k^(T) [3]. Let C ≜ {c_k^(L,T) : 1 ≤ k ≤ N} be the set of all the compact format bursts. Now we characterize each burst in compact format by its peak number of errors PNE_k ∈ N, that is,

PNE_k ≜ max_{l=1,...,ℓ_k} c_{k,l}, 1 ≤ k ≤ N. (17)

Note that the PNE is always equal to or lower than L.

Fig. 1. The k-th burst within x for a selected T.

Now, let z_q^(T) be the q-th non-burst sequence, defined as

z_q^(T) ≜ [x]_{f_q^(T)+1}^{i_{q+1}^(T)−1}, 1 ≤ q ≤ N − 1, (11)

where we discard the bits before the first burst and after the last one.
The set of non-burst sequences will be denoted by A ≜ {z_q^(T) : 1 ≤ q ≤ N − 1}. Consequently, the length in bits of a non-burst sequence z_q^(T) is defined as

Λ_q = i_{q+1}^(T) − f_q^(T) − 1. (12)

² In other words, when x_i is equal to one an error has occurred, and when x_i = 0 it has not.

Fig. 2. The compact format version c_k^(L,T) of the burst b_k^(T). In this example PNE_k = 3.

The set C will be partitioned into M + 1 sets Z, W_1, W_2, ..., W_M, defined as

Z ≜ { c_k^(L,T) ∈ C : ℓ_k < l_min^(0) } (18)

and

W_i ≜ { c_k^(L,T) ∈ C : ℓ_k ≥ l_min^(0), ξ(i−1) < PNE_k ≤ ξ·i }, (19)

where ξ ≜ L/M and l_min^(0) ∈ N is a parameter.

³ We assume that b_k^(T) is zero-padded up to a length that is a multiple of L.
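Under the burst definition above, extracting the (i_k^(T), f_k^(T)) pairs and the compact format amounts to grouping error positions that are fewer than T bits apart. The following sketch assumes this reading of (5)-(9) and (14)-(16); note that it treats an isolated error as a length-one burst, a simplification of the exact start condition in (6):

```python
import numpy as np

def find_bursts(x, T):
    """Return (i_k, f_k) index pairs of the bursts in the error profile x:
    maximal groups of errors in which consecutive errors are fewer than
    T bits apart."""
    err = np.flatnonzero(x)            # positions of the errors in x
    bursts = []
    if err.size == 0:
        return bursts
    start = err[0]
    for prev, cur in zip(err, err[1:]):
        if cur - prev >= T:            # T-1 or more error-free bits: burst ends
            bursts.append((int(start), int(prev)))
            start = cur
    bursts.append((int(start), int(err[-1])))
    return bursts

def compact_format(x, i, f, L):
    """c_k^(L,T): number of errors in each length-L block of the burst
    x[i..f], zero-padded up to a multiple of L."""
    b = x[i:f + 1]
    pad = (-len(b)) % L
    b = np.concatenate([b, np.zeros(pad, dtype=int)])
    return b.reshape(-1, L).sum(axis=1)

x = np.array([0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0])
print(find_bursts(x, T=4))             # → [(1, 4), (10, 13)]
i, f = find_bursts(x, T=4)[0]
c = compact_format(x, i, f, L=2)       # compact format of the first burst
print(c.tolist(), int(c.max()))        # → [2, 1] 2
```

The last value printed is the PNE of the first burst, i.e., the maximum of its compact format symbols as in (17).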

III. MODELING THE ERROR PROFILES WITH THE PROPOSED HIDDEN GENERATIVE MODEL

The HGM designed in this paper is inspired by [3] and consists of concatenating sub-models of different behaviors of the channel into one Markov-like global model. This global model will have different states and state transition probabilities, and each state will contain a sub-model that generates a sequence of bits emulating the error profile of the corresponding set. In Figure 3(a) we can see what the HGM looks like. The behavior of the bursts within the sets W_1, W_2, ..., W_M will be modeled in Section III-A using HMMs, and those sets will be called burst classes (class 1 to class M). Furthermore, the behavior of the bursts in the set Z will be modeled in Section III-B, and the behavior of the non-burst sequences in the set A will be modeled in Section III-C. Finally, the transition probabilities of the HGM P_{A,1}, P_{A,2}, ..., P_{A,M}, P_{A,Z} shown in Figure 3(a) are computed as

P_{A,m} = |W_m| / |C|, 1 ≤ m ≤ M,

P_{A,Z} = |Z| / |C|,

where |·| denotes the cardinality of a set.

Fig. 3. (a) Concatenation of the different HMM sub-models. (b) Example of a left-right HMM with five states and maximum step size d of two.

A. Modeling the behavior of bursts within the classes W_1, ..., W_M

A class W_m, 1 ≤ m ≤ M, will be characterized by the following statistics:

1) PNE_m, the mean PNE value, that is,

PNE_m ≜ (1/|W_m|) Σ_{k : c_k^(L,T) ∈ W_m} PNE_k. (20)

2) D_m, the mean burst length, that is,

D_m ≜ (1/|W_m|) Σ_{k : c_k^(L,T) ∈ W_m} D_k^(T). (21)

3) E_m, the mean number of errors inside a burst, that is,

E_m ≜ (1/|W_m|) Σ_{k : c_k^(L,T) ∈ W_m} Σ_{l=1}^{ℓ_k} c_{k,l}. (22)

Let a left-right HMM with maximum step size d ∈ N be an HMM λ = (A, B, π) where:

1) The state transition probability matrix A ∈ R^{K×K} is upper triangular, that is, a_ij = 0 for j < i.

2) Sequences have finite length. The symbol generated from the last state S_K is the last symbol of the sequence. In other words, a left-right model continues generating symbols until the last state is reached.

3) d ∈ N is the maximum step size of the left-right model, i.e.,
transitions from the current state S_i reach at most d units forward, S_{i+d}; that is, a_ij = 0 for j > i + d. An example of a left-right HMM is depicted in Figure 3(b).

To model the behavior of the bursts within the class W_m (1 ≤ m ≤ M) we will use left-right HMMs. We denote by λ_m = (A_m, B_m, π_m) the left-right HMM corresponding to the class W_m. The first initial state distribution vector π_m that comes to mind for a model λ_m is π_m = (1, 0, 0, ..., 0), as in [3]. Nevertheless, in order to increase the degrees of freedom of the model we will choose a π_m identical to the first row of the state transition probability matrix A_m of the corresponding sub-model⁴. Now the first state of the model can be any of the first d_m + 1 states, d_m being the maximum step size of the model λ_m. Let l_min^(m) be the minimum burst length present within the class W_m, that is,

l_min^(m) ≜ min { ℓ_k : c_k^(L,T) ∈ W_m }, 1 ≤ m ≤ M. (23)

A model λ_m corresponding to the class W_m should: 1) be able to generate bursts with length equal to or higher than l_min^(m), and 2) not be able to generate bursts with length lower than l_min^(m). The possible number of states (given a d_m value) for this model λ_m depends on these two restrictions. Let N_m be the number of states of the model λ_m. If N_m is too large, the model λ_m will not fulfill the first condition. Likewise, if N_m is too small, the model λ_m will not fulfill the second condition. Therefore, N_m should lie between two finite values:

2 + d_m (l_min^(m) − 1) ≤ N_m ≤ 1 + (l_min^(m) + 1) d_m. (24)

To obtain the values of A_m and B_m of λ_m we will train this model using the well-known Baum-Welch algorithm [reference]. This algorithm needs an initial estimated model λ̄_m = (Ā_m, B̄_m) and the training burst sequences

⁴ Therefore, the sub-model λ_m is fully characterized as λ_m = (A_m, B_m).
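The left-right structure (a banded upper-triangular A_m, with π_m taken as its first row) can be sketched as follows; the sizes are illustrative, and this is a random initialization, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

def left_right_A(K, d):
    """Random left-right transition matrix: a_ij = 0 for j < i (upper
    triangular) and for j > i + d (maximum step size d)."""
    A = np.zeros((K, K))
    for i in range(K):
        hi = min(i + d, K - 1)
        A[i, i:hi + 1] = rng.random(hi - i + 1)
        A[i] /= A[i].sum()             # normalize each row to a distribution
    return A

A_m = left_right_A(K=6, d=2)
pi_m = A_m[0].copy()                   # initial distribution = first row of A_m
assert np.allclose(np.triu(A_m), A_m)  # no backward transitions
assert np.allclose(A_m.sum(axis=1), 1.0)
```

With π_m equal to the first row of A_m, the chain effectively starts anywhere among the first d + 1 states, exactly as described above.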

W_m to generate a model λ_m iteratively. To obtain the sought model λ_m we design the following algorithm:

Algorithm 1:

1) Compute l_min^(m) corresponding to the class W_m according to (23).

2) Assign N_m = 2 + d_m (l_min^(m) − 1).

3) Generate an initial estimated model λ̄_m = (Ā_m, B̄_m) where the rows of B̄_m follow a uniform distribution and Ā_m follows the rules of a left-right HMM. The non-zero values within the rows of Ā_m also follow a uniform distribution.

4) Use the Baum-Welch algorithm with input parameter λ̄_m to obtain a left-right HMM λ_m.

5) Using λ_m, generate a large number R of bursts ĉ_r^(L,T), 1 ≤ r ≤ R. Let Ŵ_m ≜ {ĉ_r^(L,T) : 1 ≤ r ≤ R} be the set of these generated bursts. Compute the corresponding parameters P̂NE_m, D̂_m, and Ê_m of Ŵ_m as in (20), (21), and (22), respectively.

6) For a fixed relative error ɛ > 0, we say that the left-right HMM λ_m is good enough if all of the following hold:

|P̂NE_m − PNE_m| < ɛ PNE_m, |D̂_m − D_m| < ɛ D_m, |Ê_m − E_m| < ɛ E_m. (25)

7) If the left-right HMM is not good enough, then: if N_m < 1 + (l_min^(m) + 1) d_m, increase N_m by one and go to 3); if N_m = 1 + (l_min^(m) + 1) d_m, the class W_m is partitioned into two sub-classes⁵. Therefore, increase the total number of classes M by one and go to 1).

Remark 1: In the event that Algorithm 1 does not converge, the allowed relative error ɛ should be increased.

B. Modeling the behavior of the bursts within Z

Let l(x), x ∈ N, be the distribution of the lengths defined in (13) associated with the bursts in Z, that is,

l(x) ≜ |{k : c_k^(L,T) ∈ Z, D_k^(T) = x}| / |Z|. (26)

Let q(x), 1 ≤ x ≤ L, be the distribution of the observation symbols c_{k,l} associated with the bursts c_k^(L,T) = (c_{k,1}, ..., c_{k,ℓ_k}) ∈ Z, that is,

q(x) ≜ |{(k, l) : c_k^(L,T) ∈ Z, c_{k,l} = x}| / |{(k, l) : c_k^(L,T) ∈ Z}|. (27)

The sub-model corresponding to the set Z will be characterized by these two distributions. Therefore, this model generates

⁵ The bursts of the former class W_m with length lower than D_m will form one sub-class, and the remaining ones will form the other.
length-l bursts, where l is distributed according to l(x), and each burst has q errors randomly situated in every L bits, where q is distributed according to q(x).

C. Modeling the behavior of the non-burst sequences within A

The first parameter of this model is the statistical distribution r(x), x ∈ N, of the lengths Λ_q of the non-burst sequences, defined in (12), that is,

r(x) ≜ |{q : z_q^(T) ∈ A, Λ_q = x}| / |A|. (28)

In the model corresponding to the set A we will emulate the periodic impulsive noise generated by the zero crossings of the power line at frequency f_c (f_c is either 50 or 60 Hz). Let T_s be the bit rate of the transmission link in bits/sec. Since there are two zero crossings every 1/f_c seconds, the model should introduce periodic impulsive noise every T_s/(2 f_c) bits. Suppose that the training error profile sequence x exhibits such a periodic impulsive error with probability P_s. Therefore, our model will introduce an error every T_s/(2 f_c) bits with probability P_s.

D. Selection Criteria for the Parameters T and L

To select a proper value for L we must bear in mind that the compact format bursts should reflect how the error density varies within the burst. If the chosen L is too small, most of the symbols inside the compact format bursts will take the maximum value (i.e., L). This produces a saturation effect that destroys the ability of c_k^(L,T) to represent the density behavior. On the other hand, if L is too large, the compact format bursts will have short lengths that will not represent the density behavior either. A large L would also imply a loss in the model's accuracy due to the back conversion from compact format symbols to bits (recall that the errors inside a generated block of length L are randomly situated). Since a burst is defined as having at least one error every T bits, it seems reasonable for L to have a smaller value than T. This is because, on average, every block of L bits will then have at least one error.
On the other hand, since we do not want to include the noise generated by the zero crossings at f_c Hz inside the bursts, the value of T should satisfy

T < T_s / (2 f_c). (29)

Knowing this bound, we can iterate over the selection of T or guess a reasonable value.

IV. SIMULATION RESULTS

The error profile sequence x of length n = bits used to train the HGM was measured in [1], where the transmission was done at a bit rate T_s = 4800 bits/sec in a powerline with zero crossing frequency f_c = 50 Hz⁶.

⁶ Therefore, as shown in (29), T should be lower than 48 bits.
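For the measured setup, the bound (29) works out as follows:

```python
# Measurement setup from [1]: bit rate and mains zero-crossing frequency.
Ts = 4800                        # bit rate of the transmission link, bits/sec
fc = 50                          # zero-crossing frequency, Hz

bits_between_crossings = Ts / (2 * fc)   # T_s / (2 f_c)
print(bits_between_crossings)            # → 48.0, so T must satisfy T < 48
```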

A. Parameters Selection

The maximum step size of all the left-right HMMs considered in our HGM will be d_m = 3 for 1 ≤ m ≤ M (as in [3]) to avoid increasing the complexity of the model. We will choose the largest T such that the length in bits of the longest burst does not exceed 48 bits. In our case this leads to T = 25. Iterating over different values of L, the best results are obtained for L = 9. In Figure 4 we can see the distributions of the lengths and the PNE values obtained for the selected L and T. Since we have small burst lengths, l_min^(0) = 2 (the lowest possible value for l_min^(0) such that Z is not empty) is selected. Also, given the small PNE values (when compared to L), M = L is chosen to produce the maximum number of classes. Consequently, ξ = 1. A relative error tolerance ɛ = 0.1 is chosen to train the HGM.

B. Obtained HGM

The obtained HGM has 5 sub-models. These sub-models correspond to the sets A, Z, W_1, W_2 and W_3, and the transition probabilities are shown in Table I.

4) Bit error interarrival period distribution BEIP(m): the probability of having exactly m consecutive error-free bits between one error and the next (shown in Figure 9).

Fig. 7. Error-free interval distribution Pr(0^m | 1).

TABLE I. HGM TRANSITION PROBABILITIES: P_{A,Z}, P_{A,1}, P_{A,2}, P_{A,3}.

In Figure 5 we can see the distributions l(x) and q(x) that characterize the sub-model corresponding to the set Z, computed according to (26) and (27), respectively. In Table II we can see the characteristics of the obtained sub-models corresponding to the classes W_1, W_2, and W_3.

TABLE II. CHARACTERISTICS OF THE SUB-MODELS CORRESPONDING TO THE SETS W_m: m, N_m, l_min^(m), P̂NE_m, D̂_m, Ê_m.

C. Comparison Between Relevant Statistics

For the comparison between the obtained HGM and the real measurements we select statistics that are considered relevant in [3] and [1].
The selected statistics are as follows:

1) Error-free interval distribution Pr(0^m | 1): the probability of having at least m consecutive error-free bits after an error (shown in Figure 7).

2) Error cluster distribution Pr(1^m | 0): the probability of having exactly m consecutive errors after an error-free bit (shown in Figure 8).

3) P(v, k) distribution: the probability that a block of v bits contains exactly k errors. In Figure 6, P(v, k) is displayed for block lengths of 50, 100, 200, and 400.

Fig. 8. Normalized cluster distribution Pr(1^m | 0).

Fig. 9. Bit error interarrival period distribution BEIP(m).

Fig. 4. (a) Burst peak number of errors (PNE) distribution. (b) Burst length distribution.

Fig. 5. (a) l(x) distribution. (b) q(x) distribution.

REFERENCES

[1] J. Bilbao, A. Calvo, and I. Armendariz, "Fast characterization method and error sequence analysis for narrowband indoor powerline channel," year??
[2] X. Gu, "Time frequency analysis of noise generated by electrical loads in PLC."
[3] J. Garcia-Frias and P. M. Crespo, "Hidden Markov models for burst error characterization in indoor radio channels," IEEE Trans. Veh. Technol., vol. 46, no. 4.
[4] M. Katayama, T. Yamazato, and H. Okada, "A mathematical model of noise in narrowband powerline communication systems."
[5] M. Nassar and A. Dabak, "Cyclostationary noise modelling in narrowband powerline communication for smart grid applications," ICASSP.
[6] M. Zimmermann and K. Dostert, "Analysis and modeling of impulsive noise in broad-band powerline communications," vol. 44.

Fig. 6. P(v, k) distributions: (a) probability that a block of 50 bits contains exactly k errors; (b) the same for blocks of 100 bits; (c) for blocks of 200 bits; (d) for blocks of 400 bits.


More information

THE frequency band designated for the automation and

THE frequency band designated for the automation and IEEE TRASACTIOS O COMMUICATIOS, VOL 63, O 4, APRIL 205 9 On the Capacity of arrowband PLC Channels ir Shlezinger, Student Member, IEEE, and Ron Dabora, Senior Member, IEEE Abstract Power line communications

More information

Hidden Markov Model and Speech Recognition

Hidden Markov Model and Speech Recognition 1 Dec,2006 Outline Introduction 1 Introduction 2 3 4 5 Introduction What is Speech Recognition? Understanding what is being said Mapping speech data to textual information Speech Recognition is indeed

More information

Lecture 16: Introduction to Neural Networks

Lecture 16: Introduction to Neural Networks Lecture 16: Introduction to Neural Networs Instructor: Aditya Bhasara Scribe: Philippe David CS 5966/6966: Theory of Machine Learning March 20 th, 2017 Abstract In this lecture, we consider Bacpropagation,

More information

1 EM algorithm: updating the mixing proportions {π k } ik are the posterior probabilities at the qth iteration of EM.

1 EM algorithm: updating the mixing proportions {π k } ik are the posterior probabilities at the qth iteration of EM. Université du Sud Toulon - Var Master Informatique Probabilistic Learning and Data Analysis TD: Model-based clustering by Faicel CHAMROUKHI Solution The aim of this practical wor is to show how the Classification

More information

Bidirectional Representation and Backpropagation Learning

Bidirectional Representation and Backpropagation Learning Int'l Conf on Advances in Big Data Analytics ABDA'6 3 Bidirectional Representation and Bacpropagation Learning Olaoluwa Adigun and Bart Koso Department of Electrical Engineering Signal and Image Processing

More information

An Introduction to (Network) Coding Theory

An Introduction to (Network) Coding Theory An to (Network) Anna-Lena Horlemann-Trautmann University of St. Gallen, Switzerland April 24th, 2018 Outline 1 Reed-Solomon Codes 2 Network Gabidulin Codes 3 Summary and Outlook A little bit of history

More information

Revision of Lecture 4

Revision of Lecture 4 Revision of Lecture 4 We have discussed all basic components of MODEM Pulse shaping Tx/Rx filter pair Modulator/demodulator Bits map symbols Discussions assume ideal channel, and for dispersive channel

More information

Bit Error Rate Estimation for a Joint Detection Receiver in the Downlink of UMTS/TDD

Bit Error Rate Estimation for a Joint Detection Receiver in the Downlink of UMTS/TDD in Proc. IST obile & Wireless Comm. Summit 003, Aveiro (Portugal), June. 003, pp. 56 60. Bit Error Rate Estimation for a Joint Detection Receiver in the Downlink of UTS/TDD K. Kopsa, G. atz, H. Artés,

More information

Encoder. Encoder 2. ,...,u N-1. 0,v (0) ,u 1. ] v (0) =[v (0) 0,v (1) v (1) =[v (1) 0,v (2) v (2) =[v (2) (a) u v (0) v (1) v (2) (b) N-1] 1,...

Encoder. Encoder 2. ,...,u N-1. 0,v (0) ,u 1. ] v (0) =[v (0) 0,v (1) v (1) =[v (1) 0,v (2) v (2) =[v (2) (a) u v (0) v (1) v (2) (b) N-1] 1,... Chapter 16 Turbo Coding As noted in Chapter 1, Shannon's noisy channel coding theorem implies that arbitrarily low decoding error probabilities can be achieved at any transmission rate R less than the

More information

Using Hidden Markov Models as a Statistical Process Control Technique: An Example from a ML 5 Organization

Using Hidden Markov Models as a Statistical Process Control Technique: An Example from a ML 5 Organization Using Hidden Markov Models as a Statistical Process Control Technique: An Example from a ML 5 Organization Bob Moore, Senior Principal Process Engineer, Business, Inc. (BTI) Ray Luke, Engineer, Raytheon

More information

MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing

MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing Afonso S. Bandeira April 9, 2015 1 The Johnson-Lindenstrauss Lemma Suppose one has n points, X = {x 1,..., x n }, in R d with d very

More information

POWER ALLOCATION AND OPTIMAL TX/RX STRUCTURES FOR MIMO SYSTEMS

POWER ALLOCATION AND OPTIMAL TX/RX STRUCTURES FOR MIMO SYSTEMS POWER ALLOCATION AND OPTIMAL TX/RX STRUCTURES FOR MIMO SYSTEMS R. Cendrillon, O. Rousseaux and M. Moonen SCD/ESAT, Katholiee Universiteit Leuven, Belgium {raphael.cendrillon, olivier.rousseaux, marc.moonen}@esat.uleuven.ac.be

More information

Digital Phase-Locked Loop and its Realization

Digital Phase-Locked Loop and its Realization Proceedings of the 9th WSEAS International Conference on APPLIE INFORMATICS AN COMMUNICATIONS (AIC '9) igital Phase-Loced Loop and its Realiation Tsai-Sheng Kao 1, Sheng-Chih Chen 2, Yuan-Chang Chang 1,

More information

Shankar Shivappa University of California, San Diego April 26, CSE 254 Seminar in learning algorithms

Shankar Shivappa University of California, San Diego April 26, CSE 254 Seminar in learning algorithms Recognition of Visual Speech Elements Using Adaptively Boosted Hidden Markov Models. Say Wei Foo, Yong Lian, Liang Dong. IEEE Transactions on Circuits and Systems for Video Technology, May 2004. Shankar

More information

Least-Squares Performance of Analog Product Codes

Least-Squares Performance of Analog Product Codes Copyright 004 IEEE Published in the Proceedings of the Asilomar Conference on Signals, Systems and Computers, 7-0 ovember 004, Pacific Grove, California, USA Least-Squares Performance of Analog Product

More information

Model-based Correlation Measure for Gain and Offset Nonuniformity in Infrared Focal-Plane-Array Sensors

Model-based Correlation Measure for Gain and Offset Nonuniformity in Infrared Focal-Plane-Array Sensors Model-based Correlation Measure for Gain and Offset Nonuniformity in Infrared Focal-Plane-Array Sensors César San Martin Sergio Torres Abstract In this paper, a model-based correlation measure between

More information

ECE472/572 - Lecture 11. Roadmap. Roadmap. Image Compression Fundamentals and Lossless Compression Techniques 11/03/11.

ECE472/572 - Lecture 11. Roadmap. Roadmap. Image Compression Fundamentals and Lossless Compression Techniques 11/03/11. ECE47/57 - Lecture Image Compression Fundamentals and Lossless Compression Techniques /03/ Roadmap Preprocessing low level Image Enhancement Image Restoration Image Segmentation Image Acquisition Image

More information

Statistical NLP: Hidden Markov Models. Updated 12/15

Statistical NLP: Hidden Markov Models. Updated 12/15 Statistical NLP: Hidden Markov Models Updated 12/15 Markov Models Markov models are statistical tools that are useful for NLP because they can be used for part-of-speech-tagging applications Their first

More information

CS 7180: Behavioral Modeling and Decision- making in AI

CS 7180: Behavioral Modeling and Decision- making in AI CS 7180: Behavioral Modeling and Decision- making in AI Learning Probabilistic Graphical Models Prof. Amy Sliva October 31, 2012 Hidden Markov model Stochastic system represented by three matrices N =

More information

SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land

SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land SIPCom8-1: Information Theory and Coding Linear Binary Codes Ingmar Land Ingmar Land, SIPCom8-1: Information Theory and Coding (2005 Spring) p.1 Overview Basic Concepts of Channel Coding Block Codes I:

More information

Optimal matching in wireless sensor networks

Optimal matching in wireless sensor networks Optimal matching in wireless sensor networks A. Roumy, D. Gesbert INRIA-IRISA, Rennes, France. Institute Eurecom, Sophia Antipolis, France. Abstract We investigate the design of a wireless sensor network

More information

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR

The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR The Capacity Region of the Gaussian Cognitive Radio Channels at High SNR 1 Stefano Rini, Daniela Tuninetti and Natasha Devroye srini2, danielat, devroye @ece.uic.edu University of Illinois at Chicago Abstract

More information

Time-Varying Systems and Computations Lecture 3

Time-Varying Systems and Computations Lecture 3 Time-Varying Systems and Computations Lecture 3 Klaus Diepold November 202 Linear Time-Varying Systems State-Space System Model We aim to derive the matrix containing the time-varying impulse responses

More information

Novel Quantization Strategies for Linear Prediction with Guarantees

Novel Quantization Strategies for Linear Prediction with Guarantees Novel Novel for Linear Simon Du 1/10 Yichong Xu Yuan Li Aarti Zhang Singh Pulkit Background Motivation: Brain Computer Interface (BCI). Predict whether an individual is trying to move his hand towards

More information

An Uplink-Downlink Duality for Cloud Radio Access Network

An Uplink-Downlink Duality for Cloud Radio Access Network An Uplin-Downlin Duality for Cloud Radio Access Networ Liang Liu, Prati Patil, and Wei Yu Department of Electrical and Computer Engineering University of Toronto, Toronto, ON, 5S 3G4, Canada Emails: lianguotliu@utorontoca,

More information

MATH3302 Coding Theory Problem Set The following ISBN was received with a smudge. What is the missing digit? x9139 9

MATH3302 Coding Theory Problem Set The following ISBN was received with a smudge. What is the missing digit? x9139 9 Problem Set 1 These questions are based on the material in Section 1: Introduction to coding theory. You do not need to submit your answers to any of these questions. 1. The following ISBN was received

More information

On Optimal Coding of Hidden Markov Sources

On Optimal Coding of Hidden Markov Sources 2014 Data Compression Conference On Optimal Coding of Hidden Markov Sources Mehdi Salehifar, Emrah Akyol, Kumar Viswanatha, and Kenneth Rose Department of Electrical and Computer Engineering University

More information

Multiple Sequence Alignment using Profile HMM

Multiple Sequence Alignment using Profile HMM Multiple Sequence Alignment using Profile HMM. based on Chapter 5 and Section 6.5 from Biological Sequence Analysis by R. Durbin et al., 1998 Acknowledgements: M.Sc. students Beatrice Miron, Oana Răţoi,

More information

These outputs can be written in a more convenient form: with y(i) = Hc m (i) n(i) y(i) = (y(i); ; y K (i)) T ; c m (i) = (c m (i); ; c m K(i)) T and n

These outputs can be written in a more convenient form: with y(i) = Hc m (i) n(i) y(i) = (y(i); ; y K (i)) T ; c m (i) = (c m (i); ; c m K(i)) T and n Binary Codes for synchronous DS-CDMA Stefan Bruck, Ulrich Sorger Institute for Network- and Signal Theory Darmstadt University of Technology Merckstr. 25, 6428 Darmstadt, Germany Tel.: 49 65 629, Fax:

More information

Efficient FPGA Implementations and Cryptanalysis of Automata-based Dynamic Convolutional Cryptosystems

Efficient FPGA Implementations and Cryptanalysis of Automata-based Dynamic Convolutional Cryptosystems Efficient FPGA Implementations and Cryptanalysis of Automata-based Dynamic Convolutional Cryptosystems Dragoş Trincă Department of Computer Science and Engineering University of Connecticut Storrs CT 06269

More information

ECE 521. Lecture 11 (not on midterm material) 13 February K-means clustering, Dimensionality reduction

ECE 521. Lecture 11 (not on midterm material) 13 February K-means clustering, Dimensionality reduction ECE 521 Lecture 11 (not on midterm material) 13 February 2017 K-means clustering, Dimensionality reduction With thanks to Ruslan Salakhutdinov for an earlier version of the slides Overview K-means clustering

More information

A Computationally Efficient Block Transmission Scheme Based on Approximated Cholesky Factors

A Computationally Efficient Block Transmission Scheme Based on Approximated Cholesky Factors A Computationally Efficient Block Transmission Scheme Based on Approximated Cholesky Factors C. Vincent Sinn Telecommunications Laboratory University of Sydney, Australia cvsinn@ee.usyd.edu.au Daniel Bielefeld

More information

This research was partially supported by the Faculty Research and Development Fund of the University of North Carolina at Wilmington

This research was partially supported by the Faculty Research and Development Fund of the University of North Carolina at Wilmington LARGE SCALE GEOMETRIC PROGRAMMING: AN APPLICATION IN CODING THEORY Yaw O. Chang and John K. Karlof Mathematical Sciences Department The University of North Carolina at Wilmington This research was partially

More information

Achieving Shannon Capacity Region as Secrecy Rate Region in a Multiple Access Wiretap Channel

Achieving Shannon Capacity Region as Secrecy Rate Region in a Multiple Access Wiretap Channel Achieving Shannon Capacity Region as Secrecy Rate Region in a Multiple Access Wiretap Channel Shahid Mehraj Shah and Vinod Sharma Department of Electrical Communication Engineering, Indian Institute of

More information

L23: hidden Markov models

L23: hidden Markov models L23: hidden Markov models Discrete Markov processes Hidden Markov models Forward and Backward procedures The Viterbi algorithm This lecture is based on [Rabiner and Juang, 1993] Introduction to Speech

More information

Hidden Markov Models

Hidden Markov Models Hidden Markov Models Slides revised and adapted to Bioinformática 55 Engª Biomédica/IST 2005 Ana Teresa Freitas Forward Algorithm For Markov chains we calculate the probability of a sequence, P(x) How

More information

arxiv: v1 [cs.sy] 9 Apr 2017

arxiv: v1 [cs.sy] 9 Apr 2017 Quantized Innovations Bayesian Filtering Chun-Chia HuangRobert R.Bitmead a Department of Mechanical & Aerospace Engineering University of California San Diego 9 Gilman Drive La Jolla CA 99- USA. arxiv:7.6v

More information

POSSIBILITIES OF MMPP PROCESSES FOR BURSTY TRAFFIC ANALYSIS

POSSIBILITIES OF MMPP PROCESSES FOR BURSTY TRAFFIC ANALYSIS The th International Conference RELIABILITY and STATISTICS in TRANSPORTATION and COMMUNICATION - 2 Proceedings of the th International Conference Reliability and Statistics in Transportation and Communication

More information

Cryptography and Security Final Exam

Cryptography and Security Final Exam Cryptography and Security Final Exam Solution Serge Vaudenay 29.1.2018 duration: 3h no documents allowed, except one 2-sided sheet of handwritten notes a pocket calculator is allowed communication devices

More information

Asymptotically optimal induced universal graphs

Asymptotically optimal induced universal graphs Asymptotically optimal induced universal graphs Noga Alon Abstract We prove that the minimum number of vertices of a graph that contains every graph on vertices as an induced subgraph is (1 + o(1))2 (

More information

THE ANALYTICAL DESCRIPTION OF REGULAR LDPC CODES CORRECTING ABILITY

THE ANALYTICAL DESCRIPTION OF REGULAR LDPC CODES CORRECTING ABILITY Transport and Telecommunication Vol. 5, no. 3, 04 Transport and Telecommunication, 04, volume 5, no. 3, 77 84 Transport and Telecommunication Institute, Lomonosova, Riga, LV-09, Latvia DOI 0.478/ttj-04-005

More information

The Effect of Memory Order on the Capacity of Finite-State Markov and Flat-Fading Channels

The Effect of Memory Order on the Capacity of Finite-State Markov and Flat-Fading Channels The Effect of Memory Order on the Capacity of Finite-State Markov and Flat-Fading Channels Parastoo Sadeghi National ICT Australia (NICTA) Sydney NSW 252 Australia Email: parastoo@student.unsw.edu.au Predrag

More information

Recall: Modeling Time Series. CSE 586, Spring 2015 Computer Vision II. Hidden Markov Model and Kalman Filter. Modeling Time Series

Recall: Modeling Time Series. CSE 586, Spring 2015 Computer Vision II. Hidden Markov Model and Kalman Filter. Modeling Time Series Recall: Modeling Time Series CSE 586, Spring 2015 Computer Vision II Hidden Markov Model and Kalman Filter State-Space Model: You have a Markov chain of latent (unobserved) states Each state generates

More information

Quantization for Distributed Estimation

Quantization for Distributed Estimation 0 IEEE International Conference on Internet of Things ithings 0), Green Computing and Communications GreenCom 0), and Cyber-Physical-Social Computing CPSCom 0) Quantization for Distributed Estimation uan-yu

More information

Algebraic Multiuser Space Time Block Codes for 2 2 MIMO

Algebraic Multiuser Space Time Block Codes for 2 2 MIMO Algebraic Multiuser Space Time Bloc Codes for 2 2 MIMO Yi Hong Institute of Advanced Telecom. University of Wales, Swansea, UK y.hong@swansea.ac.u Emanuele Viterbo DEIS - Università della Calabria via

More information

Hidden Markov Models. Dr. Naomi Harte

Hidden Markov Models. Dr. Naomi Harte Hidden Markov Models Dr. Naomi Harte The Talk Hidden Markov Models What are they? Why are they useful? The maths part Probability calculations Training optimising parameters Viterbi unseen sequences Real

More information

Massachusetts Institute of Technology

Massachusetts Institute of Technology Massachusetts Institute of Technology 6.867 Machine Learning, Fall 2006 Problem Set 5 Due Date: Thursday, Nov 30, 12:00 noon You may submit your solutions in class or in the box. 1. Wilhelm and Klaus are

More information

CODING schemes for overloaded synchronous Code Division

CODING schemes for overloaded synchronous Code Division IEEE TRANSACTIONS ON COMMUNICATIONS, VOL 60, NO, NOVEM 0 345 Transactions Letters Uniquely Decodable Codes with Fast Decoder for Overloaded Synchronous CDMA Systems Omid Mashayekhi, Student Member, IEEE,

More information

Chapter 7: Channel coding:convolutional codes

Chapter 7: Channel coding:convolutional codes Chapter 7: : Convolutional codes University of Limoges meghdadi@ensil.unilim.fr Reference : Digital communications by John Proakis; Wireless communication by Andreas Goldsmith Encoder representation Communication

More information

Introduction to Randomized Algorithms: Quick Sort and Quick Selection

Introduction to Randomized Algorithms: Quick Sort and Quick Selection Chapter 14 Introduction to Randomized Algorithms: Quick Sort and Quick Selection CS 473: Fundamental Algorithms, Spring 2011 March 10, 2011 14.1 Introduction to Randomized Algorithms 14.2 Introduction

More information

Hidden Markov Models,99,100! Markov, here I come!

Hidden Markov Models,99,100! Markov, here I come! Hidden Markov Models,99,100! Markov, here I come! 16.410/413 Principles of Autonomy and Decision-Making Pedro Santana (psantana@mit.edu) October 7 th, 2015. Based on material by Brian Williams and Emilio

More information

Lecture 4 : Introduction to Low-density Parity-check Codes

Lecture 4 : Introduction to Low-density Parity-check Codes Lecture 4 : Introduction to Low-density Parity-check Codes LDPC codes are a class of linear block codes with implementable decoders, which provide near-capacity performance. History: 1. LDPC codes were

More information

On (Weighted) k-order Fuzzy Connectives

On (Weighted) k-order Fuzzy Connectives Author manuscript, published in "IEEE Int. Conf. on Fuzzy Systems, Spain 2010" On Weighted -Order Fuzzy Connectives Hoel Le Capitaine and Carl Frélicot Mathematics, Image and Applications MIA Université

More information

Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG

Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG Cung Nguyen and Robert G. Redinbo Department of Electrical and Computer Engineering University of California, Davis, CA email: cunguyen,

More information

Forecasting Wind Ramps

Forecasting Wind Ramps Forecasting Wind Ramps Erin Summers and Anand Subramanian Jan 5, 20 Introduction The recent increase in the number of wind power producers has necessitated changes in the methods power system operators

More information

Layered Synthesis of Latent Gaussian Trees

Layered Synthesis of Latent Gaussian Trees Layered Synthesis of Latent Gaussian Trees Ali Moharrer, Shuangqing Wei, George T. Amariucai, and Jing Deng arxiv:1608.04484v2 [cs.it] 7 May 2017 Abstract A new synthesis scheme is proposed to generate

More information

Performance Analysis of a Threshold-Based Relay Selection Algorithm in Wireless Networks

Performance Analysis of a Threshold-Based Relay Selection Algorithm in Wireless Networks Communications and Networ, 2010, 2, 87-92 doi:10.4236/cn.2010.22014 Published Online May 2010 (http://www.scirp.org/journal/cn Performance Analysis of a Threshold-Based Relay Selection Algorithm in Wireless

More information