Coded Distributed Computing over Packet Erasure Channels

Dong-Jun Han, Student Member, IEEE, Jy-yong Sohn, Student Member, IEEE, Jaekyun Moon, Fellow, IEEE

arXiv:1901.03610v1 [cs.IT] 11 Jan 2019

Abstract

Coded computation is a framework which provides redundancy in distributed computing systems to speed up large-scale tasks. Although most existing works assume an error-free scenario in a master-worker setup, link failures are common in current wired/wireless networks. In this paper, we consider the straggler problem in coded distributed computing with link failures, by modeling the links between the master node and worker nodes as packet erasure channels. When the master fails to detect the received signal, retransmission is required for each worker, which increases the overall run-time to finish the task. We first investigate the expected overall run-time in this setting using an (n, k) maximum distance separable (MDS) code. We obtain lower and upper bounds on the latency in closed form and give guidelines to design the MDS code depending on the packet erasure probability. Finally, we consider a setup where the number of retransmissions is limited due to the bandwidth constraint. By formulating practical optimization problems related to latency, transmission bandwidth and probability of successful computation, we obtain achievable performance curves as a function of the packet erasure probability.

Index Terms: Distributed computing, packet erasure channels

I. INTRODUCTION

Solving massive-scale computational tasks for machine learning algorithms and data analytics is one of the most rewarding challenges in the era of big data. Modern large-scale computational tasks cannot be solved in a single machine (or worker) anymore. Enabling large-scale computations, distributed computing systems such as MapReduce [1], Apache Spark [2] and Amazon EC2 have received significant attention in recent years.
In distributed computing systems, large-scale computational tasks are divided into several subtasks, which are computed by different workers in parallel. This parallelization helps to reduce the overall run-time to finish the task and thus enables handling large-scale computing tasks. However, it is observed that workers that are significantly slower than the average, called stragglers, critically slow down the computing time in practical distributed computing systems. To alleviate the effect of stragglers, the authors of [3] proposed a new framework called coded computation, which uses error correction codes to provide redundancy in distributed matrix multiplications. By applying maximum distance separable (MDS) codes, it was shown that the total computation time of the coded scheme can achieve an order-wise improvement over the uncoded scheme. More recently, coded computation has been applied to various other distributed computing scenarios. In [4], the authors proposed the use of product codes for high-dimensional matrix multiplication. The authors of [5] also target high-dimensional matrix multiplication by proposing the use of polynomial codes. The use of short dot products for computing large linear transforms is proposed in [6], and gradient coding is proposed with applications to distributed gradient descent in [7]-[9]. Codes for convolution and Fourier transform are studied in [10] and [11], respectively. To compensate for the drawback of previous studies which completely ignore the work done by stragglers, the authors of [12], [13] introduce the idea of exploiting the work completed by the stragglers. Mitigating the effect of stragglers in more practical settings has also been studied in recent years.

(The authors are with the School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, South Korea; e-mail: djhan93@kaist.ac.kr; jysohn@kaist.ac.kr; jmoon@kaist.edu.)
The authors of [14] propose an algorithm for speeding up distributed matrix multiplication over heterogeneous clusters, where a variety of computing machines coexist with different capabilities. In [15], a hierarchical coding scheme is proposed to combat stragglers in practical distributed computing systems with multiple racks. The authors of [16] consider a setting where the workers are connected wirelessly to the master. Codes are also introduced in various studies targeting data shuffling applications [17]-[22] in MapReduce-style setups. In [20], a unified coding framework is proposed for distributed computing, and the trade-off between latency of computation and load of communication is studied. A scalable framework is studied in [21] for minimizing the communication bandwidth in the wireless distributed computing scenario. In [22], fundamental limits of data shuffling are investigated for distributed learning.

A. Motivation

In most of the existing studies, it is assumed that the links between the master node and worker nodes are error-free. However, link failures and device failures are common in current wired/wireless networks. It is reported in [23] that transmitted packets often get lost in current data centers due to various factors such as congestion or load balancer issues. In wireless networks, transmitted packets can be lost due to channel fading or interference. Based on these observations, the authors of [24] considered link failures in the context of distributed storage systems. In this paper, we consider link failures for modern distributed computing systems. A transmission failure at a specific worker necessitates packet retransmission, which would in turn increase the overall run-time for the given task. Moreover, when the bandwidth of the system is limited, the master would not be able to collect

timely results from the workers due to link failures. A reliable solution to deal with these problems is required in practical distributed computing systems.

B. Contribution

In this paper, we model the links between the master node and worker nodes as packet erasure channels, inspired by the link failures in practical distributed computing systems. We consider straggler problems in matrix-vector multiplication, which is a key computation in many machine learning algorithms. We first separate the run-time at each worker into computation time and communication time. This approach is also taken in the previous work [16] on coded computation over wireless networks, under the error-free assumption. Compared to the setting in [16], in our work, packet loss and retransmissions are considered. Thus, the communication time at each worker becomes a random variable which depends on the packet erasure probability. We analyze the latency in this setting using an (n, k) MDS code. By considering the average computation/communication times and the erasure probability, we obtain the overall run-time of the task by analyzing a continuous-time Markov chain for maximum k (i.e., k = m, where each node computes a single inner product). We also find lower and upper bounds on the latency in closed form. Based on the closed-form expressions, we give insights on the bounds and show that the overall run-time becomes Θ(log(n)) times faster than the uncoded scheme by using an (n, k) MDS code. Then, for general k, we give guidelines to design the (n, k) MDS code depending on the erasure probability ɛ. Finally, we consider a setup where the number of retransmissions is limited due to the bandwidth constraint. By formulating practical optimization problems related to latency, transmission bandwidth and probability of successful computation, we obtain achievable performance curves as a function of the packet erasure probability.

C. Organization and Notations

The rest of this paper is organized as follows.
In Section II, we describe the master-worker setup with link failures and define the problem statement. Latency analysis for maximum k is performed in Section III, and the case of general k is discussed in Section IV. Then, in Section V, analysis is performed under the setting of a limited number of retransmissions. Finally, concluding remarks are drawn in Section VI.

The definitions and notations used in this paper are summarized as follows. For n random variables, the k-th order statistic is the k-th smallest of the n. We denote the set {1, 2, ..., n} by [n] for n ∈ N, where N is the set of positive integers. We denote the k-th order statistic of n independent random variables (X_1, X_2, ..., X_n) by X_(k). We write f(n) = Θ(g(n)) if there exist c_1, c_2, n_0 > 0 such that for all n > n_0, c_1 g(n) ≤ f(n) ≤ c_2 g(n).

II. PROBLEM STATEMENT

Consider a master-worker setup in a wired/wireless network with n worker nodes connected to a single master node. For the wireless scenario, frequency division multiple access (FDMA) can be used so that the signals sent from the workers are detected simultaneously at the master without interference. We study a matrix-vector multiplication problem, where the goal is to compute the output y = Ax for a given matrix A ∈ R^{m×d} and an input vector x ∈ R^d. The matrix A is divided into k submatrices A_i ∈ R^{(m/k)×d}, i ∈ [k]. Then, an (n, k) MDS code is applied to construct Ã_i ∈ R^{(m/k)×d}, i ∈ [n], which are distributed across the n worker nodes (n > k). For a given input vector x, each worker i computes Ã_i x and sends the result to the master. By receiving any k out of the n results, the master node can recover the desired output Ax.

We assume that the time required at each worker to compute a single inner product of vectors of size d follows an exponential distribution with rate µ_1. Since m/k inner products are to be computed at each worker, similar to the models in [4], [16], we assume that X_i, the computation time of worker i, follows an exponential distribution with rate kµ_1/m:

Pr[X_i ≤ t] = 1 − e^{−(kµ_1/m)t}.  (1)
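As a concrete illustration of the encoding and recovery steps described above, the following sketch uses a real-valued Vandermonde generator as the (n, k) MDS code. This is only one possible construction, chosen for simplicity; the function names and the generator choice are illustrative, not the paper's own construction.

```python
import numpy as np

def mds_encode(A, n, k):
    """Split A into k row blocks A_1..A_k and form n coded blocks ~A_i.
    A Vandermonde generator with distinct nodes has every k x k submatrix
    invertible, so any k coded results suffice (the MDS property)."""
    m, d = A.shape
    blocks = A.reshape(k, m // k, d)                       # A_1, ..., A_k
    G = np.vander(1.0 + np.arange(n), k, increasing=True)  # n x k generator
    return G, np.einsum("ij,jrd->ird", G, blocks)          # ~A_i = sum_j G[i,j] A_j

def mds_decode(G, worker_results, idx):
    """Recover y = Ax from the k results ~A_i x returned by workers in idx."""
    received = np.stack([worker_results[i] for i in idx])  # k x (m/k)
    parts = np.linalg.solve(G[idx], received)              # undo the code
    return parts.reshape(-1)                               # concatenate the A_j x

# Example: n = 5 workers, k = 3; master uses results from workers 0, 2, 4.
A = np.arange(12.0).reshape(6, 2)
x = np.array([1.0, 2.0])
G, coded = mds_encode(A, n=5, k=3)
results = coded @ x                # worker i computes ~A_i x
y = mds_decode(G, results, [0, 2, 4])
```

Any size-3 subset of the five results recovers Ax exactly; practical systems use finite-field Reed-Solomon codes instead, since real Vandermonde matrices become ill-conditioned for large n.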
After completing the given task, each worker transmits its results to the master through memoryless packet erasure channels, where packets are erased independently with probability ɛ. For simplicity, we assume that each packet contains a single inner product. When a packet fails to be detected at the master, the corresponding packet is retransmitted. Otherwise, the worker transmits the next packet to the master. The task is finished when the master successfully detects the results of k out of the n workers. The communication time for the j-th transmission of the r-th inner product at each worker, Y_{j,r}, is also modeled as an exponential random variable (as in the communication time model of [15]) with rate µ_2. Then, the average time for one packet transmission becomes 1/µ_2. Since each worker transmits m/k inner products, the total communication time of worker i, denoted by S_i, is written as

S_i = Σ_{r=1}^{m/k} Σ_{j=1}^{N_r} Y_{j,r}  (2)

where Pr[Y_{j,r} ≤ t] = 1 − e^{−µ_2 t} and N_r is a geometric random variable with success probability 1 − ɛ, i.e., Pr[N_r = l] = (1 − ɛ)ɛ^{l−1}. We define the overall run-time of the task as the sum of the computation time and the communication time to finish the task. The overall run-time using the (n, k) MDS code is written as

T = k-th min_{i ∈ [n]} (X_i + S_i).  (3)

We are interested in the expected overall run-time E[T].

III. LATENCY ANALYSIS FOR k = m

We start by reviewing some useful results on order statistics. The expected value of the k-th order statistic of n exponential random variables with rate µ is (H_n − H_{n−k})/µ, where H_k = Σ_{i=1}^{k} 1/i. For k = n, the k-th order statistic has expected value H_n/µ. The value H_n can be approximated as H_n ≈ log(n) + η for large n, where η is a fixed constant.
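The run-time model in (1)-(3) is straightforward to simulate, which is useful for sanity-checking the analysis that follows. A minimal Monte Carlo sketch (the function name is illustrative):

```python
import random

def simulate_runtime(n, k, m, mu1, mu2, eps, trials=2000):
    """Monte Carlo estimate of E[T] in (3): the k-th smallest X_i + S_i."""
    total = 0.0
    for _ in range(trials):
        finish_times = []
        for _ in range(n):
            x = random.expovariate(k * mu1 / m)    # computation time, rate k*mu1/m
            s = 0.0
            for _ in range(m // k):                # m/k inner products per worker
                s += random.expovariate(mu2)       # first transmission
                while random.random() < eps:       # erased: retransmit
                    s += random.expovariate(mu2)
            finish_times.append(x + s)
        finish_times.sort()
        total += finish_times[k - 1]               # master needs k results
    return total / trials
```

Sweeping eps shows how retransmissions inflate the communication part of the run-time by roughly a factor of 1/(1 − ɛ).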

In this section, we assume that each worker is capable of computing a single inner product, i.e., k = m. This is a realistic assumption especially when a massive number of Internet of Things (IoT) devices having small storage sizes coexist in the network for wireless distributed computing [25]. The case of general k will be discussed in Section IV.

A. Exact Overall Run-Time: Markov Chain Analysis

We derive the overall run-time in terms of µ_1, µ_2, ɛ, n. We first state the following result on the communication time.

Lemma 1. Assuming k = m, the random variable S_i follows an exponential distribution with rate (1 − ɛ)µ_2, i.e., Pr[S_i ≤ t] = 1 − e^{−(1−ɛ)µ_2 t}.

Proof: See Appendix A.

Remark 1: Lemma 1 implies that, due to the erasure probability ɛ and retransmissions, the rate of the communication time at each worker decreases by a factor of 1 − ɛ compared to the error-free scenario. That is, the expected communication time at each worker increases by a factor of 1/(1 − ɛ).

Based on Lemma 1, we obtain the following theorem, which shows that E[T] can be obtained by analyzing the hitting time of a continuous-time Markov chain.

Theorem 1. Consider a continuous-time Markov chain C defined over the 2-dimensional state space (u, v) ∈ {0, 1, ..., n} × {0, 1, ..., k}, where the transition rates are defined as follows. The transition rate from state (u, v) to state (u + 1, v) is (n − u)µ_1 if v ≤ u < n, and 0 otherwise. The transition rate from state (u, v) to state (u, v + 1) is (u − v)(1 − ɛ)µ_2 if v < min(u, k), and 0 otherwise. Then, the expected hitting time of C from state (0, 0) to the set of states (u, v) with v = k is the expected overall run-time E[T].

Proof: For a given time t, define the integers u, v as

u = |{j ∈ [n] : X_j ≤ t}|  (4)
v = |{i ∈ [n] : X_i + S_i ≤ t}|  (5)

Here, u can be viewed as the number of workers that finished computation by time t, and v is the number of workers that successfully transmitted their computation results to the master by time t. From these definitions, v can be rewritten as

v = |{i ∈ [u] : X_i + S_i ≤ t}|.
(6)

From E[T] = E[k-th min_{i ∈ [n]}(X_i + S_i)], it can be seen that the expected overall run-time is equal to the expected arrival time from (0, 0) to any (u, v) with v = k. Thus, we define the state of the Markov chain as (u, v) over the space {0, 1, ..., n} × {0, 1, ..., k}. By defining u, v as above, the transition rates can be described as follows. By (4), for a given t, there are n − u integers j ∈ [n] satisfying X_j > t at state (u, v). Since X_j follows an exponential distribution with rate µ_1, a state transition from (u, v) to (u + 1, v) occurs with rate (n − u)µ_1. The transition from (u, v) to (u + 1, v) is possible for v ≤ u, by the definitions (4), (5).

Fig. 1: State transition diagram example with n = 5, k = 2.

Now we consider the transition rate from (u, v) to (u, v + 1) for a fixed t. Among the u integers i satisfying X_i ≤ t, there are v of them which satisfy X_i + S_i ≤ t at state (u, v). Recall that S_i follows an exponential distribution with rate (1 − ɛ)µ_2 from Lemma 1. Thus, a state transition from (u, v) to (u, v + 1) occurs with rate (u − v)(1 − ɛ)µ_2. Note that v ∈ {0, 1, ..., min(u, k)}. The described Markov chain is identical to C, which completes the proof.

For an arbitrary state (u, v), the first component u can be viewed as the number of workers that finished the computation, and the second component v is the number of workers that successfully transmitted their results to the master. Fig. 1 shows an example of the state transition diagram assuming n = 5, k = 2. From the initial state (0, 0), the given task is finished when the Markov chain visits one of the states with v = 2 for the first time. Since the computational results of a worker are transmitted after the worker finishes the whole computation, u ≥ v always holds. The expected hitting time of the Markov chain can be computed by performing a first-step analysis [26].

B. Lower and Upper Bounds

While the exact E[T] is not derived in closed form, we provide closed-form expressions for lower and upper bounds on E[T] in the following theorem.

Theorem 2.
Assuming k = m, the expected overall run-time E[T] is lower bounded as E[T] ≥ L, where

L = max_{i ∈ [k]} ( (H_n − H_{n−k+i−1})/µ_1 + (H_n − H_{n−i})/((1 − ɛ)µ_2) ),  (7)

which reduces to

L = 1/(nµ_1) + (H_n − H_{n−k})/((1 − ɛ)µ_2), if ɛ ≥ 1 − µ_1/µ_2,
L = (H_n − H_{n−k})/µ_1 + 1/(n(1 − ɛ)µ_2), otherwise.  (8)

Moreover, the expected overall run-time E[T] is upper bounded as E[T] ≤ U, where

U = min_{i ∈ {0} ∪ [n−k]} ( (H_n − H_i)/µ_1 + (H_n − H_{n−k−i})/((1 − ɛ)µ_2) ).  (9)

Proof: See Appendix B.

From (7) and (9), it can be seen that the lower and upper bounds on E[T] are decreasing functions of 1 − ɛ. Note that ɛ only affects the communication-time terms (not the computation-time terms), scaling them by a factor of 1/(1 − ɛ), as in (7) and (9). The lower bound

Fig. 2: Bounds on E[T] versus the number of workers n for k = m = 5, ɛ = 0.01; each panel compares the upper bound, the Markov analysis, and the lower bound for a different (µ_1, µ_2) pair.

in (7) can be rewritten as (8) with two regimes classified by the equation ɛ = 1 − µ_1/µ_2. This equation is equivalent to 1/µ_1 = 1/((1 − ɛ)µ_2), which means that the average computation time is equal to the average communication time with retransmission. For ɛ ≥ 1 − µ_1/µ_2, the first term of the lower bound in (8) is the expected value of the minimum computation time among n workers, while the second term is the expected value of the k-th smallest communication time among n workers. This is the case where all the workers finish the computation by time 1/(nµ_1) and then start communication simultaneously, which can be viewed as an obvious lower bound. We show in Appendix B that the derived lower bound (8) is the tightest among the candidate bounds obtained in (7).

For the upper bound, by defining H_0 = 0, we have U = H_n/µ_1 + (H_n − H_{n−k})/((1 − ɛ)µ_2) by inserting i = 0 in (9). The first term H_n/µ_1 is the expected value of the maximum computation time among n workers and the second term is the expected value of the k-th smallest communication time among n workers. Again, this is an obvious upper bound, as it can be interpreted as the case where all n workers start communication simultaneously right after the slowest worker finishes its computation. The derived upper bound includes this kind of scenario.

Fig. 3: Bounds on E[T] versus ɛ with k = m = 5 and fixed µ_1, µ_2.

Assume a practical scenario where the size of the task scales linearly with the number of workers, i.e., k = Θ(n). Note that k = m in this section. Then for large k, we have H_k ≈ log(k) + η for a fixed constant η, which leads to H_n − H_{n−k} ≈ log(n/(n − k)). Thus, the lower bound can be rewritten as

L = 1/(nµ_1) + log(n/(n − k))/((1 − ɛ)µ_2), if ɛ ≥ 1 − µ_1/µ_2,
L = log(n/(n − k))/µ_1 + 1/(n(1 − ɛ)µ_2), otherwise,  (10)

as k grows. It can be easily shown that L = Θ(1).
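The two curves compared in Fig. 2 can be reproduced numerically: the exact E[T] via first-step analysis of the chain in Theorem 1, and the closed-form bounds of Theorem 2. A sketch assuming k = m (function names illustrative; the bound expressions follow the forms of (7) and (9) as stated above):

```python
def H(j):
    """j-th harmonic number, with H_0 = 0."""
    return sum(1.0 / i for i in range(1, j + 1))

def hitting_time(n, k, mu1, mu2, eps):
    """E[T] for k = m via first-step analysis of the chain in Theorem 1."""
    c = (1 - eps) * mu2
    h = [[0.0] * (k + 1) for _ in range(n + 1)]   # h[u][v]; absorbing at v = k
    for u in range(n, -1, -1):
        for v in range(min(u, k), -1, -1):
            if v == k:
                continue                           # absorbed: remaining time 0
            r1 = (n - u) * mu1                     # rate of a computation finishing
            r2 = (u - v) * c                       # rate of a successful delivery
            up = r1 * h[u + 1][v] if u < n else 0.0
            h[u][v] = (1.0 + up + r2 * h[u][v + 1]) / (r1 + r2)
    return h[0][0]

def lower_upper(n, k, mu1, mu2, eps):
    """Closed-form bounds L and U of Theorem 2, per (7) and (9)."""
    c = (1 - eps) * mu2
    L = max((H(n) - H(n - k + i - 1)) / mu1 + (H(n) - H(n - i)) / c
            for i in range(1, k + 1))
    U = min((H(n) - H(i)) / mu1 + (H(n) - H(n - k - i)) / c
            for i in range(n - k + 1))
    return L, U
```

For example, hitting_time(20, 10, 1.0, 1.0, 0.1) lands between the two values returned by lower_upper(20, 10, 1.0, 1.0, 0.1), mirroring the middle curve of Fig. 2.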
More specifically, assuming a fixed code rate R = k/n, the expression in (10) settles down to L = log(1/(1 − R))/((1 − ɛ)µ_2) if ɛ ≥ 1 − µ_1/µ_2, and L = log(1/(1 − R))/µ_1 otherwise, as n goes to infinity. For the upper bound, we first define

f_i^U = (H_n − H_i)/µ_1 + (H_n − H_{n−k−i})/((1 − ɛ)µ_2).  (12)

From df_i^U/di = 0, f_i^U is minimized by setting i = i*, where

i* = (1 − ɛ)µ_2(n − k) / (µ_1 + (1 − ɛ)µ_2).  (13)

Since i* = Θ(k) and n − k − i* = Θ(k), we can approximate H_n − H_{i*} ≈ log(n/i*) and H_n − H_{n−k−i*} ≈ log(n/(n − k − i*)) for large k. Thus, the upper bound in (9) can be rewritten as

U = log(n/i*)/µ_1 + log(n/(n − k − i*))/((1 − ɛ)µ_2).  (14)

It can again be shown that U = Θ(1). Finally, we have

E[T] = Θ(1).  (15)

Note that inserting i = ⌈i*⌉ or i = ⌊i*⌋ in (12) yields the upper bound, since i should be an integer. However, the ceiling or floor can be ignored in the asymptotic regime.

Fig. 2 shows the lower and upper bounds along with the hitting time of the Markov chain, for a varying number of workers n. The number of inner products to be computed is assumed to be m = 5. The erasure probability is assumed to be ɛ = 0.01. It can be seen that the overall run-time to finish the task decreases as n increases. Based on the Markov analysis

and the derived bounds, one can find the minimum number of workers n needed to meet the latency constraint of the given system. Fig. 3 shows the bounds on E[T] for different erasure probabilities ɛ. We see from Figs. 2 and 3 that the lower bound is generally tighter than the upper bound.

C. Comparison with the Uncoded Scheme

In this subsection, we compare the coded and uncoded schemes. For the uncoded scheme, the overall workload (k = m inner products) is equally distributed over the n workers. Therefore, the workload at each worker becomes less than a single inner product. More specifically, the numbers of multiplications and additions computed at each worker are reduced by a factor of n/k compared to the MDS-coded scheme. The master node has to wait for the results from all n workers to finish the task. Since the workload at each worker is decreased by a factor of n/k compared to the coded scheme, the computation time at each worker follows an exponential distribution with rate (n/k)µ_1. For the communication time, we model the communication time for a single partial inner product at each worker as an exponential distribution with rate (n/k)µ_2, by assuming that the packet length to be transmitted at each worker also decreases by a factor of n/k. The erasure probability should also be changed depending on the packet length. Assume that a packet is lost if at least one bit is erroneous. Then, defining ɛ_b as the fixed bit-error rate (BER) of the system, the packet erasure probability ɛ' for the uncoded scheme can be written as

ɛ' = 1 − (1 − ɛ_b)^{kl/n}  (16)

where l is the packet length of the coded scheme. Note that the erasure probability for the coded scheme, ɛ, is

ɛ = 1 − (1 − ɛ_b)^l.  (17)

The overall run-time of the uncoded scheme, denoted by T_uncoded, can be written as

T_uncoded = max_{i ∈ [n]} (X_i + S_i)  (18)

where Pr[X_i ≤ t] = 1 − e^{−(n/k)µ_1 t} and Pr[S_i ≤ t] = 1 − e^{−(n/k)(1−ɛ')µ_2 t} by Lemma 1.
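The coded/uncoded gap described here can be checked by simulation. A sketch under the above modeling assumptions (k = m; the values of ɛ_b and l below are chosen only for illustration):

```python
import random

def runtime_coded(n, k, mu1, mu2, eps, trials=800):
    """k-th smallest X_i + S_i; k = m, so each coded worker sends one packet."""
    out = 0.0
    for _ in range(trials):
        fin = []
        for _ in range(n):
            x = random.expovariate(mu1)
            s = random.expovariate(mu2)
            while random.random() < eps:             # erased: retransmit
                s += random.expovariate(mu2)
            fin.append(x + s)
        out += sorted(fin)[k - 1]
    return out / trials

def runtime_uncoded(n, k, mu1, mu2, eps_b, l, trials=800):
    """Uncoded: workload split over all n workers (rates scaled by n/k) and the
    erasure probability adjusted to the shorter packet length as in (16)."""
    eps_u = 1 - (1 - eps_b) ** (k * l / n)
    rate1, rate2 = n * mu1 / k, n * mu2 / k
    out = 0.0
    for _ in range(trials):
        fin = []
        for _ in range(n):
            x = random.expovariate(rate1)
            s = random.expovariate(rate2)
            while random.random() < eps_u:
                s += random.expovariate(rate2)
            fin.append(x + s)
        out += max(fin)                              # master needs all n results
    return out / trials
```

With n = 20 and k = 10, the uncoded maximum over all n workers comes out several times larger than the coded k-th order statistic, and the gap grows like log n, consistent with (21).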
Similar to the proof of Theorem 2, we have L_uncoded ≤ E[T_uncoded] ≤ U_uncoded, where

L_uncoded = max_{i ∈ [n]} ( (k/n)(H_n − H_{i−1})/µ_1 + (k/n)(H_n − H_{n−i})/((1 − ɛ')µ_2) ),

which reduces to

L_uncoded = (k/n)(1/(nµ_1) + H_n/((1 − ɛ')µ_2)), if ɛ' ≥ 1 − µ_1/µ_2,
L_uncoded = (k/n)(H_n/µ_1 + 1/(n(1 − ɛ')µ_2)), otherwise,

U_uncoded = (k/n)(H_n/µ_1 + H_n/((1 − ɛ')µ_2)).  (19)

It can easily be seen that L_uncoded = Θ(log(n)) and U_uncoded = Θ(log(n)) for k = m = Θ(n), which leads to

E[T_uncoded] = Θ(log(n)).  (20)

Comparing (15) and (20) shows that the overall run-time becomes Θ(log(n)) times faster by introducing redundancy:

E[T_uncoded] / E[T] = Θ(log(n)).  (21)

This result is consistent with the previous works [3], [4], which compared coded and uncoded schemes in an error-free scenario.

IV. LATENCY FOR GENERAL k

In this section, we discuss the case of general k. We provide the following theorem, which gives lower and upper bounds on E[T] for general k.

Theorem 3. Let G_i be an Erlang random variable with shape parameter m/k and rate parameter 1, i.e., Pr[G_i ≤ t] = 1 − Σ_{p=0}^{m/k−1} (1/p!) e^{−t} t^p. Then, for general k, we have L ≤ E[T] ≤ U, where

L = max_{i ∈ [k]} ( (m/k)(H_n − H_{n−k+i−1})/µ_1 + E[G_(i)]/((1 − ɛ)µ_2) )  (22)

U = min_{i ∈ {0} ∪ [n−k]} ( (m/k)(H_n − H_i)/µ_1 + E[G_(k+i)]/((1 − ɛ)µ_2) ).  (23)

Proof: See Appendix C.

As illustrated in [27], the expected value of the i-th order statistic, E[G_(i)], can be written as (24), shown at the top of the next page. Here, Γ is the gamma function and a_q(x, y) is the coefficient of t^q in the expansion of (Σ_{j=0}^{x−1} t^j/j!)^y for any integers x, y. For the uncoded scheme, we have L_uncoded ≤ E[T_uncoded] ≤ U_uncoded, where

L_uncoded = max_{i ∈ [n]} ( m(H_n − H_{i−1})/(nµ_1) + E[G'_(i)]/((1 − ɛ')µ_2) )  (25)

U_uncoded = mH_n/(nµ_1) + E[G'_(n)]/((1 − ɛ')µ_2)  (26)

which are obtained by inserting k = n in (22) and (23), respectively. Here, G'_i is an Erlang random variable with shape parameter m/n and rate parameter 1, and E[G'_(i)] can be obtained by inserting k = n in (24).

While finding the exact expression of E[T] for general k remains open, we provide numerical results to gain insight into designing the (n, k) MDS code. We plot the upper bound U on E[T] versus the code rate R = k/n in Fig. 4.
It can be seen that the optimal code rate R* can be designed to minimize the upper bound U on E[T], depending on the erasure probability ɛ. Fig. 5 compares the achievable E[T] (obtained by Monte Carlo simulations) with the performance of the MDS code with rate R*. It can be seen that the performance of the MDS code with rate R* is very close to the achievable E[T], showing the effectiveness of the proposed design. For a positive integer z, Γ(z) = (z − 1)!.
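For general k, the bound terms E[G_(i)] in (22)-(23) are order statistics of Erlang variables. Instead of evaluating the closed form (24), they can be estimated by Monte Carlo; a sketch (function name illustrative):

```python
import random

def erlang_order_stat_mean(n, i, shape, trials=20000):
    """Estimate E[G_(i)]: the i-th smallest of n i.i.d. Erlang(shape, 1)
    samples, each drawn as the sum of `shape` unit-rate exponentials."""
    total = 0.0
    for _ in range(trials):
        g = sorted(sum(random.expovariate(1.0) for _ in range(shape))
                   for _ in range(n))
        total += g[i - 1]
    return total / trials
```

For shape m/k = 1 this reduces to exponential order statistics, so the estimate can be cross-checked against the closed form H_n − H_{n−i} from Section III.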

E[G_(i)] = (n! / ((i − 1)!(n − i)! Γ(m/k))) Σ_{p=0}^{i−1} C(i − 1, p)(−1)^p Σ_{q=0}^{(m/k−1)(n−i+p)} a_q(m/k, n − i + p) Γ(m/k + q + 1)/(n − i + p + 1)^{m/k+q+1}  (24)

Fig. 4: Upper bound U on E[T] versus code rate R = k/n for several values of the erasure probability ɛ.

Fig. 5: Latency performance of the design targeting upper-bound minimization: the achievable E[T] using MDS codes versus the MDS code with code rate R*.

V. ANALYSIS UNDER THE SETTING OF A LIMITED NUMBER OF RETRANSMISSIONS

A. Probability of Successful Computation

So far, we have focused on latency analysis over packet erasure channels under the assumption that there is no limit on the number of transmissions at each worker; the number of transmissions could be arbitrarily large in the analysis above. In practice, however, the number of transmissions cannot be unbounded due to the constraint on the transmission bandwidth. With a limited number of transmissions, the overall computation can fail, i.e., the master node may not receive enough results (fewer than k) from the workers due to link failures. Thus, we consider the success probability of a given task in this section.

We first analyze the probability of success at a specific worker. Let γ be the maximum number of packets that is allowed to be transmitted by a specific worker. That is, the number of transmissions at each worker is limited to γ. When the number of workers is given by n, the total transmission bandwidth of the network is then limited to nγl, where l is the packet length. For a successful computation at a specific worker node, the master node should successfully receive m/k out of the at most γ packets sent by that worker.

Fig. 6: P_s versus ɛ assuming n = 4, m = 2, for several (k, γ) pairs. A scheme which uses less transmission bandwidth can have a higher probability of success; moreover, a scheme with a higher probability of success in one region may have a lower probability of success in another region.
Thus, the probability of success at a specific worker, denoted by p, is obtained as

p = (1 − ɛ)^{m/k} [0 failures] + C(m/k, 1)(1 − ɛ)^{m/k} ɛ [1 failure] + ... + C(γ − 1, m/k − 1)(1 − ɛ)^{m/k} ɛ^{γ−m/k} [γ − m/k failures]
  = Σ_{i=0}^{γ−m/k} C(m/k − 1 + i, i) (1 − ɛ)^{m/k} ɛ^i.  (27)

To complete the overall task, k out of the n workers should successfully deliver their results to the master. Based on p, the probability of success for the overall computation, denoted by P_s, can be written as

P_s = Σ_{i=k}^{n} C(n, i) p^i (1 − p)^{n−i}.  (28)

It can be seen from (27) that increasing k also increases the probability of success p at each worker. This is because increasing k decreases the number of packets to be transmitted by each worker, which increases p for a given number of transmissions γ. However, a larger k increases the number of results that must be detected at the master node for decoding the overall computational result. That is, for a given p, increasing k decreases P_s in (28). Now the question is, how do we design the optimal k? And how do we choose the appropriate value of γ? We address these issues in Section V-C.
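The probabilities (27) and (28) are finite sums and can be evaluated directly; a sketch (function names illustrative):

```python
from math import comb

def worker_success(packets, gamma, eps):
    """p in (27): deliver `packets` (= m/k) packets within gamma transmissions,
    i.e., a negative-binomial tail over the number of erasures i."""
    return sum(comb(packets - 1 + i, i) * (1 - eps) ** packets * eps ** i
               for i in range(gamma - packets + 1))

def task_success(n, k, packets, gamma, eps):
    """P_s in (28): at least k of the n workers succeed."""
    p = worker_success(packets, gamma, eps)
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))
```

Sweeping k and γ with these two functions reproduces the trade-off just described: a larger k means fewer packets per worker (higher p) but more required results at the master (lower P_s for a fixed p).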

For a given number of workers n = 4 and workload m = 2, Fig. 6 shows P_s versus the erasure probability ɛ for different k and γ values. It can be seen that even though the case with k = 2 uses less bandwidth than the case with k = 1, it has a strictly better probability of success for all ɛ values. This is because the case with k = 1 has more packets to be transmitted than the case with k = 2. It can also be seen that for ɛ > 0.2, the case with k = 4 has a higher P_s than the case with k = 1, which uses twice as much transmission bandwidth as the case with k = 4. This example motivates us to consider the problem of choosing the best k and γ for different ɛ values under bandwidth and probability-of-success constraints.

B. Latency

The overall run-time to finish the task should be reconsidered with a limited number of transmissions. Let T' be the overall run-time for a given γ. Note that if the computation fails for a given γ, we say that the overall run-time is infinite. Thus, if P_s < 1, the expected overall run-time E[T'] is always infinite. Therefore, in this setup, instead of the expected value E[T'], it is more reasonable to consider the probability that the task is finished within a certain required time τ, Pr[T' ≤ τ].

We first rewrite T' from (3) for a given γ. Since γ only affects the communication time at each worker (not the computation time), T' can be written as

T' = k-th min_{i ∈ [n]} (X_i + S'_i).  (29)

As in (3), the computation time at worker i, denoted by X_i, follows an exponential distribution with rate kµ_1/m. The communication time at worker i, denoted by S'_i, is written as

S'_i = Σ_{j=1}^{C_i} Y_j  (30)

where Y_j follows an exponential distribution with rate µ_2 and C_i is the number of transmissions at worker i until it successfully transmits its m/k packets; the C_i are independent across workers. With probability 1 − p, the worker fails to transmit the m/k packets, so that C_i is undefined and S'_i becomes infinite. Thus, the random variable C_i is distributed as follows:
C_i = m/k, w.p. (1 − ɛ)^{m/k};
C_i = m/k + 1, w.p. C(m/k, 1)(1 − ɛ)^{m/k} ɛ;
...
C_i = γ, w.p. C(γ − 1, m/k − 1)(1 − ɛ)^{m/k} ɛ^{γ−m/k};
C_i undefined, w.p. 1 − p.  (31)

Fig. 7: P_s versus γ assuming n = 4, m = 2, ɛ = 0.3. The figure shows the minimum γ satisfying P_s ≥ 1 − δ for different k values, where δ is set to 0.01.

Based on C_i, the random variable S'_i becomes

S'_i ~ Erlang(m/k, µ_2), w.p. (1 − ɛ)^{m/k};
S'_i ~ Erlang(m/k + 1, µ_2), w.p. C(m/k, 1)(1 − ɛ)^{m/k} ɛ;
...
S'_i ~ Erlang(γ, µ_2), w.p. C(γ − 1, m/k − 1)(1 − ɛ)^{m/k} ɛ^{γ−m/k};
S'_i = ∞, w.p. 1 − p.  (32)

Remark 2: As γ increases, the probability Pr[T' ≤ τ] also increases for a given τ, since increasing γ increases the probability of success. The probability Pr[T' ≤ τ] converges to Pr[T ≤ τ] as γ increases, where T is the asymptotic overall run-time defined in (3).

C. Optimal Resource Allocation in Practical Scenarios

For given constraints on the maximum number of transmissions γ and the probability of successful computation P_s, we would like to minimize T'(α), which is defined as the minimum overall run-time that we can guarantee with probability α:

T'(α) = min{τ : Pr[T' ≤ τ] ≥ α}.  (33)

For a given number of workers n, the optimization problem is formulated as

min_{k,γ} T'(α)  (34)
subject to γ ≤ γ_t,  (35)
P_s ≥ 1 − δ,  (36)

where γ_t is the transmission bandwidth limit and δ controls how close P_s must be to 1. While it is not straightforward to find a closed-form solution of the above problem, the optimal k can be found by combining the results of Figs. 7 and 8. We first observe Fig. 7, which shows the probability of successful computation P_s as a function of γ. The parameters are set as n = 4, m = 2, ɛ = 0.3. Obviously, P_s increases

Fig. 8: Pr[T' ≤ τ] versus k assuming n = 4, m = 2, γ = 3, τ = 0.6, ɛ = 0.3, µ_1 = 1, µ_2 = 5; the optimal point is marked.

Fig. 9: Pr[T' ≤ τ_t] versus γ for different k values, assuming n = 4, m = 2, τ_t = 0.6, ɛ = 0.3, µ_1 = 1, µ_2 = 5.

as γ increases. In addition, since there are m/k packets to be transmitted by each worker, we have P_s = 0 for γ < m/k. The optimal bounds shown in Fig. 7 are the minimum γ values that satisfy P_s ≥ 1 − δ, where δ = 0.01. Given a success probability constraint P_s ≥ 1 − δ and a bandwidth constraint γ ≤ γ_t, one can find the candidate k values that satisfy the constraints. For example, if γ_t = 3, then k = 3 and k = 4 become possible candidates for the optimal solution while k = 1 does not. Note that in Fig. 7, increasing (or decreasing) k does not always yield a higher probability of success.

Fig. 8 shows Pr[T' ≤ τ] versus k assuming γ = 3, τ = 0.6, where the other parameters are the same as in Fig. 7. Among the candidates obtained from Fig. 7, one can find the optimal k that minimizes T'(α). By setting α = 0.3 and τ = 0.6, it can be seen from Fig. 8 that Pr[T' ≤ τ] ≥ α holds only for k = 2, which means that k = 2 is the optimal solution. Note that the k that minimizes γ to satisfy P_s ≥ 1 − δ (in Fig. 7) does not always minimize T'(α), which motivated us to formulate the problem (while it is not shown in Fig. 7, the case of k = 2 has lower performance than the case of k = 3).

For given constraints on the probability of success and latency,

Fig. 10: P_s versus k assuming n = 4, m = 2, ɛ = 0.3 and fixed γ.

one can also minimize γ:

min_{k,γ} γ  (37)
subject to P_s ≥ 1 − δ,  (38)
T'(α) ≤ τ_t,  (39)

where τ_t is the required latency constraint. For this optimization problem, it can be seen that the constraint T'(α) ≤ τ_t is equivalent to the constraint Pr[T' ≤ τ_t] ≥ α. From Fig. 7, candidate (k, γ) pairs satisfying P_s ≥ 1 − δ can be obtained. Among these candidates, one can find the best k that minimizes γ from Fig. 9, which shows Pr[T' ≤ τ_t] versus γ for different k values, where τ_t is set to 0.6. With a small number of retransmissions, k = 3 gives the best performance, while for larger γ it does not.
This is because the case with k = 3 has a smaller number of packets to be transmitted compared to the others, which gives a higher P_s for small γ. As γ grows, the probability Pr[T' ≤ τ_t] converges to Pr[T ≤ τ_t] for each k.

Finally, we formulate the following problem,

max_{k,γ} P_s  (40)
subject to γ ≤ γ_t,  (41)
T'(α) ≤ τ_t,  (42)

aiming to maximize the probability of success under transmission bandwidth and latency constraints. We first find the candidate k values that satisfy T'(α) ≤ τ_t from Fig. 9, and then choose the optimal k from Fig. 10, which shows P_s as a function of k.

D. Achievable Curves

Based on the solutions to the optimization problems, optimal curves as a function of ɛ are obtained in Fig. 11, where the parameters are set as n = 4, m = 2, µ_1 = 1, µ_2 = 5, γ_t = 7, τ_t = 1, α = 0.5, δ = 0.01. Fig. 11(a) shows the achievable T'(α) as a function of ɛ. We observe that the achievable T'(α) increases as ɛ grows. From Fig. 11(b), which shows the optimal γ-ɛ curve, we observe that the achievable γ also increases as ɛ grows, which indicates that a larger transmission bandwidth is required for higher erasure

Fig. 11: Optimal curves assuming n = 4, m = 2, µ_1 = 1, µ_2 = 5, γ_t = 7, τ_t = 1, α = 0.5, δ = 0.1: (a) optimal T(α)–ɛ curve, (b) optimal γ–ɛ curve, (c) optimal P_s–ɛ curve.

probability. Finally, in Fig. 11(c), we can see that there exists a sharp threshold ɛ_t that makes P_s = 0 for ɛ > ɛ_t, since the latency constraint can no longer be satisfied for ɛ greater than the threshold ɛ_t.

VI. CONCLUDING REMARKS

We studied the problem of coded computation under link failures, which are a fact of life in current wired data centers and wireless networks. For the maximum value of k (i.e., k = m), we showed that analyzing the asymptotic expected latency is equivalent to analyzing a continuous-time Markov chain. Closed-form expressions for lower and upper bounds on the expected latency are derived based on some key inequalities. For general k, we show that the coding scheme which minimizes the upper bound nearly achieves the optimal run-time, which demonstrates the effectiveness of the proposed design. Finally, under the setting of a limited number of retransmissions, we formulated practical optimization problems and obtained achievable curves as a function of the packet erasure probability ɛ. Considering link failures in hierarchical computing systems [15] or in secure distributed computing [28] is an interesting and important topic for future research.

APPENDIX A
PROOF OF LEMMA 1

Assuming k = m (i.e., r = 1), we have S_i = Σ_{j=1}^{N_1} Y_{j,1}. For a fixed integer q, Σ_{j=1}^{q} Y_{j,1}, which is the sum of q exponential random variables with rate µ_2, follows the Erlang distribution with shape parameter q and rate parameter µ_2:

Pr[Σ_{j=1}^{q} Y_{j,1} ≥ t] = Σ_{p=0}^{q−1} (1/p!) e^{−µ_2 t} (µ_2 t)^p. (43)

Now we can write

Pr[S_i ≥ t] = Σ_{q=1}^{∞} Pr[N_1 = q] Pr[Σ_{j=1}^{q} Y_{j,1} ≥ t]
= Σ_{q=1}^{∞} (1 − ɛ)ɛ^{q−1} Σ_{p=0}^{q−1} (1/p!) e^{−µ_2 t} (µ_2 t)^p
= e^{−µ_2 t} Σ_{p=0}^{∞} (1/p!) (µ_2 t)^p Σ_{q=p+1}^{∞} (1 − ɛ)ɛ^{q−1}
= e^{−µ_2 t} Σ_{p=0}^{∞} (1/p!) (ɛµ_2 t)^p
= e^{−(1−ɛ)µ_2 t},

which completes the proof.

APPENDIX B
PROOF OF THEOREM 2

A. Lower bound

We first prove the following proposition, which is illustrated in Fig. 2.

Proposition 1.
For any two sequences {a_i} and {b_i},

kth min_{i∈[n]} (a_i + b_i) ≥ max_{i∈[k]} (a_{(k−i+1)} + b_{(i)}). (44)

Proof: Define the ordering vector π ∈ Π, where Π is the set of permutations of [n], i.e.,

Π = {π = [π_1, ..., π_n] : π_i ∈ [n] ∀i ∈ [n], π_i ≠ π_j for i ≠ j}. (45)

Note that the kth smallest value out of n can always be rewritten as the maximum of the k smallest values. Define π =

Fig. 2: Illustration of Proposition 1.

[π_1, ..., π_n] ∈ Π to make the set of values {a_{π_i} + b_{π_i}}_{i=1}^{k} the k smallest among {a_i + b_i}_{i=1}^{n}. In addition, define π^1, π^2 ∈ Π to make a_{π_i} = a_{(π_i^1)} and b_{π_i} = b_{(π_i^2)} be satisfied ∀i ∈ [k], where π^1 = [π_1^1, ..., π_n^1] and π^2 = [π_1^2, ..., π_n^2]. Then, kth min (a_i + b_i) can be rewritten as

kth min_{i∈[n]} (a_i + b_i) = max_{i∈[k]} (a_{π_i} + b_{π_i}) (46)
= max_{i∈[k]} (a_{(π_i^1)} + b_{(π_i^2)}). (47)

Now define a new set of ordering vectors Π̃ ⊂ Π as Π̃ = {π ∈ Π : π_i ∈ [k] ∀i ∈ [k]}. Then we can always construct π̃^1, π̃^2 ∈ Π̃ such that π̃_i^1 ≤ π_i^1 and π̃_i^2 ≤ π_i^2 for all i ∈ [k], which is easy to prove. Then, by adequately selecting π̃ ∈ Π̃, (47) is lower bounded as

max_{i∈[k]} (a_{(π_i^1)} + b_{(π_i^2)}) ≥ max_{i∈[k]} (a_{(π̃_i^1)} + b_{(π̃_i^2)}) (48)
= max_{i∈[k]} (a_{(π̃_i)} + b_{(i)}). (49)

Equation (48) implies that a lower bound on max_{i∈[k]} (a_{(π_i^1)} + b_{(π_i^2)}) can be obtained by properly selecting the pairs {a_{(π̃_i^1)} + b_{(π̃_i^2)}}_{i=1}^{k} all in the left region of Fig. 2 and taking the maximum. This value max_{i∈[k]} (a_{(π̃_i^1)} + b_{(π̃_i^2)}) can be simply rewritten as (49). Now we want to show that for any π̃ ∈ Π̃, (49) cannot be smaller than max_{i∈[k]} (a_{(k−i+1)} + b_{(i)}), which is the lower bound illustrated in Fig. 2. Define L as

L = max_{i∈[k]} (a_{(k−i+1)} + b_{(i)}) = a_{(k−j+1)} + b_{(j)} (50)

for some j ∈ [k]. Suppose there exist L′ and an ordering vector π′ = [π′_1, ..., π′_n] ∈ Π̃ such that

max_{i∈[k]} (a_{(π′_i)} + b_{(i)}) = L′ < L. (51)

We will show that the assumption (51) leads to a contradiction, which completes the proof. By (51), a_{(π′_j)} + b_{(j)} < a_{(k−j+1)} + b_{(j)} should be satisfied for all j ∈ [k]. This implies π′_j < k − j + 1 for all j ∈ [k] \ [j − 1], which is a contradiction since a one-to-one mapping between j and π′_j cannot be constructed for j ∈ [k].

Since Pr[X_i ≥ t] = e^{−µ_1 t} and Pr[S_i ≥ t] = e^{−(1−ɛ)µ_2 t} for k = m, we have

E[X_{(i)}] = (H_n − H_{n−i}) / µ_1, (52)
E[S_{(i)}] = (H_n − H_{n−i}) / ((1 − ɛ)µ_2), (53)

for all i ∈ [n].
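As a quick numerical sanity check (ours, not part of the paper), the following Python sketch simulates the k = m model: X_i ~ Exp(µ_1), and S_i generated by retransmitting an Exp(µ_2) attempt until success, so that Lemma 1 gives S_i ~ Exp((1 − ɛ)µ_2). The sample means of the order statistics should then match (52) and (53).

```python
import random

def H(n):
    """n-th harmonic number, with H_0 = 0."""
    return sum(1.0 / j for j in range(1, n + 1))

def comm_time(eps, mu2, rng):
    """Total transmission time of one packet: a Geometric(1 - eps) number
    of Exp(mu2) attempts. By Lemma 1 this is Exp((1 - eps) * mu2)."""
    t = rng.expovariate(mu2)
    while rng.random() < eps:        # erasure: retransmit
        t += rng.expovariate(mu2)
    return t

rng = random.Random(0)
n, i, mu1, mu2, eps, trials = 10, 4, 1.0, 5.0, 0.3, 20000

x_mean = s_mean = 0.0
for _ in range(trials):
    X = sorted(rng.expovariate(mu1) for _ in range(n))
    S = sorted(comm_time(eps, mu2, rng) for _ in range(n))
    x_mean += X[i - 1]               # i-th order statistic X_(i)
    s_mean += S[i - 1]               # i-th order statistic S_(i)
x_mean /= trials
s_mean /= trials

# Closed forms (52) and (53) to compare against:
ex = (H(n) - H(n - i)) / mu1
es = (H(n) - H(n - i)) / ((1 - eps) * mu2)
```

With these parameters the empirical means of X_(4) and S_(4) land within Monte Carlo error of `ex` and `es`.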
Then, the lower bound is derived as

E[T] = E[kth min_{i∈[n]} (X_i + S_i)]
(a) ≥ E[max_{i∈[k]} (X_{(k−i+1)} + S_{(i)})]
(b) ≥ max_{i∈[k]} ( (H_n − H_{n−k+i−1}) / µ_1 + (H_n − H_{n−i}) / ((1 − ɛ)µ_2) )
(c) = (H_n − H_{n−k}) / µ_1 + (H_n − H_{n−1}) / ((1 − ɛ)µ_2), if µ_1 ≤ (1 − ɛ)µ_2,
    (H_n − H_{n−1}) / µ_1 + (H_n − H_{n−k}) / ((1 − ɛ)µ_2), otherwise,

where (a) holds from Proposition 1 and (b) holds from Jensen's inequality. Now we prove (c). Let

f_L(i) = (H_n − H_{n−k+i−1}) / µ_1 + (H_n − H_{n−i}) / ((1 − ɛ)µ_2). (54)

Assuming n is large and k is linear in n, we can approximate H_n ≈ log(n), H_{n−i} ≈ log(n − i), and H_{n−k+i−1} ≈ log(n − k + i − 1). Then,

∂²f_L / ∂i² = 1 / ((n − k + i − 1)² µ_1) + 1 / ((1 − ɛ)(n − i)² µ_2) > 0, (55)

which implies that f_L is a convex function of i. Since i ∈ [k], the integer i maximizing f_L is either i = 1 or i = k. Thus, (c) is satisfied, which completes the proof.

B. Upper bound

For the upper bound, we use the following proposition, which is illustrated in Fig. 3.

Proposition 2. For any two sequences {a_i} and {b_i},

kth min_{i∈[n]} (a_i + b_i) ≤ min_{i∈[n]\[k−1]} (a_{(n+k−i)} + b_{(i)}). (56)

Proof: We use a similar idea as in the proof of Proposition 1. Consider the set of ordering vectors Π defined in (45). Note that the kth smallest value out of n can always be rewritten as the minimum of the n − k + 1 largest values. We define π = [π_1, ..., π_n] ∈ Π to make the set of values {a_{π_i} + b_{π_i}}_{i=k}^{n} the n − k + 1 largest among {a_i + b_i}_{i=1}^{n}. In addition, define π^1, π^2 ∈ Π to make a_{π_i} = a_{(π_i^1)} and b_{π_i} = b_{(π_i^2)} be satisfied ∀i ∈ [n] \ [k − 1], where π^1 = [π_1^1, ..., π_n^1] and π^2 = [π_1^2, ..., π_n^2]. Then, kth min (a_i + b_i) can be rewritten as

kth min_{i∈[n]} (a_i + b_i) = min_{i∈[n]\[k−1]} (a_{π_i} + b_{π_i}) (57)
= min_{i∈[n]\[k−1]} (a_{(π_i^1)} + b_{(π_i^2)}). (58)

Now define another set of ordering vectors Π̃ ⊂ Π as Π̃ = {π ∈ Π : π_i ∈ [n] \ [k − 1] ∀i ∈ [n] \ [k − 1]}. Then we can

Fig. 3: Illustration of Proposition 2.

always construct π̃^1, π̃^2 ∈ Π̃ such that π̃_i^1 ≥ π_i^1 and π̃_i^2 ≥ π_i^2 for all i ∈ [n] \ [k − 1], which can be proved easily. Then, by adequately selecting π̃ ∈ Π̃, (58) is upper bounded as

min_{i∈[n]\[k−1]} (a_{(π_i^1)} + b_{(π_i^2)}) ≤ min_{i∈[n]\[k−1]} (a_{(π̃_i^1)} + b_{(π̃_i^2)}) (59)
= min_{i∈[n]\[k−1]} (a_{(π̃_i)} + b_{(i)}). (60)

Equation (59) implies that an upper bound on min_{i∈[n]\[k−1]} (a_{(π_i^1)} + b_{(π_i^2)}) can be obtained by properly selecting the n − k + 1 pairs {a_{(π̃_i^1)} + b_{(π̃_i^2)}}_{i=k}^{n} all in the right region of Fig. 3 and taking the minimum. This value min_{i∈[n]\[k−1]} (a_{(π̃_i^1)} + b_{(π̃_i^2)}) can be simply rewritten as (60). Now we want to show that for any π̃ ∈ Π̃, (60) cannot be larger than min_{i∈[n]\[k−1]} (a_{(n+k−i)} + b_{(i)}), which is the upper bound illustrated in Fig. 3. Define U as

U = min_{i∈[n]\[k−1]} (a_{(n+k−i)} + b_{(i)}) = a_{(n+k−j)} + b_{(j)} (61)

for some j ∈ [n] \ [k − 1]. Suppose there exist U′ and an ordering vector π′ = [π′_1, ..., π′_n] ∈ Π̃ such that

min_{i∈[n]\[k−1]} (a_{(π′_i)} + b_{(i)}) = U′ > U. (62)

We show that the assumption (62) leads to a contradiction. By (62), a_{(π′_j)} + b_{(j)} > a_{(n+k−j)} + b_{(j)} should be satisfied for all j ∈ [n] \ [k − 1]. This implies π′_j > n + k − j for all j ∈ [j] \ [k − 1], which is a contradiction since a one-to-one mapping between j and π′_j cannot be constructed for j ∈ [n] \ [k − 1].

Now, from (52) and (53), the upper bound is derived as

E[T] = E[kth min_{i∈[n]} (X_i + S_i)] (63)
(d) ≤ E[min_{i∈[n]\[k−1]} (X_{(n+k−i)} + S_{(i)})] (64)
(e) ≤ min_{i∈[n]\[k−1]} ( (H_n − H_{i−k}) / µ_1 + (H_n − H_{n−i}) / ((1 − ɛ)µ_2) ), (65)

where (d) holds from Proposition 2 and (e) holds from Jensen's inequality.

APPENDIX C
PROOF OF THEOREM 3

Since Pr[X_i ≥ t] = e^{−(k/m)µ_1 t} for general k, we have

E[X_{(i)}] = m(H_n − H_{n−i}) / (kµ_1) (66)

for all i ∈ [n]. For the communication time S_i, we can first see from Lemma 1 that the random variable Σ_{j=1}^{N_r} Y_{j,r} follows an exponential distribution with rate (1 − ɛ)µ_2 for all r ∈ [m/k]. Therefore, S_i = Σ_{r=1}^{m/k} Σ_{j=1}^{N_r} Y_{j,r}, which is the sum of m/k independent exponential random variables with rate (1 − ɛ)µ_2, follows the Erlang distribution with shape parameter m/k and rate parameter (1 − ɛ)µ_2:

Pr[S_i ≥ t] = Σ_{p=0}^{m/k−1} (1/p!) e^{−(1−ɛ)µ_2 t} [(1 − ɛ)µ_2 t]^p.
(67)

By the property of the Erlang distribution, S_i can be rewritten as

S_i = G_i / ((1 − ɛ)µ_2), (68)

where G_i is an Erlang random variable with shape parameter m/k and rate parameter 1. We take this approach since a closed-form expression for E[G_{(i)}] exists, as given in (24). Now the lower bound is derived as

E[T] = E[kth min_{i∈[n]} (X_i + S_i)]
(a) ≥ E[max_{i∈[k]} (X_{(k−i+1)} + S_{(i)})]
(b) ≥ max_{i∈[k]} ( m(H_n − H_{n−k+i−1}) / (kµ_1) + E[G_{(i)}] / ((1 − ɛ)µ_2) ),

where (a) holds from Proposition 1 and (b) holds from Jensen's inequality. For the upper bound, we write

E[T] = E[kth min_{i∈[n]} (X_i + S_i)] (69)
(d) ≤ E[min_{i∈[n]\[k−1]} (X_{(n+k−i)} + S_{(i)})] (70)
(e) ≤ min_{i∈[n]\[k−1]} ( m(H_n − H_{i−k}) / (kµ_1) + E[G_{(i)}] / ((1 − ɛ)µ_2) ), (71)

where (d) holds from Proposition 2 and (e) holds from Jensen's inequality.

REFERENCES

[1] J. Dean and S. Ghemawat, "MapReduce: simplified data processing on large clusters," Communications of the ACM, vol. 51, no. 1, pp. 107-113, 2008.
[2] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, "Spark: cluster computing with working sets," in Proc. 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud), 2010.
[3] K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran, "Speeding up distributed machine learning using codes," IEEE Trans. Inf. Theory, vol. 64, no. 3, pp. 1514-1529, Mar. 2018.
[4] K. Lee, C. Suh, and K. Ramchandran, "High-dimensional coded matrix multiplication," in Proc. IEEE ISIT, Jun. 2017.
[5] Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, "Polynomial codes: an optimal design for high-dimensional coded matrix multiplication," in Proc. Adv. Neural Inf. Process. Syst. (NIPS), Long Beach, CA, USA, Dec. 2017.

[6] S. Dutta, V. Cadambe, and P. Grover, "Short-Dot: computing large linear transforms distributedly using coded short dot products," in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2016.
[7] R. Tandon, Q. Lei, A. G. Dimakis, and N. Karampatziakis, "Gradient coding: avoiding stragglers in distributed learning," in Proc. ICML, 2017.
[8] L. Chen et al., "DRACO: Byzantine-resilient distributed training via redundant gradients," in Proc. ICML, 2018.
[9] M. Ye and E. Abbe, "Communication-computation efficient gradient coding," in Proc. ICML, 2018.
[10] S. Dutta, V. Cadambe, and P. Grover, "Coded convolution for parallel and distributed computing within a deadline," in Proc. IEEE ISIT, Jun. 2017.
[11] Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, "Coded Fourier transform," in Proc. Allerton Conf., 2017.
[12] N. Ferdinand and S. C. Draper, "Hierarchical coded computation," in Proc. IEEE ISIT, Jun. 2018.
[13] S. Kiani, N. Ferdinand, and S. C. Draper, "Exploitation of stragglers in coded computation," in Proc. IEEE ISIT, Jun. 2018.
[14] A. Reisizadeh, S. Prakash, R. Pedarsani, and S. Avestimehr, "Coded computation over heterogeneous clusters," in Proc. IEEE ISIT, 2017.
[15] H. Park, K. Lee, J. Sohn, C. Suh, and J. Moon, "Hierarchical coding for distributed computing," in Proc. IEEE ISIT, Jun. 2018.
[16] A. Reisizadeh and R. Pedarsani, "Latency analysis of coded computation schemes over wireless networks," in Proc. Allerton Conf., 2017.
[17] S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, "Coded MapReduce," in Proc. Allerton Conf., 2015.
[18] S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, "A fundamental tradeoff between computation and communication in distributed computing," IEEE Trans. Inf. Theory, vol. 64, no. 1, pp. 109-128, 2018.
[19] M. Attia and R. Tandon, "Information theoretic limits of data shuffling for distributed learning," in Proc. IEEE Globecom, 2016.
[20] S. Li, M. A. Maddah-Ali, and A. S. Avestimehr, "A unified coding framework for distributed computing with straggler servers," in Proc. IEEE Globecom Workshops, 2016.
[21] S. Li, Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, "A scalable framework for wireless distributed computing," IEEE/ACM Transactions on Networking, vol. 25, no. 5, 2017.
[22] M. Attia and R. Tandon, "Information theoretic limits of data shuffling for distributed learning," arXiv preprint arXiv:1609.05181, 2016.
[23] P. Gill, N. Jain, and N. Nagappan, "Understanding network failures in data centers: measurement, analysis, and implications," in ACM SIGCOMM Comput. Commun. Rev., vol. 41, no. 4, 2011.
[24] M. Gerami, M. Xiao, J. Li, C. Fischione, and Z. Lin, "Repair for distributed storage systems with packet erasure channels and dedicated nodes for repair," IEEE Trans. Commun., vol. 64, no. 4, 2016.
[25] D. Datla, X. Chen, T. Tsou, S. Raghunandan, S. S. Hasan, J. H. Reed, C. B. Dietrich, T. Bose, B. Fette, and J.-H. Kim, "Wireless distributed computing: a survey of research challenges," IEEE Communications Magazine, vol. 50, no. 1, Jan. 2012.
[26] P. Bremaud, Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer Science & Business Media, 2013, vol. 31.
[27] S. S. Gupta, "Order statistics from the gamma distribution," Technometrics, vol. 2, no. 2, 1960.
[28] R. Bitar, P. Parag, and S. El Rouayheb, "Minimizing latency for secure distributed computing," in Proc. IEEE ISIT, 2017.


More information

Wideband Fading Channel Capacity with Training and Partial Feedback

Wideband Fading Channel Capacity with Training and Partial Feedback Wideband Fading Channel Capacity with Training and Partial Feedback Manish Agarwal, Michael L. Honig ECE Department, Northwestern University 145 Sheridan Road, Evanston, IL 6008 USA {m-agarwal,mh}@northwestern.edu

More information

Gaussian Relay Channel Capacity to Within a Fixed Number of Bits

Gaussian Relay Channel Capacity to Within a Fixed Number of Bits Gaussian Relay Channel Capacity to Within a Fixed Number of Bits Woohyuk Chang, Sae-Young Chung, and Yong H. Lee arxiv:.565v [cs.it] 23 Nov 2 Abstract In this paper, we show that the capacity of the threenode

More information

Cooperative Communication with Feedback via Stochastic Approximation

Cooperative Communication with Feedback via Stochastic Approximation Cooperative Communication with Feedback via Stochastic Approximation Utsaw Kumar J Nicholas Laneman and Vijay Gupta Department of Electrical Engineering University of Notre Dame Email: {ukumar jnl vgupta}@ndedu

More information

arxiv: v1 [cs.it] 16 Jan 2013

arxiv: v1 [cs.it] 16 Jan 2013 XORing Elephants: Novel Erasure Codes for Big Data arxiv:13013791v1 [csit] 16 Jan 2013 ABSTRACT Maheswaran Sathiamoorthy University of Southern California msathiam@uscedu Alexandros G Dimais University

More information

12.4 Known Channel (Water-Filling Solution)

12.4 Known Channel (Water-Filling Solution) ECEn 665: Antennas and Propagation for Wireless Communications 54 2.4 Known Channel (Water-Filling Solution) The channel scenarios we have looed at above represent special cases for which the capacity

More information

Guesswork Subject to a Total Entropy Budget

Guesswork Subject to a Total Entropy Budget Guesswork Subject to a Total Entropy Budget Arman Rezaee, Ahmad Beirami, Ali Makhdoumi, Muriel Médard, and Ken Duffy arxiv:72.09082v [cs.it] 25 Dec 207 Abstract We consider an abstraction of computational

More information

Throughput-Delay Analysis of Random Linear Network Coding for Wireless Broadcasting

Throughput-Delay Analysis of Random Linear Network Coding for Wireless Broadcasting Throughput-Delay Analysis of Random Linear Network Coding for Wireless Broadcasting Swapna B.T., Atilla Eryilmaz, and Ness B. Shroff Departments of ECE and CSE The Ohio State University Columbus, OH 43210

More information

Information Theory Meets Game Theory on The Interference Channel

Information Theory Meets Game Theory on The Interference Channel Information Theory Meets Game Theory on The Interference Channel Randall A. Berry Dept. of EECS Northwestern University e-mail: rberry@eecs.northwestern.edu David N. C. Tse Wireless Foundations University

More information

Linear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channels

Linear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channels 1 Linear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channels Shunsuke Horii, Toshiyasu Matsushima, and Shigeichi Hirasawa arxiv:1508.01640v2 [cs.it] 29 Sep 2015 Abstract In this paper,

More information

Markov Chain Model for ALOHA protocol

Markov Chain Model for ALOHA protocol Markov Chain Model for ALOHA protocol Laila Daniel and Krishnan Narayanan April 22, 2012 Outline of the talk A Markov chain (MC) model for Slotted ALOHA Basic properties of Discrete-time Markov Chain Stability

More information

Coding problems for memory and storage applications

Coding problems for memory and storage applications .. Coding problems for memory and storage applications Alexander Barg University of Maryland January 27, 2015 A. Barg (UMD) Coding for memory and storage January 27, 2015 1 / 73 Codes with locality Introduction:

More information

Sector-Disk Codes and Partial MDS Codes with up to Three Global Parities

Sector-Disk Codes and Partial MDS Codes with up to Three Global Parities Sector-Disk Codes and Partial MDS Codes with up to Three Global Parities Junyu Chen Department of Information Engineering The Chinese University of Hong Kong Email: cj0@alumniiecuhkeduhk Kenneth W Shum

More information

Coded Caching for Files with Distinct File Sizes

Coded Caching for Files with Distinct File Sizes Coded Caching for Files with Distinct File Sizes Jinbei Zhang, Xiaojun Lin, Chih-Chun Wang, Xinbing Wang Shanghai Jiao Tong University Purdue University The Importance of Caching Server cache cache cache

More information

STATE AND OUTPUT FEEDBACK CONTROL IN MODEL-BASED NETWORKED CONTROL SYSTEMS

STATE AND OUTPUT FEEDBACK CONTROL IN MODEL-BASED NETWORKED CONTROL SYSTEMS SAE AND OUPU FEEDBACK CONROL IN MODEL-BASED NEWORKED CONROL SYSEMS Luis A Montestruque, Panos J Antsalis Abstract In this paper the control of a continuous linear plant where the sensor is connected to

More information

IN this paper, we consider the capacity of sticky channels, a

IN this paper, we consider the capacity of sticky channels, a 72 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 1, JANUARY 2008 Capacity Bounds for Sticky Channels Michael Mitzenmacher, Member, IEEE Abstract The capacity of sticky channels, a subclass of insertion

More information

Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback

Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback Sum Capacity of General Deterministic Interference Channel with Channel Output Feedback Achaleshwar Sahai Department of ECE, Rice University, Houston, TX 775. as27@rice.edu Vaneet Aggarwal Department of

More information

EECS 126 Probability and Random Processes University of California, Berkeley: Fall 2014 Kannan Ramchandran November 13, 2014.

EECS 126 Probability and Random Processes University of California, Berkeley: Fall 2014 Kannan Ramchandran November 13, 2014. EECS 126 Probability and Random Processes University of California, Berkeley: Fall 2014 Kannan Ramchandran November 13, 2014 Midterm Exam 2 Last name First name SID Rules. DO NOT open the exam until instructed

More information

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying

A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying Ahmad Abu Al Haija, and Mai Vu, Department of Electrical and Computer Engineering McGill University Montreal, QC H3A A7 Emails: ahmadabualhaija@mailmcgillca,

More information

Information in Aloha Networks

Information in Aloha Networks Achieving Proportional Fairness using Local Information in Aloha Networks Koushik Kar, Saswati Sarkar, Leandros Tassiulas Abstract We address the problem of attaining proportionally fair rates using Aloha

More information

Proceedings of the 2016 Winter Simulation Conference T. M. K. Roeder, P. I. Frazier, R. Szechtman, E. Zhou, T. Huschka, and S. E. Chick, eds.

Proceedings of the 2016 Winter Simulation Conference T. M. K. Roeder, P. I. Frazier, R. Szechtman, E. Zhou, T. Huschka, and S. E. Chick, eds. Proceedings of the 2016 Winter Simulation Conference T. M. K. Roeder, P. I. Frazier, R. Szechtman, E. Zhou, T. Huscha, and S. E. Chic, eds. OPTIMAL COMPUTING BUDGET ALLOCATION WITH EXPONENTIAL UNDERLYING

More information

Giuseppe Bianchi, Ilenia Tinnirello

Giuseppe Bianchi, Ilenia Tinnirello Capacity of WLAN Networs Summary Ł Ł Ł Ł Arbitrary networ capacity [Gupta & Kumar The Capacity of Wireless Networs ] Ł! Ł "! Receiver Model Ł Ł # Ł $%&% Ł $% '( * &%* r (1+ r Ł + 1 / n 1 / n log n Area

More information

2DI90 Probability & Statistics. 2DI90 Chapter 4 of MR

2DI90 Probability & Statistics. 2DI90 Chapter 4 of MR 2DI90 Probability & Statistics 2DI90 Chapter 4 of MR Recap - Random Variables ( 2.8 MR)! Example: X is the random variable corresponding to the temperature of the room at time t. x is the measured temperature

More information

MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing

MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing MAT 585: Johnson-Lindenstrauss, Group testing, and Compressed Sensing Afonso S. Bandeira April 9, 2015 1 The Johnson-Lindenstrauss Lemma Suppose one has n points, X = {x 1,..., x n }, in R d with d very

More information

Binary MDS Array Codes with Optimal Repair

Binary MDS Array Codes with Optimal Repair IEEE TRANSACTIONS ON INFORMATION THEORY 1 Binary MDS Array Codes with Optimal Repair Hanxu Hou, Member, IEEE, Patric P. C. Lee, Senior Member, IEEE Abstract arxiv:1809.04380v1 [cs.it] 12 Sep 2018 Consider

More information

Coding for loss tolerant systems

Coding for loss tolerant systems Coding for loss tolerant systems Workshop APRETAF, 22 janvier 2009 Mathieu Cunche, Vincent Roca INRIA, équipe Planète INRIA Rhône-Alpes Mathieu Cunche, Vincent Roca The erasure channel Erasure codes Reed-Solomon

More information

Weakly Secure Data Exchange with Generalized Reed Solomon Codes

Weakly Secure Data Exchange with Generalized Reed Solomon Codes Weakly Secure Data Exchange with Generalized Reed Solomon Codes Muxi Yan, Alex Sprintson, and Igor Zelenko Department of Electrical and Computer Engineering, Texas A&M University Department of Mathematics,

More information

Coded Caching with Low Subpacketization Levels

Coded Caching with Low Subpacketization Levels Electrical and Computer Engineering Conference Papers, Posters and Presentations Electrical and Computer Engineering 06 Coded Caching with Low Subpacetization Levels Li Tang Iowa State University, litang@iastateedu

More information

2012 IEEE International Symposium on Information Theory Proceedings

2012 IEEE International Symposium on Information Theory Proceedings Decoding of Cyclic Codes over Symbol-Pair Read Channels Eitan Yaakobi, Jehoshua Bruck, and Paul H Siegel Electrical Engineering Department, California Institute of Technology, Pasadena, CA 9115, USA Electrical

More information

Slotted Aloha for Networked Base Stations with Spatial and Temporal Diversity

Slotted Aloha for Networked Base Stations with Spatial and Temporal Diversity Slotted Aloha for Networked Base Stations with Spatial and Temporal Diversity Dušan Jakovetić, Dragana Bajović BioSense Center, University of Novi Sad, Novi Sad, Serbia Email: {djakovet, dbajovic}@uns.ac.rs

More information

Cooperative HARQ with Poisson Interference and Opportunistic Routing

Cooperative HARQ with Poisson Interference and Opportunistic Routing Cooperative HARQ with Poisson Interference and Opportunistic Routing Amogh Rajanna & Mostafa Kaveh Department of Electrical and Computer Engineering University of Minnesota, Minneapolis, MN USA. Outline

More information

Joint FEC Encoder and Linear Precoder Design for MIMO Systems with Antenna Correlation

Joint FEC Encoder and Linear Precoder Design for MIMO Systems with Antenna Correlation Joint FEC Encoder and Linear Precoder Design for MIMO Systems with Antenna Correlation Chongbin Xu, Peng Wang, Zhonghao Zhang, and Li Ping City University of Hong Kong 1 Outline Background Mutual Information

More information

Performance Analysis of a System with Bursty Traffic and Adjustable Transmission Times

Performance Analysis of a System with Bursty Traffic and Adjustable Transmission Times Performance Analysis of a System with Bursty Traffic and Adjustable Transmission Times Nikolaos Pappas Department of Science and Technology, Linköping University, Sweden E-mail: nikolaospappas@liuse arxiv:1809085v1

More information

On the Capacity of Wireless 1-Hop Intersession Network Coding A Broadcast Packet Erasure Channel Approach

On the Capacity of Wireless 1-Hop Intersession Network Coding A Broadcast Packet Erasure Channel Approach 1 On the Capacity of Wireless 1-Hop Intersession Networ Coding A Broadcast Pacet Erasure Channel Approach Chih-Chun Wang, Member, IEEE Abstract Motivated by practical wireless networ protocols, this paper

More information

Regenerating Codes and Locally Recoverable. Codes for Distributed Storage Systems

Regenerating Codes and Locally Recoverable. Codes for Distributed Storage Systems Regenerating Codes and Locally Recoverable 1 Codes for Distributed Storage Systems Yongjune Kim and Yaoqing Yang Abstract We survey the recent results on applying error control coding to distributed storage

More information

arxiv: v2 [cs.it] 17 Jan 2018

arxiv: v2 [cs.it] 17 Jan 2018 K-means Algorithm over Compressed Binary Data arxiv:1701.03403v2 [cs.it] 17 Jan 2018 Elsa Dupraz IMT Atlantique, Lab-STICC, UBL, Brest, France Abstract We consider a networ of binary-valued sensors with

More information

arxiv: v1 [cs.dc] 17 Apr 2018

arxiv: v1 [cs.dc] 17 Apr 2018 Communication-Aware Scheduling of Serial Tass for Dispersed Computing Chien-Sheng Yang University of Southern California chienshy@uscedu Ramtin Pedarsani University of California, Santa Barbara ramtin@eceucsbedu

More information

The Satellite - A marginal allocation approach

The Satellite - A marginal allocation approach The Satellite - A marginal allocation approach Krister Svanberg and Per Enqvist Optimization and Systems Theory, KTH, Stocholm, Sweden. {rille},{penqvist}@math.th.se 1 Minimizing an integer-convex function

More information

Distributed Source Coding Using LDPC Codes

Distributed Source Coding Using LDPC Codes Distributed Source Coding Using LDPC Codes Telecommunications Laboratory Alex Balatsoukas-Stimming Technical University of Crete May 29, 2010 Telecommunications Laboratory (TUC) Distributed Source Coding

More information

Network Coding for Computing

Network Coding for Computing Networ Coding for Computing Rathinaumar Appuswamy, Massimo Franceschetti, Nihil Karamchandani, and Kenneth Zeger Abstract The following networ computation problem is considered A set of source nodes in

More information

Resource Allocation for Video Streaming in Wireless Environment

Resource Allocation for Video Streaming in Wireless Environment Resource Allocation for Video Streaming in Wireless Environment Shahrokh Valaee and Jean-Charles Gregoire Abstract This paper focuses on the development of a new resource allocation scheme for video streaming

More information

[Fig. 10. The deadline failure probability as a function of the number of slots in a frame (10 to 16), plotted on a log scale for packet erasure probabilities P_e = 0.1, 0.01, and 0.001.]