Lloyd-Max Quantization of Correlated Processes: How to Obtain Gains by Receiver-Sided Time-Variant Codebooks


Sai Han and Tim Fingscheidt
Institute for Communications Technology, Technische Universität Braunschweig
Schleinitzstr. 22, Braunschweig, Germany

Abstract — Scalar non-uniform Lloyd-Max quantization (LMQ) is optimized for the minimum mean squared error satisfying the centroid condition, stating that the reconstruction level is the centroid of the area of the signal probability density function (PDF) in the respective quantization interval. For a given signal PDF, reconstruction levels are therefore time-invariant. Without predictive means at the encoder, scalar LMQ does not exploit source correlation at all. With the purpose of improving LMQ performance for correlated processes, we propose a novel scalar decoding approach employing a receiver-sided predictor inside a feedback loop. Based on the standard LMQ decision levels and utilizing the prediction error PDF shifted by the predictor output, a receiver-sided time-variant quantization codebook can be generated according to the centroid condition. Moreover, we give an analytical derivation of the variance of the prediction error. Simulations over an additive white Gaussian noise (AWGN) channel in error-free and error-prone transmission conditions show that the proposed approach outperforms the standard LMQ by about 1.0 dB in terms of signal-to-noise ratio (SNR), especially for low rate quantization and highly correlated source processes.

I. INTRODUCTION

The Lloyd-Max quantizer (LMQ) is a well-known scalar non-uniform quantizer optimized for the minimum mean squared error (MMSE) [1, 2]. Quantization codebook entries (i.e., reconstruction levels) turn out to be the centroid, or center of mass, of the signal probability density function (PDF) in the related quantization intervals, often called the centroid condition [3].
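The centroid condition can be turned into the classical Lloyd iteration for designing such a quantizer: alternate between placing decision levels at the midpoints of adjacent reconstruction levels and placing reconstruction levels at the interval centroids. A minimal sketch for a zero-mean, unit-variance Gaussian source follows (the function names are ours, not from the paper):

```python
import math

def npdf(x):
    """Standard normal PDF."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def ncdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def centroid(a, b):
    """Centroid of the unit-variance Gaussian PDF over [a, b]."""
    return (npdf(a) - npdf(b)) / (ncdf(b) - ncdf(a))

def lloyd_max(M, iters=200):
    """Design a 2^M-level Lloyd-Max quantizer for N(0, 1):
    alternate midpoint decision levels and centroid
    reconstruction levels until convergence."""
    levels = [-1.0 + (2 * i + 1) / 2.0 ** M for i in range(2 ** M)]
    for _ in range(iters):
        d = [-math.inf] + [(levels[i] + levels[i + 1]) / 2.0
                           for i in range(len(levels) - 1)] + [math.inf]
        levels = [centroid(d[i], d[i + 1]) for i in range(len(levels))]
    return d, levels
```

For M = 1 this converges to the reconstruction levels ±√(2/π) ≈ ±0.798 with the single decision level at 0; for M = 2 it reproduces the well-known 4-level Gaussian quantizer with reconstruction levels ≈ ±0.4528 and ±1.5104.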
A direct consequence is that once the signal PDF is fixed, the reconstruction levels are also time-invariant. Three categories of quantization approaches are identified in [4]: (1) scalar quantization (SQ) [1, 2] or vector quantization (VQ) [5]; (2) quantizers with fixed-rate coding having the same codeword length [6-8], or quantizers with variable-rate coding having variable codeword length [9, 10]; (3) memoryless quantizers having a fixed codebook [1, 2, 11], or quantizers with memory utilizing the source correlation with either a time-invariant or a time-variant codebook [12-14]. It is well known that considerable redundancy is observed in speech and images [4], with redundancy referring to statistical source correlation. Efficient quantization of correlated processes asks for VQ [5] or SQ with memory utilizing the statistical properties of the input signals (e.g., predictive quantization [15, 16] and transform coding [17]). Our study focuses on fixed-rate scalar quantization using adaptive codebooks, where the transmitter is assumed to be given and memoryless, but the receiver may be freely designed, e.g., by including memory. Classical predictive quantization, typically employing the same predictor both at the transmitter and the receiver side, is widely used in differential pulse code modulation (DPCM) [13, 14] and adaptive differential pulse code modulation (ADPCM) [18-20]. In the transmitter, the prediction error, which is the difference between the original signal and its predicted signal, is quantized. The transmitter-sided predictive estimation can either depend on the past original/unquantized signal (open-loop predictive quantization), or on the past reconstructed/quantized signal (closed-loop predictive quantization) with the quantizer being inside the prediction feedback loop.
If the source process is an N-th order autoregressive process (AR(N)) with zero-mean Gaussian innovation, the optimal transmitter-sided predictor coefficients in open-loop predictive quantization are the same as the AR(N) source process coefficients. As a result, the variance of the transmitter-sided prediction error is identical to the variance of the innovation. However, the optimal predictor coefficients and the optimal prediction error variance in a closed-loop scheme are not supposed to be the same as the values in the open-loop scheme [3, 21]. As the prediction error is the input to the quantizer, the scaling of the quantization codebook is influenced as well and differs between the two schemes. While negligible for high rate quantization in the prediction loop, the above effects become prominent for low rate quantization [3, 21].

In order to improve the performance of a given non-predictive scalar LMQ for correlated source processes, we propose a new decoding approach taking advantage of the source correlation from a Gaussian AR(N) process model. In contrast to predictive quantization, we assume a standard LMQ without any predictor in the encoder and employ a predictor in a feedback loop only in the decoder. In this approach, new reconstruction levels of the quantization codebook in the decoder can be generated on the basis of the prediction error PDF shifted by the predicted sample, resulting in a time-variant receiver-sided codebook. For similar reasons as outlined above, due to the feedback loop, the receiver-sided predictor coefficients and prediction error variance differ from the respective values of the Gaussian AR(N) source signal. In our previous work, a full numerical search for the optimal values of the predictor coefficient(s) and prediction error variance has been performed [22, 23]. In this paper, we present an analytical solution of the receiver-sided prediction error variance for an AR(1) source process. The receiver-sided predictor coefficient can either be taken from the AR(1) source process or it can be found by a numerical search. The proposed decoder can advantageously employ either hard decisions or soft decisions [22, 24]. We focus on hard-decision decoding in this paper. A particular advantage of this approach is its system-compatible deployment in the receiver, making the best of a given standard LMQ in the transmitter by utilizing the source correlation at the receiver.

The paper is structured as follows: Section II presents the novel approach including encoder and decoder. Section III provides the formulae to calculate the variance of the receiver-sided prediction error. Simulation results are discussed in Section IV. Conclusions are drawn in Section V.

II. TOWARDS A TIME-VARIANT LMQ CODEBOOK IN THE RECEIVER

A. Overview

Fig. 1: Block diagram of the transmission system with the newly proposed receiver.

The block diagram of the transmission system with our newly proposed receiver is depicted in Figure 1. A first order autoregressive process (AR(1)) is used as the source signal model, with the i.i.d. Gaussian innovation ẽ = (ẽ_0, ẽ_1, ..., ẽ_n, ...) having zero mean, variance σ_ẽ², and time index n ∈ ℕ_0. Therefore, the correlated source samples ũ = (ũ_0, ũ_1, ..., ũ_n, ...) fulfill ũ_n = ẽ_n + ρ·ũ_{n−1}, with the correlation coefficient ρ of the AR(1) process and ũ_0 = ẽ_0. The (unquantized) sample ũ_n can be quantized to u_n by an M-bit LMQ codebook and further represented by a corresponding quantization index i_n ∈ I = {0, 1, ..., 2^M − 1}.
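The AR(1) source model and the memoryless encoder described above can be sketched in a few lines of pure Python (the helper names are ours; the decision levels d are assumed to come from a standard LMQ design, padded with ±∞):

```python
import math
import random

def ar1_source(n, rho, sigma_e=1.0, seed=0):
    """Generate n samples of u~_n = e~_n + rho * u~_{n-1}, with u~_0 = e~_0
    and i.i.d. Gaussian innovation of standard deviation sigma_e."""
    rng = random.Random(seed)
    u, prev = [], 0.0
    for _ in range(n):
        prev = rng.gauss(0.0, sigma_e) + rho * prev
        u.append(prev)
    return u

def encode(u_n, d):
    """Standard (memoryless) LMQ encoder:
    return the index i with d[i] <= u_n < d[i+1]."""
    i = 0
    while u_n >= d[i + 1]:
        i += 1
    return i
```

With σ_ẽ = 1 the stationary sample variance is σ_ũ² = 1/(1 − ρ²), which the LMQ codebook has to be scaled to.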
In this paper, the channel model is described as an equivalent channel, which comprises binary phase-shift keying (BPSK) modulation over an additive white Gaussian noise (AWGN) channel without any channel coding. After bit mapping, the corresponding bits are transmitted over the equivalent channel and further transformed to a received quantization index î_n ∈ I. In the receiver, as outlined in Section II-C, the received sample û_n equals a new time-variant LMQ reconstruction level computed from the received index î_n and the decision levels of the standard LMQ.

B. Encoder

Fig. 2: PDF of the unquantized signal ũ_n and shifted PDF of the receiver-sided prediction error ê_n.

As can be seen from Figure 1, the standard LMQ is employed in the encoder. Therefore, according to the centroid condition [3], the LMQ reconstruction level

u^(i) = ∫_{d_i}^{d_{i+1}} ũ·pũ(ũ) dũ / ∫_{d_i}^{d_{i+1}} pũ(ũ) dũ    (1)

is the centroid of the region of the unquantized sample PDF pũ(ũ) in the i-th quantization interval [d_i, d_{i+1}] [12], with the decision levels d_i and d_{i+1}. The centroid is marked as a black dot in the upper plot of Figure 2. As can be seen in Figure 1, the correlated Gaussian AR(1) signal model comprises a transmitter-sided prediction, with predicted sample ũ⁺_n = a·ũ_{n−1} having transmitter-sided predictor coefficient a = ρ, and transmitter-sided prediction error ũ_n − ũ⁺_n = ẽ_n (equalling the innovation). If we are interested in the PDF of ũ_n = ẽ_n + ũ⁺_n given a known predicted sample ũ⁺_n at time index n, the signal PDF pũ(ũ_n) conditioned on a known deterministic value ũ⁺_n becomes the transmitter-sided prediction error PDF pẽ(ũ_n − ũ⁺_n) shifted by ũ⁺_n. Therefore, we can write

pũ(ũ_n | ũ⁺_n) = pẽ(ẽ_n = ũ_n − ũ⁺_n) = f(ũ_n),    (2)

with the shifted PDF pẽ(ẽ_n = ũ_n − ũ⁺_n) of the transmitter-sided prediction error being a function of ũ_n for any given ũ⁺_n.

C. Decoder

The newly proposed receiver for an AR(1) signal is depicted in Figure 1.
For correlated processes, after the previous sample û_{n−1} has been estimated by the receiver, the prediction of the current sample can be carried out by a first order predictor

û⁺_n = â·û_{n−1},    (3)

with the receiver-sided predictor coefficient â and û⁺_{n=0} = 0. We define a receiver-sided prediction error ê_n = ũ_n − û⁺_n. Replacing ũ⁺_n in (2) by û⁺_n, consequently renaming ẽ_n as ê_n, and thereby adapting (2) to the receiver, we obtain

pũ(ũ_n | û⁺_n) = pê(ê_n = ũ_n − û⁺_n) = f(ũ_n),    (4)

with pê(ê_n = ũ_n − û⁺_n) denoting the receiver-sided prediction error PDF pê(ê_n) shifted by the receiver-sided predictor output û⁺_n, sketched in the lower graph of Figure 2.

Moreover, for a fixed received quantization index i at time n, the reconstruction level u^(i) given a known predicted sample û⁺_n from (3) turns out to be (compare to (1))

u^(i)_n = ∫_{d_i}^{d_{i+1}} ũ_n·pũ(ũ_n | û⁺_n) dũ_n / ∫_{d_i}^{d_{i+1}} pũ(ũ_n | û⁺_n) dũ_n.    (5)

In consequence, applying (4) to (5), the new time-variant codebook reconstruction levels can be calculated by

u^(i)_n = ∫_{d_i}^{d_{i+1}} ũ_n·pê(ê_n = ũ_n − û⁺_n) dũ_n / ∫_{d_i}^{d_{i+1}} pê(ê_n = ũ_n − û⁺_n) dũ_n.    (6)

The new centroid resulting in a new reconstruction level can be illustrated by comparing the two plots in Figure 2: First, the PDF shapes in the fixed quantization interval [d_i, d_{i+1}] are different. Moreover, the unquantized sample PDF (upper plot) has a larger variance than the (shifted) prediction error PDF. The shifted PDF of the receiver-sided prediction error in (6) can be obtained by

pê(ê_n = ũ_n − û⁺_n) = 1/(√(2π)·σ̂_ê) · exp(−(ũ_n − û⁺_n)² / (2σ̂_ê²)),    (7)

with the mean û⁺_n and the receiver-sided prediction error variance σ̂_ê², which will be derived in Section III. Besides, equation (6) is easily solved numerically with the help of the error function. In addition, as shown in Figure 1, the unquantized samples ũ are quantized by the standard LMQ. In other words, the correct quantization interval [d_i, d_{i+1}] (i.e., the one the original sample ũ_n falls into) is known to the receiver in error-free transmission conditions. As a result, the very same quantization interval should be used for decoding. Summing up the newly proposed receiver process in Figure 1, the received quantization index î_n ∈ I is used as i = î_n in (6). Taking the decision levels d_{î_n} and d_{î_n+1} from the standard LMQ, the received sample û_n = u^(i)_n can be obtained by sequentially applying (3), (7), and (6).

III. RECEIVER-SIDED PREDICTION ERROR

Due to the prediction feedback loop in the receiver, the receiver-sided prediction error variance σ̂_ê² in (4) and (7) differs from the transmitter-sided prediction error variance σ_ẽ² of the AR(1) signal model in (2).
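As noted above, with the Gaussian PDF (7) the two integrals in (6) reduce to the mean of a Gaussian N(û⁺_n, σ̂_ê²) truncated to [d_i, d_{i+1}], which has a closed form via the error function. A minimal sketch (helper names are ours):

```python
import math

def npdf(x):
    """Standard normal PDF."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def ncdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def reconstruction_level(d_lo, d_hi, u_pred, sigma):
    """Eq. (6): centroid of N(u_pred, sigma^2) over [d_lo, d_hi],
    i.e. the mean of the correspondingly truncated Gaussian."""
    a = (d_lo - u_pred) / sigma
    b = (d_hi - u_pred) / sigma
    mass = ncdf(b) - ncdf(a)            # probability of the interval
    return u_pred + sigma * (npdf(a) - npdf(b)) / mass
```

For û⁺_n = 0 this degenerates to the standard LMQ centroid; a positive û⁺_n pulls the reconstruction level of every interval toward the predicted sample, which is exactly the time-variant codebook behavior.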
In this section, we present the derivation of σ̂_ê² in error-free transmission conditions for an AR(1) source process, finally resulting in (13). As mentioned in [3], the predictor coefficients in closed-loop predictive quantization are often chosen as the values in open-loop predictive quantization, because it is difficult to find the optimal coefficients given the quantized past. Additionally, it is assumed that the reconstruction is reasonably good. Similarly, the variance of the receiver-sided prediction error is also difficult to obtain due to the feedback loop. In the following we adopt a simplified prediction scheme without feedback loop to model the receiver-sided prediction error in error-free transmission conditions, as shown in Figure 3.

Fig. 3: A simplified transmission and receiver-sided prediction model without feedback loop (for error-free transmission).

It represents an approximation of our newly proposed decoder scheme from Figure 1, because the estimated sample û_n is simply regarded as the standard LMQ output (û_n = u_n), although our scheme will provide even better sample estimates, as will be shown in Section IV. In consequence, the receiver-sided prediction error can be modeled as

ê_n = ũ_n − û⁺_n = ũ_n − â·û_{n−1} = ũ_n − â·u_{n−1}.    (8)

Considering the Lloyd-Max quantization error e^LMQ_n = u_n − ũ_n, we further have

ê_n = ũ_n − â·(ũ_{n−1} + e^LMQ_{n−1}).    (9)

In order to calculate the variance of the receiver-sided prediction error, first we need the expectation value

E{ê_n} = E{ũ_n} − â·E{ũ_{n−1}} − â·E{e^LMQ_{n−1}} = 0,    (10)

with the property of the LMQ that the mean of the total quantization error E{e^LMQ_n} is zero [3]. Consequently, the variance of the receiver-sided prediction error can be calculated by

σ̂_ê² = E{(ê_n − E{ê_n})²} = E{ê_n²}.    (11)

Applying (9) to (11), we obtain

σ̂_ê² = E{(ũ_n − â·(ũ_{n−1} + e^LMQ_{n−1}))²}
     = E{ũ_n²} + â²·E{ũ_{n−1}²} + â²·E{(e^LMQ_{n−1})²} + 2â²·E{ũ_{n−1}·e^LMQ_{n−1}} − 2â·E{ũ_n·ũ_{n−1}} − 2â·E{ũ_n·e^LMQ_{n−1}},    (12)

finally resulting in (see Appendix)

σ̂_ê² = (1 + â² − 2âρ)·σ_ẽ² / (1 − ρ²) − (â² − 2âρ)·σ_ẽ² / (SNR(M)·(1 − ρ²))
     = ((1 + â² − 2âρ)·SNR(M) − (â² − 2âρ))·σ_ẽ² / (SNR(M)·(1 − ρ²)),    (13)

with SNR(M) = σ_ũ²/σ²_{e^LMQ} being the achievable signal-to-noise ratio of the LMQ for Gaussian sources [12, Table 4.4 on page 135]. (These standard values can also be taken from Table I.) Note that SNR(M) is a linear entity, while the values from [12] are given in the log domain as dB. Assuming the reconstruction is reasonably good, the receiver-sided predictor coefficient can be set to the correlation

coefficient of the Gaussian AR(1) process, â = ρ. Then (13) simplifies to

σ̂_ê² = (1 + ρ² / (SNR(M)·(1 − ρ²)))·σ_ẽ².    (14)

TABLE I: SNR (in dB) results of the standard LMQ and the proposed LMQ decoder for M-bit quantized Gaussian AR(1) samples in error-free transmission, with predictor coefficient â = ρ = 0.9 and optimal predictor coefficient â_opt by a full numerical search.

TABLE II: SNR (in dB) results of the standard LMQ and the proposed LMQ decoder for M = 2 bit quantized Gaussian AR(1) samples having different correlation coefficients ρ in error-free transmission, with predictor coefficient â = ρ and optimal predictor coefficient â_opt by a full numerical search.

IV. EVALUATION AND DISCUSSION

A. Simulation Setup

A number of 10^6 source samples is taken from the Gaussian AR(1) process with zero-mean and unit-variance (σ_ẽ² = 1) innovation. The source samples are then quantized according to M ∈ {1, 2, 3, 4, 5} bit Lloyd-Max quantization codebooks, separately. The bit mapping of the quantization index is processed on the basis of the natural binary code (NBC) [12]. Assuming an AWGN channel with BPSK modulation and varying a given E_b/N_0 ratio between 0 dB and 10 dB, the bits are transmitted over different noisy channel realizations, with E_b denoting the signal energy per bit and N_0 denoting the noise power spectral density. Finally, we use the global signal-to-noise ratio (SNR) (in dB) with respect to the 10^6 source samples to evaluate the performance. In order to identify the optimal receiver-sided predictor coefficient â_opt given the reconstructed past for error-free transmission conditions, a full numerical search over the range ρ − 0.1 ≤ â ≤ ρ in fixed steps is performed beforehand. For each â, the receiver-sided prediction error variance σ̂_ê² is obtained according to (13).
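The closed-form variance (13) and its â = ρ special case (14) are straightforward to evaluate; a small sketch (function names are ours; snr_lin is SNR(M) as a linear value, not in dB):

```python
def pred_error_var(a_hat, rho, snr_lin, sigma_e2=1.0):
    """Receiver-sided prediction error variance, eq. (13)."""
    c1 = 1.0 + a_hat ** 2 - 2.0 * a_hat * rho
    c2 = a_hat ** 2 - 2.0 * a_hat * rho
    return (c1 * snr_lin - c2) * sigma_e2 / (snr_lin * (1.0 - rho ** 2))

def pred_error_var_matched(rho, snr_lin, sigma_e2=1.0):
    """Special case a_hat = rho, eq. (14)."""
    return (1.0 + rho ** 2 / (snr_lin * (1.0 - rho ** 2))) * sigma_e2
```

For â = ρ both functions agree, and for ρ = 0 the variance collapses to σ_ẽ², i.e., the proposed decoder falls back to the standard LMQ behavior.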
The optimal value â_opt is determined by the maximum SNR search result and made known to the receiver.

B. Discussion

The simulation results of the standard LMQ and the proposed LMQ for M-bit quantized Gaussian AR(1) samples in error-free transmission conditions are provided in Table I, with the correlation coefficient ρ = 0.9. For the proposed approach, â = ρ denotes that the receiver-sided predictor coefficient â is chosen to be the same as the correlation coefficient ρ, and the corresponding prediction error variance σ̂²_{â=ρ} is taken from (14); on the other hand, â = â_opt represents a full numerical search for the optimal â_opt, while the corresponding prediction error variance σ̂²_opt is obtained from (13). Clear SNR gains can be observed for the proposed approach with σ̂²_{â=ρ}. The best improvement is obtained for low rate quantization.

Fig. 4: SNR results for M = 1 bit and 2 bit quantized Gaussian AR(1) samples with correlation coefficient ρ = 0.9.

Further SNR gains can be achieved by the proposed approach with a full numerical search for â_opt; however, the increase is practically insignificant. We also performed simulations for M = 2 bit quantized Gaussian AR(1) samples in error-free transmission conditions with various correlation coefficients ρ. As shown in Table II, better performance of the proposed approach is achievable for higher correlations. On the other hand, the same SNR results from the standard and the proposed approach for uncorrelated samples (ρ = 0), due to the standard PDF pê(ê_n = ũ_n − 0) ≡ pũ(ũ_n) acquired in the proposed approach, with â = ρ = 0, σ̂²_{â=ρ} = 1, or â = â_opt = 0, σ̂²_opt = 1. Furthermore, our approach can also advantageously be applied in error-prone transmission conditions. We performed two experiments for M = 1 bit and 2 bit quantized Gaussian AR(1) samples with correlation coefficient ρ = 0.9, respectively.
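The error-free part of the above setup can be reproduced in miniature. The sketch below hard-wires the 1-bit Lloyd-Max codebook scaled to the AR(1) sample variance (with the exact linear SNR(1) = 1/(1 − 2/π) for a Gaussian source), then decodes once with the standard codebook and once with the time-variant centroids of (6), using â = ρ and σ̂_ê² from (14). All names and the reduced sample count are our own choices:

```python
import math
import random

def npdf(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def ncdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def trunc_mean(d_lo, d_hi, mu, sigma):
    """Mean of N(mu, sigma^2) truncated to [d_lo, d_hi] (eq. (6))."""
    a, b = (d_lo - mu) / sigma, (d_hi - mu) / sigma
    mass = max(ncdf(b) - ncdf(a), 1e-300)   # guard against tail underflow
    return mu + sigma * (npdf(a) - npdf(b)) / mass

def simulate(rho=0.9, n=20000, seed=3):
    rng = random.Random(seed)
    sigma_u = 1.0 / math.sqrt(1.0 - rho * rho)      # AR(1) sample std dev
    d = [-math.inf, 0.0, math.inf]                  # 1-bit LMQ decision levels
    cb = [-sigma_u * math.sqrt(2.0 / math.pi),      # 1-bit LMQ codebook
          +sigma_u * math.sqrt(2.0 / math.pi)]
    snr1 = 1.0 / (1.0 - 2.0 / math.pi)              # linear SNR(1) of the LMQ
    sig_e_hat = math.sqrt(1.0 + rho * rho / (snr1 * (1.0 - rho * rho)))  # (14)

    u, prev = [], 0.0
    for _ in range(n):                              # AR(1) source, sigma_e^2 = 1
        prev = rng.gauss(0.0, 1.0) + rho * prev
        u.append(prev)

    std_rec, prop_rec, u_dec = [], [], 0.0
    for un in u:
        i = 0 if un < d[1] else 1                   # standard LMQ encoder
        std_rec.append(cb[i])                       # standard decoder
        u_pred = rho * u_dec                        # receiver-sided prediction (3)
        u_dec = trunc_mean(d[i], d[i + 1], u_pred, sig_e_hat)  # (7) in (6)
        prop_rec.append(u_dec)

    def snr_db(rec):
        sig = sum(x * x for x in u)
        err = sum((x - y) ** 2 for x, y in zip(u, rec))
        return 10.0 * math.log10(sig / err)

    return snr_db(std_rec), snr_db(prop_rec)
```

In this error-free sketch the standard 1-bit LMQ lands near its theoretical 4.4 dB, and the proposed decoder should show a gain on the order of 1 dB at ρ = 0.9, in line with the magnitude reported in Table I.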
In each experiment, the proposed approach is run either with a full numerical search for the optimal â_opt in error-free transmission (curve with asterisks) or with â = ρ = 0.9 (curve with circles). As can be seen in Figure 4, the SNR can be improved by up to 1.1 dB using our proposed approach compared to the standard LMQ decoder. Comparing the two curves with â = â_opt and â = ρ, it is found again that choosing the predictor coefficient to be the source correlation coefficient achieves similar performance as performing a full numerical search for â_opt. Therefore we can state that adopting the predictor coefficient from the source process correlation coefficient (â = ρ) already provides most of the gains. Additionally, comparing the simulation results in Table 2 from [23] and the results in Table II, virtually the same performance is achieved for M = 2 bit by the two-dimensional full search as in [23], which implies that the approximation in Section III is valid. For M = 1 bit, however, the approximation leads to an SNR difference of about 0.2 dB for ρ = 0.9.

V. CONCLUSIONS

In this paper, we propose a new approach utilizing the source correlation to improve scalar Lloyd-Max quantization (LMQ) performance. The new approach assumes a given non-predictive standard LMQ encoder, while an additional receiver-sided predictor operating on previously reconstructed samples is integrated into a feedback loop. The probability density function (PDF) of the source samples given the predictor output turns out to be the prediction error PDF shifted by the predicted sample. A time-variant reconstruction level can be obtained referring to the decision levels of the standard LMQ and its centroid condition. Moreover, we give an analytical derivation of the variance of the receiver-sided prediction error for Gaussian AR(1) processes. Performing a full numerical search for the optimal receiver-sided predictor coefficient or alternatively adopting the correlation coefficient from the AR(1) source process fortunately leads to similar performance. Significant SNR improvements of about 1.0 dB are observed in simulations over an AWGN channel, especially for highly correlated source processes with low rate quantization, both in error-free and error-prone transmission conditions.
APPENDIX

In the following, we show how to calculate each term in the last line of (12). The LMQ property that the quantizer output is uncorrelated with the quantization error, E{u_{n−1}·e^LMQ_{n−1}} = 0 [3], results in

E{ũ_{n−1}·e^LMQ_{n−1}} = E{(u_{n−1} − e^LMQ_{n−1})·e^LMQ_{n−1}} = −σ²_{e^LMQ},    (15)

with σ²_{e^LMQ} being the variance of the Lloyd-Max quantization error. From the definition of the autocorrelation coefficient ρ we know that

E{ũ_n·ũ_{n−1}} = φũũ(1) = ρ·φũũ(0) = ρ·σ_ũ²,    (16)

with σ_ũ² being the variance of the unquantized samples ũ. Using (15), the last term in (12) can be obtained by

E{ũ_n·e^LMQ_{n−1}} = E{(ρ·ũ_{n−1} + ẽ_n)·e^LMQ_{n−1}} = ρ·E{ũ_{n−1}·e^LMQ_{n−1}} + E{ẽ_n·e^LMQ_{n−1}} = −ρ·σ²_{e^LMQ},    (17)

with E{ẽ_n·e^LMQ_{n−1}} = E{ẽ_n}·E{e^LMQ_{n−1}} = 0, because the innovation ẽ_n is independent of the previous Lloyd-Max quantization error e^LMQ_{n−1}. Combining (15), (16), and (17), equation (12) becomes

σ̂_ê² = σ_ũ² + â²·σ_ũ² + â²·σ²_{e^LMQ} − 2â²·σ²_{e^LMQ} − 2âρ·σ_ũ² + 2âρ·σ²_{e^LMQ}
     = (1 + â² − 2âρ)·σ_ũ² − (â² − 2âρ)·σ²_{e^LMQ}.    (18)

Due to the known property of a Gaussian AR(1) process, σ_ẽ² = (1 − ρ²)·σ_ũ², we obtain σ_ũ² = σ_ẽ²/(1 − ρ²). Also, we have the signal-to-noise ratio as a function of the rate M [bits/sample] given as SNR(M) = σ_ũ²/σ²_{e^LMQ}, therefore σ²_{e^LMQ} = σ_ẽ²/(SNR(M)·(1 − ρ²)). Note that SNR(M) varies for different quantizer bit rates M. Replacing σ_ũ² and σ²_{e^LMQ} in (18), we obtain (13).

REFERENCES

[1] S. Lloyd, Least Squares Quantization in PCM, IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129-137, Mar. 1982. [2] J. Max, Quantizing for Minimum Distortion, IRE Transactions on Information Theory, vol. 6, no. 1, pp. 7-12, Mar. 1960. [3] A. Gersho and R. Gray, Vector Quantization and Signal Compression. Boston, Dordrecht, London: Kluwer Academic Publishers, 1992. [4] R. Gray and D. Neuhoff, Quantization, IEEE Transactions on Information Theory, vol. 44, no. 6, pp. 2325-2383, Oct. 1998. [5] Y. Linde, A. Buzo, and R. Gray, An Algorithm for Vector Quantizer Design, IEEE Transactions on Communications, vol. 28, no. 1, pp. 84-95, Jan. 1980. [6] B. Oliver, J. Pierce, and C.
Shannon, The Philosophy of PCM, Proceedings IRE, vol. 36, no. 11, pp , Nov [7] D. Hui and D. Neuhoff, Asymptotic Analysis of Optimal Fixed- Rate Uniform Scalar Quantization, IEEE Transactions on Information Theory, vol. 47, no. 3, pp , Mar [8] S. Han, F. Pflug, and T. Fingscheidt, Improved AMR Wideband Error Concealment for Mobile Communications, in Proc. of EUSIPCO, Marrakech, Morocco, Sep. 2013, pp [9] T. Lookabaugh, E. Riskin, P. Chou, and R. Gray, Variable Rate Vector Quantization for Speech, Image, and Video Compression, IEEE Transactions on Communications, vol. 41, no. 1, pp , Jan [10] S. Han and T. Fingscheidt, Variable-Length Versus Fixed-Length Coding: On Tradeoffs for Soft-Decision Decoding, in Proc. of ICASSP, Florence, Italy, May. 2014, pp [11] W. Kleijn and R. Hagen, On Memoryless Quantization in Speech Coding, IEEE Signal Processing Letters, vol. 3, no. 8, pp , Aug [12] N. Jayant and P. Noll, Digital Coding of Waveforms. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., [13] N. Jayant, Digital Coding of Speech Waveforms: PCM, DPCM, and DM Quantizers, Proc. IEEE, vol. 62, no. 5, pp , May [14] R. A. McDonald, Signal-to-Noise and Idle Channel Performance of Differential Pulse Code Modulation Systems Particular Applications to Voice Signals, Bell System Technical Journal, vol. 45, no. 7, pp , Sep [15] P. Elias, Predictive Coding I, IRE Transactions on Information Theory, vol. 1, no. 1, pp , Mar [16] D. Arnstein, Quantization Error in Predictive Coders, IEEE Transactions on Communications, vol. 23, no. 4, pp , Apr [17] J. Huang and P. Schultheiss, Block Quantization of Correlated Gaussian Random Variables, IEEE Transactions on Communications Systems, vol. 11, no. 3, pp , Sep [18] P. Cummiskey, N. S. Jayant, and J. L. Flanagan, Adaptive Quantization in Differential PCM Coding of Speech, Bell System Technical Journal, vol. 52, no. 
7, pp , Sep [19] ITU-T Recommendation G.726, 40, 32, 24, 16 kbit/s Adaptive Differential Pulse Code Modulation (ADPCM), ITU-T, Aug [20] ITU-T Recommendation G.722, 7 khz Audio-Coding Within 64 kbit/s, ITU, Nov

6 [21] P.-C. Chang and R. Gray, Gradient Algorithms for Designing Predictive Vector Quantizers, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 34, no. 4, pp , Aug [22] S. Han and T. Fingscheidt, Improving Scalar Quantization for Correlated Processes Using Adaptive Codebooks Only At the Receiver, in Proc. of EUSIPCO, Lisbon, Portugal, Sep. 2014, pp [23], Scalar Quantization With Optimized Receiver-Sided Adaptive Codebook Reconstruction Levels Controlled by a Predictor, in Proc. of 11th ITG Conference on Speech Communication, Erlangen, Germany, Sep. 2014, pp [24] T. Fingscheidt and P. Vary, Softbit Speech Decoding: A New Approach to Error Concealment, IEEE Transactions on Speech and Audio Processing, vol. 9, no. 3, pp , Mar


Design of a CELP coder and analysis of various quantization techniques EECS 65 Project Report Design of a CELP coder and analysis of various quantization techniques Prof. David L. Neuhoff By: Awais M. Kamboh Krispian C. Lawrence Aditya M. Thomas Philip I. Tsai Winter 005

More information

Scalar and Vector Quantization. National Chiao Tung University Chun-Jen Tsai 11/06/2014

Scalar and Vector Quantization. National Chiao Tung University Chun-Jen Tsai 11/06/2014 Scalar and Vector Quantization National Chiao Tung University Chun-Jen Tsai 11/06/014 Basic Concept of Quantization Quantization is the process of representing a large, possibly infinite, set of values

More information

EE368B Image and Video Compression

EE368B Image and Video Compression EE368B Image and Video Compression Homework Set #2 due Friday, October 20, 2000, 9 a.m. Introduction The Lloyd-Max quantizer is a scalar quantizer which can be seen as a special case of a vector quantizer

More information

CS578- Speech Signal Processing

CS578- Speech Signal Processing CS578- Speech Signal Processing Lecture 7: Speech Coding Yannis Stylianou University of Crete, Computer Science Dept., Multimedia Informatics Lab yannis@csd.uoc.gr Univ. of Crete Outline 1 Introduction

More information

Rate-Constrained Multihypothesis Prediction for Motion-Compensated Video Compression

Rate-Constrained Multihypothesis Prediction for Motion-Compensated Video Compression IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL 12, NO 11, NOVEMBER 2002 957 Rate-Constrained Multihypothesis Prediction for Motion-Compensated Video Compression Markus Flierl, Student

More information

Compression methods: the 1 st generation

Compression methods: the 1 st generation Compression methods: the 1 st generation 1998-2017 Josef Pelikán CGG MFF UK Praha pepca@cgg.mff.cuni.cz http://cgg.mff.cuni.cz/~pepca/ Still1g 2017 Josef Pelikán, http://cgg.mff.cuni.cz/~pepca 1 / 32 Basic

More information

HARMONIC VECTOR QUANTIZATION

HARMONIC VECTOR QUANTIZATION HARMONIC VECTOR QUANTIZATION Volodya Grancharov, Sigurdur Sverrisson, Erik Norvell, Tomas Toftgård, Jonas Svedberg, and Harald Pobloth SMN, Ericsson Research, Ericsson AB 64 8, Stockholm, Sweden ABSTRACT

More information

Soft-Decision Demodulation Design for COVQ over White, Colored, and ISI Gaussian Channels

Soft-Decision Demodulation Design for COVQ over White, Colored, and ISI Gaussian Channels IEEE TRANSACTIONS ON COMMUNICATIONS, VOL 48, NO 9, SEPTEMBER 2000 1499 Soft-Decision Demodulation Design for COVQ over White, Colored, and ISI Gaussian Channels Nam Phamdo, Senior Member, IEEE, and Fady

More information

Fractal Dimension and Vector Quantization

Fractal Dimension and Vector Quantization Fractal Dimension and Vector Quantization [Extended Abstract] Krishna Kumaraswamy Center for Automated Learning and Discovery, Carnegie Mellon University skkumar@cs.cmu.edu Vasileios Megalooikonomou Department

More information

Chapter 10 Applications in Communications

Chapter 10 Applications in Communications Chapter 10 Applications in Communications School of Information Science and Engineering, SDU. 1/ 47 Introduction Some methods for digitizing analog waveforms: Pulse-code modulation (PCM) Differential PCM

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Lesson 7 Delta Modulation and DPCM Instructional Objectives At the end of this lesson, the students should be able to: 1. Describe a lossy predictive coding scheme.

More information

Gaussian Source Coding With Spherical Codes

Gaussian Source Coding With Spherical Codes 2980 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 48, NO 11, NOVEMBER 2002 Gaussian Source Coding With Spherical Codes Jon Hamkins, Member, IEEE, and Kenneth Zeger, Fellow, IEEE Abstract A fixed-rate shape

More information

EE-597 Notes Quantization

EE-597 Notes Quantization EE-597 Notes Quantization Phil Schniter June, 4 Quantization Given a continuous-time and continuous-amplitude signal (t, processing and storage by modern digital hardware requires discretization in both

More information

THE dictionary (Random House) definition of quantization

THE dictionary (Random House) definition of quantization IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 44, NO. 6, OCTOBER 1998 2325 Quantization Robert M. Gray, Fellow, IEEE, and David L. Neuhoff, Fellow, IEEE (Invited Paper) Abstract The history of the theory

More information

Empirical Lower Bound on the Bitrate for the Transparent Memoryless Coding of Wideband LPC Parameters

Empirical Lower Bound on the Bitrate for the Transparent Memoryless Coding of Wideband LPC Parameters Empirical Lower Bound on the Bitrate for the Transparent Memoryless Coding of Wideband LPC Parameters Author So, Stephen, Paliwal, Kuldip Published 2006 Journal Title IEEE Signal Processing Letters DOI

More information

EE 121: Introduction to Digital Communication Systems. 1. Consider the following discrete-time communication system. There are two equallly likely

EE 121: Introduction to Digital Communication Systems. 1. Consider the following discrete-time communication system. There are two equallly likely EE 11: Introduction to Digital Communication Systems Midterm Solutions 1. Consider the following discrete-time communication system. There are two equallly likely messages to be transmitted, and they are

More information

ONE approach to improving the performance of a quantizer

ONE approach to improving the performance of a quantizer 640 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY 2006 Quantizers With Unim Decoders Channel-Optimized Encoders Benjamin Farber Kenneth Zeger, Fellow, IEEE Abstract Scalar quantizers

More information

1. Probability density function for speech samples. Gamma. Laplacian. 2. Coding paradigms. =(2X max /2 B ) for a B-bit quantizer Δ Δ Δ Δ Δ

1. Probability density function for speech samples. Gamma. Laplacian. 2. Coding paradigms. =(2X max /2 B ) for a B-bit quantizer Δ Δ Δ Δ Δ Digital Speech Processing Lecture 16 Speech Coding Methods Based on Speech Waveform Representations and Speech Models Adaptive and Differential Coding 1 Speech Waveform Coding-Summary of Part 1 1. Probability

More information

Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback

Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback 2038 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 9, SEPTEMBER 2004 Capacity of Memoryless Channels and Block-Fading Channels With Designable Cardinality-Constrained Channel State Feedback Vincent

More information

Joint Source-Channel Coding Optimized On Endto-End Distortion for Multimedia Source

Joint Source-Channel Coding Optimized On Endto-End Distortion for Multimedia Source Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 8-2016 Joint Source-Channel Coding Optimized On Endto-End Distortion for Multimedia Source Ebrahim Jarvis ej7414@rit.edu

More information

On the DPCM Compression of Gaussian Auto-Regressive. Sequences

On the DPCM Compression of Gaussian Auto-Regressive. Sequences On the DPCM Compression of Gaussian Auto-Regressive Sequences Onur G. Guleryuz, Michael T. Orchard Department of Electrical Engineering Polytechnic University, Brooklyn, NY 1101 Department of Electrical

More information

Basic Principles of Video Coding

Basic Principles of Video Coding Basic Principles of Video Coding Introduction Categories of Video Coding Schemes Information Theory Overview of Video Coding Techniques Predictive coding Transform coding Quantization Entropy coding Motion

More information

LOW COMPLEXITY WIDEBAND LSF QUANTIZATION USING GMM OF UNCORRELATED GAUSSIAN MIXTURES

LOW COMPLEXITY WIDEBAND LSF QUANTIZATION USING GMM OF UNCORRELATED GAUSSIAN MIXTURES LOW COMPLEXITY WIDEBAND LSF QUANTIZATION USING GMM OF UNCORRELATED GAUSSIAN MIXTURES Saikat Chatterjee and T.V. Sreenivas Department of Electrical Communication Engineering Indian Institute of Science,

More information

SCALABLE AUDIO CODING USING WATERMARKING

SCALABLE AUDIO CODING USING WATERMARKING SCALABLE AUDIO CODING USING WATERMARKING Mahmood Movassagh Peter Kabal Department of Electrical and Computer Engineering McGill University, Montreal, Canada Email: {mahmood.movassagh@mail.mcgill.ca, peter.kabal@mcgill.ca}

More information

Chapter 9 Fundamental Limits in Information Theory

Chapter 9 Fundamental Limits in Information Theory Chapter 9 Fundamental Limits in Information Theory Information Theory is the fundamental theory behind information manipulation, including data compression and data transmission. 9.1 Introduction o For

More information

Encoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels

Encoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels Encoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels LEI BAO, MIKAEL SKOGLUND AND KARL HENRIK JOHANSSON IR-EE- 26: Stockholm 26 Signal Processing School of Electrical Engineering

More information

Rate-Distortion Based Temporal Filtering for. Video Compression. Beckman Institute, 405 N. Mathews Ave., Urbana, IL 61801

Rate-Distortion Based Temporal Filtering for. Video Compression. Beckman Institute, 405 N. Mathews Ave., Urbana, IL 61801 Rate-Distortion Based Temporal Filtering for Video Compression Onur G. Guleryuz?, Michael T. Orchard y? University of Illinois at Urbana-Champaign Beckman Institute, 45 N. Mathews Ave., Urbana, IL 68 y

More information

Source Coding: Part I of Fundamentals of Source and Video Coding

Source Coding: Part I of Fundamentals of Source and Video Coding Foundations and Trends R in sample Vol. 1, No 1 (2011) 1 217 c 2011 Thomas Wiegand and Heiko Schwarz DOI: xxxxxx Source Coding: Part I of Fundamentals of Source and Video Coding Thomas Wiegand 1 and Heiko

More information

One Lesson of Information Theory

One Lesson of Information Theory Institut für One Lesson of Information Theory Prof. Dr.-Ing. Volker Kühn Institute of Communications Engineering University of Rostock, Germany Email: volker.kuehn@uni-rostock.de http://www.int.uni-rostock.de/

More information

PREDICTIVE quantization is one of the most widely-used

PREDICTIVE quantization is one of the most widely-used 618 IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 1, NO. 4, DECEMBER 2007 Robust Predictive Quantization: Analysis and Design Via Convex Optimization Alyson K. Fletcher, Member, IEEE, Sundeep

More information

C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University

C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University Quantization C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University http://www.csie.nctu.edu.tw/~cmliu/courses/compression/ Office: EC538 (03)5731877 cmliu@cs.nctu.edu.tw

More information

ON NOISE PROPAGATION IN CLOSED-LOOP LINEAR PREDICTIVE CODING. Hauke Krüger, Bernd Geiser, and Peter Vary

ON NOISE PROPAGATION IN CLOSED-LOOP LINEAR PREDICTIVE CODING. Hauke Krüger, Bernd Geiser, and Peter Vary 04 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP) ON NOISE PROPAGATION IN CLOSEDLOOP LINEAR PREDICTIVE CODING Hauke Krüger, Bernd Geiser, and Peter Vary Institute of Communication

More information

Principles of Communications

Principles of Communications Principles of Communications Weiyao Lin, PhD Shanghai Jiao Tong University Chapter 4: Analog-to-Digital Conversion Textbook: 7.1 7.4 2010/2011 Meixia Tao @ SJTU 1 Outline Analog signal Sampling Quantization

More information

STATISTICS FOR EFFICIENT LINEAR AND NON-LINEAR PICTURE ENCODING

STATISTICS FOR EFFICIENT LINEAR AND NON-LINEAR PICTURE ENCODING STATISTICS FOR EFFICIENT LINEAR AND NON-LINEAR PICTURE ENCODING Item Type text; Proceedings Authors Kummerow, Thomas Publisher International Foundation for Telemetering Journal International Telemetering

More information

EE5356 Digital Image Processing

EE5356 Digital Image Processing EE5356 Digital Image Processing INSTRUCTOR: Dr KR Rao Spring 007, Final Thursday, 10 April 007 11:00 AM 1:00 PM ( hours) (Room 111 NH) INSTRUCTIONS: 1 Closed books and closed notes All problems carry weights

More information

Encoder Decoder Design for Feedback Control over the Binary Symmetric Channel

Encoder Decoder Design for Feedback Control over the Binary Symmetric Channel Encoder Decoder Design for Feedback Control over the Binary Symmetric Channel Lei Bao, Mikael Skoglund and Karl Henrik Johansson School of Electrical Engineering, Royal Institute of Technology, Stockholm,

More information

Estimation-Theoretic Delayed Decoding of Predictively Encoded Video Sequences

Estimation-Theoretic Delayed Decoding of Predictively Encoded Video Sequences Estimation-Theoretic Delayed Decoding of Predictively Encoded Video Sequences Jingning Han, Vinay Melkote, and Kenneth Rose Department of Electrical and Computer Engineering University of California, Santa

More information

BASICS OF COMPRESSION THEORY

BASICS OF COMPRESSION THEORY BASICS OF COMPRESSION THEORY Why Compression? Task: storage and transport of multimedia information. E.g.: non-interlaced HDTV: 0x0x0x = Mb/s!! Solutions: Develop technologies for higher bandwidth Find

More information

A POSTERIORI SPEECH PRESENCE PROBABILITY ESTIMATION BASED ON AVERAGED OBSERVATIONS AND A SUPER-GAUSSIAN SPEECH MODEL

A POSTERIORI SPEECH PRESENCE PROBABILITY ESTIMATION BASED ON AVERAGED OBSERVATIONS AND A SUPER-GAUSSIAN SPEECH MODEL A POSTERIORI SPEECH PRESENCE PROBABILITY ESTIMATION BASED ON AVERAGED OBSERVATIONS AND A SUPER-GAUSSIAN SPEECH MODEL Balázs Fodor Institute for Communications Technology Technische Universität Braunschweig

More information

Achieving the Gaussian Rate-Distortion Function by Prediction

Achieving the Gaussian Rate-Distortion Function by Prediction Achieving the Gaussian Rate-Distortion Function by Prediction Ram Zamir, Yuval Kochman and Uri Erez Dept. Electrical Engineering-Systems, Tel Aviv University Abstract The water-filling solution for the

More information

Vector Quantization. Institut Mines-Telecom. Marco Cagnazzo, MN910 Advanced Compression

Vector Quantization. Institut Mines-Telecom. Marco Cagnazzo, MN910 Advanced Compression Institut Mines-Telecom Vector Quantization Marco Cagnazzo, cagnazzo@telecom-paristech.fr MN910 Advanced Compression 2/66 19.01.18 Institut Mines-Telecom Vector Quantization Outline Gain-shape VQ 3/66 19.01.18

More information

Optimization of Variable-length Code for Data. Compression of memoryless Laplacian source

Optimization of Variable-length Code for Data. Compression of memoryless Laplacian source Optimization of Variable-length Code for Data Compression of memoryless Laplacian source Marko D. Petković, Zoran H. Perić, Aleksandar V. Mosić Abstract In this paper we present the efficient technique

More information

Fractal Dimension and Vector Quantization

Fractal Dimension and Vector Quantization Fractal Dimension and Vector Quantization Krishna Kumaraswamy a, Vasileios Megalooikonomou b,, Christos Faloutsos a a School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 523 b Department

More information

A Video Codec Incorporating Block-Based Multi-Hypothesis Motion-Compensated Prediction

A Video Codec Incorporating Block-Based Multi-Hypothesis Motion-Compensated Prediction SPIE Conference on Visual Communications and Image Processing, Perth, Australia, June 2000 1 A Video Codec Incorporating Block-Based Multi-Hypothesis Motion-Compensated Prediction Markus Flierl, Thomas

More information

Transformation Techniques for Real Time High Speed Implementation of Nonlinear Algorithms

Transformation Techniques for Real Time High Speed Implementation of Nonlinear Algorithms International Journal of Electronics and Communication Engineering. ISSN 0974-66 Volume 4, Number (0), pp.83-94 International Research Publication House http://www.irphouse.com Transformation Techniques

More information

Noise-Shaped Predictive Coding for Multiple Descriptions of a Colored Gaussian Source

Noise-Shaped Predictive Coding for Multiple Descriptions of a Colored Gaussian Source Noise-Shaped Predictive Coding for Multiple Descriptions of a Colored Gaussian Source Yuval Kochman, Jan Østergaard, and Ram Zamir Abstract It was recently shown that the symmetric multiple-description

More information

MANY digital speech communication applications, e.g.,

MANY digital speech communication applications, e.g., 406 IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 15, NO. 2, FEBRUARY 2007 An MMSE Estimator for Speech Enhancement Under a Combined Stochastic Deterministic Speech Model Richard C.

More information

CHANNEL FEEDBACK QUANTIZATION METHODS FOR MISO AND MIMO SYSTEMS

CHANNEL FEEDBACK QUANTIZATION METHODS FOR MISO AND MIMO SYSTEMS CHANNEL FEEDBACK QUANTIZATION METHODS FOR MISO AND MIMO SYSTEMS June Chul Roh and Bhaskar D Rao Department of Electrical and Computer Engineering University of California, San Diego La Jolla, CA 9293 47,

More information

Predictive Coding. Prediction Prediction in Images

Predictive Coding. Prediction Prediction in Images Prediction Prediction in Images Predictive Coding Principle of Differential Pulse Code Modulation (DPCM) DPCM and entropy-constrained scalar quantization DPCM and transmission errors Adaptive intra-interframe

More information

Predictive Coding. Prediction

Predictive Coding. Prediction Predictive Coding Prediction Prediction in Images Principle of Differential Pulse Code Modulation (DPCM) DPCM and entropy-constrained scalar quantization DPCM and transmission errors Adaptive intra-interframe

More information

Encoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels

Encoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels Encoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels Lei Bao, Mikael Skoglund and Karl Henrik Johansson Department of Signals, Sensors and Systems, Royal Institute of Technology,

More information

798 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 44, NO. 10, OCTOBER 1997

798 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 44, NO. 10, OCTOBER 1997 798 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL 44, NO 10, OCTOBER 1997 Stochastic Analysis of the Modulator Differential Pulse Code Modulator Rajesh Sharma,

More information

MODERN video coding standards, such as H.263, H.264,

MODERN video coding standards, such as H.263, H.264, 146 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 16, NO. 1, JANUARY 2006 Analysis of Multihypothesis Motion Compensated Prediction (MHMCP) for Robust Visual Communication Wei-Ying

More information

Digital communication system. Shannon s separation principle

Digital communication system. Shannon s separation principle Digital communication system Representation of the source signal by a stream of (binary) symbols Adaptation to the properties of the transmission channel information source source coder channel coder modulation

More information

SUCCESSIVE refinement of information, or scalable

SUCCESSIVE refinement of information, or scalable IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 49, NO. 8, AUGUST 2003 1983 Additive Successive Refinement Ertem Tuncel, Student Member, IEEE, Kenneth Rose, Fellow, IEEE Abstract Rate-distortion bounds for

More information

Gaussian source Assumptions d = (x-y) 2, given D, find lower bound of I(X;Y)

Gaussian source Assumptions d = (x-y) 2, given D, find lower bound of I(X;Y) Gaussian source Assumptions d = (x-y) 2, given D, find lower bound of I(X;Y) E{(X-Y) 2 } D

More information

Distributed Detection and Estimation in Wireless Sensor Networks: Resource Allocation, Fusion Rules, and Network Security

Distributed Detection and Estimation in Wireless Sensor Networks: Resource Allocation, Fusion Rules, and Network Security Distributed Detection and Estimation in Wireless Sensor Networks: Resource Allocation, Fusion Rules, and Network Security Edmond Nurellari The University of Leeds, UK School of Electronic and Electrical

More information

Field Trial Evaluation of Compression Algorithms for Distributed Antenna Systems

Field Trial Evaluation of Compression Algorithms for Distributed Antenna Systems Field Trial Evaluation of Compression Algorithms for Distributed Antenna Systems Michael Grieger, Peter Helbing, Gerhard Fettweis Technische Universität Dresden, Vodafone Chair Mobile Communications Systems,

More information

Afundamental component in the design and analysis of

Afundamental component in the design and analysis of IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 2, MARCH 1999 533 High-Resolution Source Coding for Non-Difference Distortion Measures: The Rate-Distortion Function Tamás Linder, Member, IEEE, Ram

More information

Multimedia Communications. Scalar Quantization

Multimedia Communications. Scalar Quantization Multimedia Communications Scalar Quantization Scalar Quantization In many lossy compression applications we want to represent source outputs using a small number of code words. Process of representing

More information

On the Computation of EXIT Characteristics for Symbol-Based Iterative Decoding

On the Computation of EXIT Characteristics for Symbol-Based Iterative Decoding On the Computation of EXIT Characteristics for Symbol-Based Iterative Decoding Jörg Kliewer, Soon Xin Ng 2, and Lajos Hanzo 2 University of Notre Dame, Department of Electrical Engineering, Notre Dame,

More information

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes

Shannon meets Wiener II: On MMSE estimation in successive decoding schemes Shannon meets Wiener II: On MMSE estimation in successive decoding schemes G. David Forney, Jr. MIT Cambridge, MA 0239 USA forneyd@comcast.net Abstract We continue to discuss why MMSE estimation arises

More information

The Secrets of Quantization. Nimrod Peleg Update: Sept. 2009

The Secrets of Quantization. Nimrod Peleg Update: Sept. 2009 The Secrets of Quantization Nimrod Peleg Update: Sept. 2009 What is Quantization Representation of a large set of elements with a much smaller set is called quantization. The number of elements in the

More information

Turbo Compression. Andrej Rikovsky, Advisor: Pavol Hanus

Turbo Compression. Andrej Rikovsky, Advisor: Pavol Hanus Turbo Compression Andrej Rikovsky, Advisor: Pavol Hanus Abstract Turbo codes which performs very close to channel capacity in channel coding can be also used to obtain very efficient source coding schemes.

More information

Example: for source

Example: for source Nonuniform scalar quantizer References: Sayood Chap. 9, Gersho and Gray, Chap.'s 5 and 6. The basic idea: For a nonuniform source density, put smaller cells and levels where the density is larger, thereby

More information

Proc. of NCC 2010, Chennai, India

Proc. of NCC 2010, Chennai, India Proc. of NCC 2010, Chennai, India Trajectory and surface modeling of LSF for low rate speech coding M. Deepak and Preeti Rao Department of Electrical Engineering Indian Institute of Technology, Bombay

More information

Speech Coding. Speech Processing. Tom Bäckström. October Aalto University

Speech Coding. Speech Processing. Tom Bäckström. October Aalto University Speech Coding Speech Processing Tom Bäckström Aalto University October 2015 Introduction Speech coding refers to the digital compression of speech signals for telecommunication (and storage) applications.

More information

6 Quantization of Discrete Time Signals

6 Quantization of Discrete Time Signals Ramachandran, R.P. Quantization of Discrete Time Signals Digital Signal Processing Handboo Ed. Vijay K. Madisetti and Douglas B. Williams Boca Raton: CRC Press LLC, 1999 c 1999byCRCPressLLC 6 Quantization

More information

4 An Introduction to Channel Coding and Decoding over BSC

4 An Introduction to Channel Coding and Decoding over BSC 4 An Introduction to Channel Coding and Decoding over BSC 4.1. Recall that channel coding introduces, in a controlled manner, some redundancy in the (binary information sequence that can be used at the

More information

7.1 Sampling and Reconstruction

7.1 Sampling and Reconstruction Haberlesme Sistemlerine Giris (ELE 361) 6 Agustos 2017 TOBB Ekonomi ve Teknoloji Universitesi, Guz 2017-18 Dr. A. Melda Yuksel Turgut & Tolga Girici Lecture Notes Chapter 7 Analog to Digital Conversion

More information

Capacity-achieving Feedback Scheme for Flat Fading Channels with Channel State Information

Capacity-achieving Feedback Scheme for Flat Fading Channels with Channel State Information Capacity-achieving Feedback Scheme for Flat Fading Channels with Channel State Information Jialing Liu liujl@iastate.edu Sekhar Tatikonda sekhar.tatikonda@yale.edu Nicola Elia nelia@iastate.edu Dept. of

More information

The Choice of MPEG-4 AAC encoding parameters as a direct function of the perceptual entropy of the audio signal

The Choice of MPEG-4 AAC encoding parameters as a direct function of the perceptual entropy of the audio signal The Choice of MPEG-4 AAC encoding parameters as a direct function of the perceptual entropy of the audio signal Claus Bauer, Mark Vinton Abstract This paper proposes a new procedure of lowcomplexity to

More information

Introduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p.

Introduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p. Preface p. xvii Introduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p. 6 Summary p. 10 Projects and Problems

More information

Causal transmission of colored source frames over a packet erasure channel

Causal transmission of colored source frames over a packet erasure channel Causal transmission of colored source frames over a packet erasure channel Ying-zong Huang, Yuval Kochman& Gregory W. Wornell Dept. of Electrical Engineering and Computer Science Massachusetts Institute

More information

Lecture 18: Gaussian Channel

Lecture 18: Gaussian Channel Lecture 18: Gaussian Channel Gaussian channel Gaussian channel capacity Dr. Yao Xie, ECE587, Information Theory, Duke University Mona Lisa in AWGN Mona Lisa Noisy Mona Lisa 100 100 200 200 300 300 400

More information

Fast Length-Constrained MAP Decoding of Variable Length Coded Markov Sequences over Noisy Channels

Fast Length-Constrained MAP Decoding of Variable Length Coded Markov Sequences over Noisy Channels Fast Length-Constrained MAP Decoding of Variable Length Coded Markov Sequences over Noisy Channels Zhe Wang, Xiaolin Wu and Sorina Dumitrescu Department of Electrical and Computer Engineering McMaster

More information

Joint Optimum Bitwise Decomposition of any. Memoryless Source to be Sent over a BSC. Ecole Nationale Superieure des Telecommunications URA CNRS 820

Joint Optimum Bitwise Decomposition of any. Memoryless Source to be Sent over a BSC. Ecole Nationale Superieure des Telecommunications URA CNRS 820 Joint Optimum Bitwise Decomposition of any Memoryless Source to be Sent over a BSC Seyed Bahram Zahir Azami, Pierre Duhamel 2 and Olivier Rioul 3 cole Nationale Superieure des Telecommunications URA CNRS

More information