CHAPTER I-5: THEORETICAL STUDY OF THE ROBUSTNESS OF THE OPTIMUM UNIFORM QUANTIZER



Presentation

After developing and validating a bit-allocation method based on a partition of the DCT into circular bands, we sought to improve the optimization of the quantizer on which this allocation relies. Until then, our experiments had assumed that the circular bands followed a Laplacian distribution. We wanted to develop an approach valid for any generalized Gaussian distribution. We also sought to design a quantization scheme that is fixed for a class of images rather than adapted to each image. To do so, one must know the impact of a quantizer ill-suited to the signal distribution, so as to best choose the generalized Gaussian distribution parameters to apply to all images. A study of the optimum uniform quantizer was therefore necessary, together with an exploration of the consequences of a non-adapted quantizer. This chapter presents our theoretical developments on the optimization and robustness of the scalar uniform quantizer. This work is the subject of the article [BERE-97], to be submitted to the journal Signal Processing, whose text is reproduced below.

Robustness of Optimum Uniform Quantizers to a Mismatched Statistical Model

1. Introduction

This paper gives a theoretical study of quantizer mismatch with scalar uniform quantizers. The work was carried out in view of practical applications to transform image coding; the results are, however, applicable to the quantization of any waveform by a scalar uniform quantizer. The coding or compression of a digital signal facilitates its transmission and archiving. Compression is often the only viable solution for transmitting large images over limited-bandwidth channels, or for long-term storage of large amounts of data.
A typical lossy compression system includes a signal transformation, commonly by discrete cosine transform or by filter banks, in order to decorrelate the signal and to compact its energy into a small number of coefficients. In such systems, called transform coding, the signal transformation is followed by quantization, a conversion of the transformed signal into a small number of levels. The quantization is non-invertible and yields a lossy compression. The last operation of a transform coding system is an entropy coding that reduces the remaining redundancy of the quantized transform coefficients. Quantization is the key operation of compression schemes because it must both preserve the features that are relevant to the end-user after signal reconstruction, and reduce the data rate (i.e. the number of bits per signal sample). There are two main categories of quantizers: the scalar quantizer (SQ) and the vector quantizer (VQ). The SQ quantizes individual samples by mapping them into a limited set of values. In contrast, the VQ quantizes blocks of samples by mapping them into a limited set of blocks (called codewords). Vector quantization is an extension of scalar quantization to spaces of dimension higher than one. In his fundamental work on rate-distortion theory, Shannon proved

that a VQ can always achieve better coding performance than a SQ [SHAN-59]. In practice, VQ is complex to implement. It requires a training phase in order to determine the dictionary of output codewords based on a number of test images. The coding phase consists of matching the encountered waveform blocks with the closest codeword. Both operations are complex; only the decoding phase is very simple. Because of VQ's complexity and long coding times, SQ has been extensively studied [MAX-6], [GISH-68], [WOOD-69], [BERG-7], [GERS-78], [BERG-8], and utilized for image coding in the seventies and eighties. With the advance of computer technology, VQ was given more attention in the eighties and nineties [GERS-8], [GRAY-8]. However, scalar quantization recently regained attention with the contributions of [SHAP-9], [SAID-96], and remains widely used in transform coding because of its simplicity. Image coding standards extensively use scalar uniform quantizers, e.g. JPEG [PENN-9] and MPEG [LEGA-9]. With a scalar quantizer, the range of the individual input samples is divided into threshold intervals, whose boundaries are the threshold levels. All the values lying within a threshold interval are mapped into a single quantization level. The mapping of the input values into a limited number of quantization levels results in a distortion. Four types of scalar quantizers are principally considered in the literature; their definitions are given in the following.

Definition 1: An N-level pdf-optimized quantizer is a quantizer that minimizes the average distortion for a fixed number of levels N. This is the Max-Lloyd quantizer [MAX-6]. The threshold and quantization levels are not uniformly spread over the input and output ranges.

Definition 2: An N-level minimum-distortion uniform-threshold quantizer is a quantizer that minimizes the average distortion for a fixed number of levels N, with uniform threshold levels and non-uniform quantization levels.

Definition 3: An N-level minimum-distortion uniform quantizer is a quantizer that minimizes the average distortion for a fixed number of levels N, with both uniform threshold and uniform quantization levels. With the quantizers of definitions 1, 2, and 3, the bit rate is not controlled.

Definition 4: An N-level entropy-constrained optimum quantizer is a quantizer that minimizes the average distortion at a given bit rate.

A minimum-distortion uniform quantizer followed by entropy coding gives better performance than the Max-Lloyd quantizer (without entropy coding) in terms of rate-distortion [JAIN-89 pp5-7]. Other advantages of the uniform quantizer are the small amount of overhead data and the simplicity of its implementation. For these reasons, we limited the scope of this work to the uniform quantizer. Two approaches are possible: either the quantizer is adapted to the properties of each input signal, or it is fixed for a class of signals. In the first approach, the quantization is adaptive, and the computational cost is high. The second approach, non-adaptive quantization, is the one addressed in this paper. A non-adaptive quantizer is designed for a class of signals which are assumed to have similar properties, and in particular the same probability density function (pdf). A major concern in practical applications of coding is the robustness of the quantizer to possible variations of the input-signal pdf. Our objective is to address this robustness for the uniform quantizer by studying the effect of a possible mismatch of the input pdf compared with the pdf expected in the quantizer design. We consider both minimum-distortion uniform quantizers (definition 3) and entropy-constrained optimum uniform quantizers (definition 4). We assume that the input signal follows a generalized Gaussian (GG) distribution, which covers a wide range of signals found in practical applications. Signal modeling by the GG pdf includes the Laplacian and the Gaussian pdfs.
GG pdfs are encountered in DPCM [CUTL-5], [JAIN-89], cosine transform [REIN-8], [MULL-9], [JOSH-95], [MOSH-96], wavelet transform [MALL-89], [BARL-9] and subband coding [WOOD-86], [WEST-88]. Although scalar quantizers were widely studied in the 1970s and 1980s, to our knowledge the robustness of the uniform quantizer has not yet been addressed. A detailed study of the robustness of the Max-Lloyd quantizer (definition 1) was reported in [MAUE-79], and used in [JAYA-8]. We found no study about the mismatch of the uniform quantizers of definitions 3 and 4. In this paper, we use the Mean Square Error (MSE) and the Signal to Noise Ratio (SNR) as measures of distortion. We present the deviation of the MSE and the SNR due to a quantizer designed with a pdf model that differs from the actual pdf of the input signal. Section 2 gives the mathematical expression of the MSE and the entropy with a generalized Gaussian pdf, then the analysis of the uniform quantizer properties, and finally the analytical formulation of the rate-distortion optimization of entropy-constrained uniform quantizers. In section 3, mismatch of the quantizer relative to the shape parameter of the input pdf, and mismatch relative to the variance, are addressed. Finally, section 4 summarizes our findings and discusses them in comparison with related works.

2. Matched uniform quantizers with generalized Gaussian distributions

2.1 Notation

A scalar quantizer is a staircase function that maps the input values into a smaller range of output levels. The quantizer maps a continuous random variable X into a discrete random variable X̃. The range of the input values is divided into N = L+1 adjacent intervals, whose boundaries are the threshold levels t_0, t_1, ..., t_N. The output belongs to a finite set of quantization levels {l_0, l_1, ..., l_L}. If the i-th input value x(i) lies between the threshold levels t_j and t_{j+1}, then it is mapped into the output value x̃(i) = l_j. A uniform quantizer is defined by the number of threshold intervals N and the quantization step size q.
The number of quantization levels is also equal to N. The threshold and quantization intervals are all constant and equal to q. Midtread quantizers are symmetrical with a central quantization level l_{L/2} = 0; their number of quantization levels is always odd. Midrise quantizers have an even number of quantization levels; they cannot reconstruct a zero value because zero is a threshold level. We limited this study to scalar uniform midtread quantizers, an example of which is shown in figure 1. Extension to midrise quantizers would follow similar derivations.

Figure 1: Characteristic of a midtread quantizer (here N = L+1 = 7 levels; quantization levels l_0 = -(L/2)q, ..., l_{L/2} = 0, ..., l_L = (L/2)q; inner thresholds t_1 = -(L/2)q + q/2, ..., t_L = (L/2)q - q/2, with t_0 = -∞ and t_N = +∞).

2.2 Mean Square Error of uniform quantizers with GG pdf

The mean square error (MSE) of a quantizer is defined by:

$$D(q) = \int_{-\infty}^{+\infty} (x - \tilde{x})^2\, p(x)\, dx \qquad (1)$$

where p(x) is the pdf of the input random variable X. Without loss of generality, we assume that X is a zero-mean random variable with variance σ². For a uniform midtread quantizer with N = L+1 quantization levels and quantization step q, the MSE is:

$$D(q) = D_g(q) + D_o(q) \qquad (2)$$

where

$$D_g(q) = \int_{-q/2}^{q/2} x^2\, p(x)\,dx \;+\; 2\sum_{j=1}^{L/2} \int_{(j-1/2)q}^{(j+1/2)q} (x - jq)^2\, p(x)\,dx$$

and

$$D_o(q) = 2\int_{(L/2+1/2)q}^{+\infty} \Big(x - \tfrac{L}{2}q\Big)^2\, p(x)\,dx$$

The terms D_g(q) and D_o(q) refer to two different kinds of errors. D_o(q) becomes important when extreme values of the input are saturated by the quantizer, i.e. when the range of the input values exceeds the range of the quantizer threshold levels. This is commonly referred to as the overload distortion. This distortion is high if some input values that have a rather high probability are saturated. Conversely, D_g(q) becomes high when the full range of the input values is quantized, but in a coarse manner. This is called the granular noise. Granular noise and overload distortion have a different impact on the perceptual annoyance in image coding.
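The split of the MSE into a granular and an overload term can be checked numerically. The sketch below is our own illustration, not part of the paper; all function names are ours. It uses a unit-variance Laplacian input (β = 1) and indexes the N = 2m+1 levels by m = L/2.

```python
import math

def laplacian_pdf(x, sigma2=1.0):
    """Zero-mean Laplacian pdf with variance sigma2 (GG pdf with beta = 1)."""
    b = math.sqrt(sigma2 / 2.0)
    return math.exp(-abs(x) / b) / (2.0 * b)

def midtread_quantize(x, q, m):
    """Uniform midtread quantizer with N = 2m+1 levels 0, +/-q, ..., +/-m*q."""
    return q * max(-m, min(m, round(x / q)))

def granular_and_overload(q, m, pdf, x_max=25.0, n=100000):
    """Midpoint-rule estimate of D_g (error for |x| <= (m+1/2)q) and D_o
    (saturation error beyond that boundary), so that D = D_g + D_o."""
    dx = 2.0 * x_max / n
    d_g = d_o = 0.0
    t_last = (m + 0.5) * q            # boundary between the two error regions
    for i in range(n):
        x = -x_max + (i + 0.5) * dx
        e2 = (x - midtread_quantize(x, q, m)) ** 2 * pdf(x) * dx
        if abs(x) <= t_last:
            d_g += e2
        else:
            d_o += e2
    return d_g, d_o
```

For a 7-level quantizer (m = 3), a very small step makes the overload term dominate and a large step makes the granular term dominate, reproducing the behaviour described above.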

The input signal may follow a great variety of distributions; for a broad investigation of the quantization error we used the generalized Gaussian (GG) distribution:

$$p_X(x) = K\, e^{-\left(|x|/\alpha\right)^{\beta}} \qquad (3)$$

where $K = \dfrac{\beta}{2\alpha\,\Gamma(1/\beta)}$, with α > 0 and β > 0, and where Γ is the gamma function. The variance of this pdf is $\sigma^2 = \alpha^2\,\Gamma(3/\beta)/\Gamma(1/\beta)$. Particular cases of the generalized Gaussian distribution are the Laplacian pdf (β = 1) and the Gaussian pdf (β = 2). When β → 0, the distribution tends towards an impulse. When β → ∞, the distribution tends towards the uniform one; it can easily be demonstrated that its height is $1/(2\sigma\sqrt{3})$ and its width $2\sigma\sqrt{3}$. Similarly, when σ² → 0 the distribution tends towards an impulse, and when σ² → ∞ the distribution becomes wider and wider, its amplitude tending towards zero.

Figure 2: Pdfs of generalized Gaussian distributions: (a) various shape parameters β at fixed σ² = 1; (b) various variances σ².

Figure 2 gives plots of generalized Gaussian distributions for various values of β and σ². As shown in figure 2-a, the shape of the distribution can be modified by varying the parameter β without changing the variance. The parameter β is referred to as the shape parameter. In image transform coding based on the Discrete Cosine Transform (DCT) or Subband Coding (SBC), the values of β encountered in practice are often in the range of 0.5 to 1.
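Equation (3) and its variance constraint are easy to verify numerically; a minimal sketch (function names are ours):

```python
import math

def gg_pdf(x, beta, sigma2=1.0):
    """Generalized Gaussian pdf of equation (3): beta = 1 gives the Laplacian
    pdf, beta = 2 the Gaussian pdf; alpha is set so the variance is sigma2."""
    alpha = math.sqrt(sigma2 * math.gamma(1.0 / beta) / math.gamma(3.0 / beta))
    k = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return k * math.exp(-((abs(x) / alpha) ** beta))

def mass_and_variance(pdf, x_max=30.0, n=200000):
    """Midpoint-rule check that the pdf integrates to 1 with the set variance."""
    dx = 2.0 * x_max / n
    mass = var = 0.0
    for i in range(n):
        x = -x_max + (i + 0.5) * dx
        p = pdf(x) * dx
        mass += p
        var += x * x * p
    return mass, var
```

For instance, gg_pdf(0.0, 2.0) coincides with the standard normal density at the origin, 1/√(2π).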

The quantization error D(q) can be derived by incorporating (3) into (2). The result is:

$$D(q) = D_g(q) + D_o(q) \qquad (4)$$

where

$$D_g(q) = \frac{2K\alpha}{\beta}\Big\{ \alpha^2\,\Gamma(C)\,\gamma(b_0,C) + \sum_{j=1}^{L/2}\big[\, \alpha^2\,\Gamma(C)[\gamma(b_2,C)-\gamma(b_1,C)] - 2jq\alpha\,\Gamma(D)[\gamma(b_2,D)-\gamma(b_1,D)] + j^2q^2\,\Gamma(E)[\gamma(b_2,E)-\gamma(b_1,E)] \,\big]\Big\}$$

and

$$D_o(q) = \frac{2K\alpha}{\beta}\Big\{ \alpha^2\,\Gamma(C)[1-\gamma(b_3,C)] - Lq\alpha\,\Gamma(D)[1-\gamma(b_3,D)] + \frac{L^2q^2}{4}\,\Gamma(E)[1-\gamma(b_3,E)] \Big\}$$

with $\sigma^2 = \alpha^2\Gamma(3/\beta)/\Gamma(1/\beta)$, $K = \beta/(2\alpha\Gamma(1/\beta))$, $C = 3/\beta$, $D = 2/\beta$, $E = 1/\beta$,

$$b_0 = \Big(\frac{q}{2\alpha}\Big)^{\beta};\quad b_1 = \Big(\frac{(j-1/2)\,q}{\alpha}\Big)^{\beta};\quad b_2 = \Big(\frac{(j+1/2)\,q}{\alpha}\Big)^{\beta};\quad b_3 = \Big(\frac{(L+1)\,q}{2\alpha}\Big)^{\beta}$$

and where γ denotes the normalized incomplete gamma function, $\gamma(x,a) = \frac{1}{\Gamma(a)}\int_0^x t^{a-1}e^{-t}\,dt$.

Figure 3: Quantization error D as a function of the quantization step q, for a fixed number of levels, with β = 1 and σ² = 1.

Figure 3 illustrates the quantization error (4) as a function of the quantization step q. The quantization error reaches a minimum for a value of q denoted here q_opt/D. The MSE is also a function of the number of steps N, of the shape parameter β, and of the variance σ².

2.3 Minimum-MSE uniform quantizer

The quantization step that minimizes the MSE is obtained by differentiating D(q) with respect to q and equating the result to zero. The derivative of D(q) is:

$$\frac{dD(q)}{dq} = \frac{dD_g(q)}{dq} + \frac{dD_o(q)}{dq} \qquad (5)$$

where

$$\frac{dD_g(q)}{dq} = -\frac{4K\alpha}{\beta}\sum_{j=1}^{L/2} j\,\Big\{ \alpha\,\Gamma(D)[\gamma(b_2,D)-\gamma(b_1,D)] - jq\,\Gamma(E)[\gamma(b_2,E)-\gamma(b_1,E)] \Big\}$$

and

$$\frac{dD_o(q)}{dq} = -\frac{2KL\alpha}{\beta}\Big\{ \alpha\,\Gamma(D)[1-\gamma(b_3,D)] - \frac{L}{2}q\,\Gamma(E)[1-\gamma(b_3,E)] \Big\}$$

(the boundary terms of the Leibniz rule cancel between adjacent intervals). This equation allows the calculation of q_opt/D for GG pdfs with any value of the parameters. Usually, books give tables of q_opt/D for a set of values of N, and only for a few distributions like the Laplacian and the Gaussian ones [JAIN-89]. Formula (5) allows one to determine the quantization step of the minimum-MSE uniform quantizer by solving the non-linear equation dD(q)/dq = 0. We used toolboxes of the MatLab package from MathWorks, which solve non-linear equations by the Gauss-Newton method.

Figure 3 shows that the penalty for choosing q too small compared with q_opt/D is much more important than the penalty for choosing q too high. The overload and granularity error curves provide an insight into the penalties observed when q departs from q_opt/D. The optimum quantization step is reached when the sum of the overload and granularity noise is minimal. Below q_opt/D the overload error is dominant, and above q_opt/D the granularity error is dominant. The MSE increases more rapidly with increasing overload than with increasing granularity. If the quantization step q is not the minimum-distortion value, overload distortion should be carefully avoided because it is more penalizing than granularity. In practical situations, it is preferable to over-estimate q compared with q_opt/D rather than to under-estimate it.
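The paper solves dD(q)/dq = 0 with a Gauss-Newton routine; since D(q) is unimodal in q (figure 3), a golden-section search on D(q) itself is a simple stand-in. A sketch under that unimodality assumption (function names, grid sizes, and tolerances are ours; m = L/2 so that N = 2m+1):

```python
import math

def gg_mse(q, m, beta, sigma2=1.0, x_max=20.0, n=20000):
    """MSE of a (2m+1)-level uniform midtread quantizer for a GG input,
    by midpoint-rule integration."""
    alpha = math.sqrt(sigma2 * math.gamma(1.0 / beta) / math.gamma(3.0 / beta))
    k = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    dx = 2.0 * x_max / n
    total = 0.0
    for i in range(n):
        x = -x_max + (i + 0.5) * dx
        xq = q * max(-m, min(m, round(x / q)))   # quantize with saturation
        total += (x - xq) ** 2 * k * math.exp(-((abs(x) / alpha) ** beta)) * dx
    return total

def q_opt_d(m, beta, sigma2=1.0, lo=1e-3, hi=5.0, tol=1e-3):
    """Golden-section search for the minimum-MSE step q_opt/D,
    assuming D(q) is unimodal on [lo, hi]."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if gg_mse(c, m, beta, sigma2) < gg_mse(d, m, beta, sigma2):
            b = d
        else:
            a = c
    return 0.5 * (a + b)
```

As the text predicts, the step found this way decreases when the shape parameter β increases (at fixed variance), and simply scales with σ when the variance changes.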

Figure 4: MSE and SNR as a function of the quantization step, for different values of N, β, and σ².

Figure 4 shows the MSE as a function of q, given various numbers of quantization levels N (figure 4-a), shape parameters β (figure 4-c), and variances σ² (figure 4-e). A plot of the Signal to Noise Ratio (SNR), with SNR = 10 log₁₀(σ²/MSE), is added as a companion of the MSE (figures 4-b, 4-d, and 4-f). Figures 4-a and 4-b show, as expected, that large values of N result in a low distortion. In addition, as q → 0, all the MSE curves for the various values of N converge towards the same asymptotic curve. In spite of what a first interpretation of the influence of β on q_opt/D could suggest, figures 4-c and 4-d show that the values of the optimal quantization step decrease as the shape parameter β increases. One could have expected q_opt/D to increase with β, in order to limit

the overload. However, when the shape parameter increases, the pdf tends towards the uniform distribution, as seen in figure 2-a. For uniform quantizers with a uniform pdf, the whole input range is used, and the saturation distortion tends towards zero. Thus, when β increases, the pdf-optimization procedure yields small values of q_opt/D in order to limit the granularity distortion. When properly optimized, quantizers designed for large-β pdfs will perform better than quantizers designed for small-β pdfs. However, a quantization step different from q_opt/D is more penalizing for large β than for small β. In practical situations, minimum-MSE uniform quantizers with input pdfs of large shape parameters will perform better than with input pdfs of small shape parameters, but the penalty for a poor optimization of the quantization step is more important with large shape parameter pdfs.

Concerning the influence of σ² on q_opt/D, figures 4-e and 4-f show that the values of the optimal quantization step increase as the variance increases. Pdfs with a large variance are widely spread, but they do not tend towards a uniform pdf; their shape remains that of a generalized Gaussian distribution. The saturation has to be limited during the quantizer optimization process by having a large quantization step. Figure 4-f shows that uniform quantizers with various input pdf variances all have the same maximum SNR value. However, the penalty for poorly optimized quantizers is higher with small variances. In practical situations, minimum-MSE uniform quantizers designed for different variances perform identically in terms of SNR, but the penalty for a poor optimization of the quantization step is more important with small variance pdfs.

2.4 Entropy of uniform quantizers with GG pdf

The entropy of the output of a quantizer is the minimum amount of information to be transmitted in order to be able to reconstruct the quantizer output with an arbitrarily small error.
It is also referred to as the lower-bound data rate, or bit rate, for a given distortion. It is expressed in bits per sample (bps). It is given by:

$$H_Q = -\sum_i p_i \log_2(p_i) \quad \text{bits/sample} \qquad (6)$$

Application of formula (6) to uniform midtread quantizers yields:

$$H_Q = -\Big\{ 2\sum_{j=1}^{L/2-1} p_j \log_2(p_j) + p_0 \log_2(p_0) + 2\,p_{L/2} \log_2(p_{L/2}) \Big\} \qquad (7)$$

where

$$p_j = \int_{(j-1/2)q}^{(j+1/2)q} p(x)\,dx, \qquad p_0 = \int_{-q/2}^{q/2} p(x)\,dx, \qquad p_{L/2} = \int_{(L-1)q/2}^{+\infty} p(x)\,dx$$

Incorporating the pdf definition (3) into (7) results in:

$$p_j = \frac{K\alpha}{\beta}\,\Gamma(E)\,[\gamma(b_5,E) - \gamma(b_4,E)], \qquad p_0 = \frac{2K\alpha}{\beta}\,\Gamma(E)\,\gamma(b_6,E), \qquad p_{L/2} = \frac{K\alpha}{\beta}\,\Gamma(E)\,[1 - \gamma(b_7,E)] \qquad (8)$$

with

$$b_4 = \Big(\frac{(j-1/2)\,q}{\alpha}\Big)^{\beta};\quad b_5 = \Big(\frac{(j+1/2)\,q}{\alpha}\Big)^{\beta};\quad b_6 = \Big(\frac{q}{2\alpha}\Big)^{\beta};\quad b_7 = \Big(\frac{(L-1)\,q}{2\alpha}\Big)^{\beta}$$

Figure 5: Entropy as a function of the quantization step for different values of N, β, and σ².

Figure 5 plots the entropy of formula (7) as a function of q, for different values of N, β, and σ². There is a value of q that maximizes the entropy, denoted q_opt/R. As expected, the entropy of quantizers with a large number of levels N is greater than the entropy of quantizers with small N, and q_opt/R decreases with increasing N. q_opt/R increases with increasing β (on the contrary, q_opt/D decreases with increasing β), and the entropy of minimum-MSE quantizers increases with increasing β. q_opt/R increases with increasing σ² (similarly, q_opt/D increases with increasing σ²), while the entropy of minimum-MSE quantizers is independent of σ².
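The bin probabilities of formula (7) and the resulting entropy can be cross-checked by numerical integration; a sketch for a unit-variance Laplacian input (function names are ours; m = L/2 so that N = 2m+1):

```python
import math

def laplacian_pdf(x):
    """Unit-variance Laplacian pdf (GG pdf with beta = 1, sigma2 = 1)."""
    b = 1.0 / math.sqrt(2.0)
    return math.exp(-abs(x) / b) / (2.0 * b)

def bin_prob(a, b, pdf, n=4000):
    """Midpoint-rule integral of the pdf over the threshold interval [a, b]."""
    dx = (b - a) / n
    return sum(pdf(a + (i + 0.5) * dx) for i in range(n)) * dx

def output_entropy(q, m, pdf, x_max=25.0):
    """Entropy (bits/sample) of a (2m+1)-level uniform midtread quantizer:
    central bin, m-1 symmetric inner bins per side, and two saturating bins."""
    probs = [bin_prob(-q / 2.0, q / 2.0, pdf)]
    for j in range(1, m):
        pj = bin_prob((j - 0.5) * q, (j + 0.5) * q, pdf)
        probs += [pj, pj]                       # levels +j*q and -j*q
    hi = max(x_max, (m - 0.5) * q + 10.0)       # keep the tail bin non-empty
    p_out = bin_prob((m - 0.5) * q, hi, pdf)
    probs += [p_out, p_out]                     # outermost (saturating) levels
    return -sum(p * math.log2(p) for p in probs if p > 0.0)
```

Consistent with figure 5, H_Q never exceeds log₂(N) and vanishes when q is so large that essentially all the probability mass falls into the central bin.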

2.5 Entropy-constrained uniform quantizer with GG pdf

The quantization step of an entropy-constrained quantizer should minimize the distortion subject to a fixed entropy constraint H₀. Using the Lagrange multiplier method, the solution of this problem minimizes the following functional:

$$J(q) = D(q) + \lambda\,[H_Q(q) - H_0] \qquad (9)$$

By differentiating J(q) with respect to q and λ, equating the results to zero, and choosing λ so that H_Q(q) = H₀, the problem is to solve the system of non-linear equations:

$$\frac{dJ}{dq} = \frac{dD(q)}{dq} + \lambda\,\frac{dH_Q(q)}{dq} = 0, \qquad \frac{dJ}{d\lambda} = H_Q(q) - H_0 = 0 \qquad (10)$$

After incorporating formula (5) and the derivative of the entropy given in the appendix into (10), we are able to solve (10) for any a priori number of levels N, any fixed entropy H₀, and any GG input pdf. We used MatLab toolboxes that solve sets of non-linear equations with the Gauss-Newton method. Our approach results in a practical method for designing N-level entropy-constrained uniform quantizers (definition 4). Note that minimum-MSE uniform quantizers (definition 3) are designed simply by taking λ = 0 and relaxing the constraint H_Q(q) - H₀ = 0.

The performance of an entropy-constrained optimum quantizer is assessed by its rate-distortion curve R(D). It is well known that for each pdf there exists a bound, called the rate-distortion bound R_B(D), such that:

$$R(D) \ge R_B(D) \qquad (11)$$

The minimum bit rate needed to transmit a quantized signal is determined by the entropy of the quantizer output. For small q, this entropy H_Q is given by:

$$H_Q \approx H_s - \log_2(q) \qquad (12)$$

where H_s is the differential entropy of the source:

$$H_s = -\int_{-\infty}^{+\infty} p(x) \log_2 p(x)\,dx \qquad (13)$$

For minimum-MSE uniform quantizers, the uniform pdf yields the lowest possible distortion. According to the well-known formula of the distortion for uniform quantizers with a uniform pdf, we have:

$$D = \frac{q^2}{12} \qquad (14)$$

Incorporating (14) into (12) results in the Gish-Pierce asymptote of the rate-distortion performance.
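As an illustrative stand-in for the Gauss-Newton solution of system (10), one can sweep the step q, compute the (H_Q, D) pairs, and keep the lowest-distortion step that meets the rate constraint. A sketch (the grid, the function names, and the unit-variance Laplacian input are our choices, not the paper's):

```python
import math

def lap(x):
    """Unit-variance Laplacian pdf (beta = 1, sigma2 = 1)."""
    b = 1.0 / math.sqrt(2.0)
    return math.exp(-abs(x) / b) / (2.0 * b)

def mse(q, m, pdf, x_max=20.0, n=20000):
    """MSE of the (2m+1)-level uniform midtread quantizer (midpoint rule)."""
    dx = 2.0 * x_max / n
    tot = 0.0
    for i in range(n):
        x = -x_max + (i + 0.5) * dx
        xq = q * max(-m, min(m, round(x / q)))
        tot += (x - xq) ** 2 * pdf(x) * dx
    return tot

def entropy(q, m, pdf, n=2000):
    """Output entropy (bits/sample) of the same quantizer."""
    def integral(a, b):
        dx = (b - a) / n
        return sum(pdf(a + (i + 0.5) * dx) for i in range(n)) * dx
    probs = [integral(-q / 2.0, q / 2.0)]
    for j in range(1, m):
        pj = integral((j - 0.5) * q, (j + 0.5) * q)
        probs += [pj, pj]
    p_out = integral((m - 0.5) * q, (m - 0.5) * q + 15.0)  # truncated tail bin
    probs += [p_out, p_out]
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

def entropy_constrained_step(h0, m, pdf, qs):
    """Lowest-distortion step whose output entropy does not exceed h0:
    a grid-search stand-in for solving the system (10)."""
    feasible = [(mse(q, m, pdf), q) for q in qs if entropy(q, m, pdf) <= h0]
    return min(feasible)[1]
```

With 7 levels and a 2 bits/sample budget, the constraint is active, so the selected step is larger than the unconstrained minimum-MSE step, in line with the trade-off read off figure 6.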
Figure 6 shows an example of an R(D) curve for a 5-level uniform quantizer with a Laplacian unit-variance pdf (β = 1 and σ² = 1). The relation between the entropy R, the quantization step q, and the distortion D is shown in the figure. The R(D) curve (figure 6-b) has an optimum that is reached when the best compromise between the highest rate and the lowest distortion is achieved. This point corresponds to the optimum entropy-

constrained uniform quantizer. Its quantization step is denoted q_opt/R-D. Below or above this optimum quantization step, the distortion is higher (figures 6-b and 6-c). For q > q_opt/R-D the entropy is lower, and for q < q_opt/R-D the entropy is higher, which is less favorable to compression (figure 6-a). Similarly to minimum-MSE quantizers, for entropy-constrained quantizers it is preferable to over-estimate q compared with q_opt/R-D rather than to under-estimate it.

Figure 6: Rate-distortion performance R(D) of matched uniform quantizers (6-b), with the corresponding curves R(q) (6-a) and q(D) (6-c); the Gish-Pierce lower bound is shown for β = 1 and σ² = 1.

Figure 7: Rate-distortion performance R(D) of matched uniform quantizers for various values of N (7-a), β (7-b), and σ² (7-c); GP denotes the Gish-Pierce lower bound.

Figure 7 shows the rate-distortion performance of entropy-constrained uniform quantizers, given various values of N (figure 7-a), β (figure 7-b), and σ² (figure 7-c). As expected, large values of N result in entropy-constrained quantizers with a high entropy and a low distortion (figure 7-a). The R(D) performance is closer to the Gish-Pierce lower bound for large N. The difference from the lower bound is less than 0.5 bit with 16 levels, and more than 1 bit with 7 levels and less (β = 1 and σ² = 1). The performance of optimum entropy-constrained uniform quantizers increases with increasing β (figure 7-b), but the distortion increase, when one departs from the optimum, also grows with β. The performance of entropy-constrained uniform quantizers is independent of the variance, whether they are optimum or not. These observations are similar to the findings of section 2.3 for minimum-MSE quantizers.

3. Mismatched uniform quantizers

3.1 Mismatch relative to the shape parameter

Quantizer mismatch refers to the practical situation of non-adaptive quantization when the input pdf is different from the pdf expected in the design of the quantizer. Shape mismatch occurs when the shape parameter β_X of the input signal pdf differs from the

shape parameter β_Q used for the quantizer design (i.e. for determining the optimum quantization step). Various generalized Gaussian pdf shapes were given in figure 2-a with a fixed variance σ² = 1. Without loss of generality, unit-variance pdfs will be considered throughout the current section.

3.1.1 Minimum-MSE uniform quantizers

As a first insight regarding the effect of the shape parameter on the quantizer, figure 8 gives the distortion as a function of β for MSE-optimized quantizers, i.e. when q_opt/D is evaluated and used for each point of the curve. Figure 8 shows that the distortion of matched minimum-MSE uniform quantizers decreases with increasing β.

Figure 8: Distortion of matched minimum-MSE uniform quantizers as a function of the shape parameter β, for various numbers of levels N.

Figure 9 illustrates the relative performance of uniform quantizers when the quantization step departs from the optimum, i.e. in case of mismatch relative to the shape parameter. It shows the distortion as a function of β, each curve being computed with only one value of q_opt/D. Figure 9-b shows that when β_X < β_Q, the SNR is much lower than at the expected optimal point where β_X = β_Q. If the input pdf has a shape parameter lower than the quantizer shape parameter, the performance of the quantizer for this input pdf will be poor. Conversely, when β_X > β_Q, the SNR is slightly higher than expected. For β_X much greater than β_Q, the SNR reaches an asymptote. If the input pdf has a shape parameter higher than the quantizer shape parameter, the quantizer performance for this input pdf will be slightly better than expected, because the input pdf is closer to the uniform pdf than the quantizer model pdf itself.

Figure 9: MSE and SNR as a function of the shape parameter β for different values of q_opt/D: shape mismatch.

As an example using data from figure 9-b, if the quantizer is optimized for a shape parameter β_Q larger than that of the input (e.g. β_X = 0.5), the observed SNR falls below the SNR expected at the design point: choosing a model pdf with too high a shape parameter compared with the real input results in poor quantizer performance compared with expectation. Conversely, if β_Q = 0.5 and the input has a larger shape parameter, the observed SNR is slightly higher than expected: choosing too small a shape parameter for the quantizer does not degrade its performance, and even slightly increases it compared with expectation, but the global quantizer performance remains relatively poor.

The study of the quantizer robustness in terms of SNR when the input shape parameter deviates from its expected value is of major interest. Figure 10 shows the deviation about the MSE-optimum β_Q, assuming that an SNR deviation of ±0.5 dB is acceptable. A deviation of -0.5 dB is observed when β_X < β_Q, and a deviation of +0.5 dB is observed when β_X > β_Q. A small deviation of β_X is enough to yield a loss of 0.5 dB, especially for small input pdf shape parameters β_X. Clearly, the robustness increases with β. This finding is in accordance with the discussion of figures 4-c and 4-d.

Figure 10: Range of the shape parameter for a deviation of ±0.5 dB (σ² = 1).

3.1.2 Entropy-constrained uniform quantizers

This section addresses the relative performance of matched and mismatched entropy-constrained quantizers with respect to the shape parameter.
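The shape-mismatch experiment of figure 9 can be reproduced numerically: design the step for a model shape β_Q, then measure the SNR on a unit-variance input of shape β_X. A sketch built on the same numerical ingredients as above (function names are ours):

```python
import math

def gg_mse(q, m, beta, x_max=15.0, n=15000):
    """MSE of a (2m+1)-level uniform midtread quantizer, unit-variance GG input."""
    alpha = math.sqrt(math.gamma(1.0 / beta) / math.gamma(3.0 / beta))
    k = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    dx = 2.0 * x_max / n
    tot = 0.0
    for i in range(n):
        x = -x_max + (i + 0.5) * dx
        xq = q * max(-m, min(m, round(x / q)))
        tot += (x - xq) ** 2 * k * math.exp(-((abs(x) / alpha) ** beta)) * dx
    return tot

def q_opt_d(m, beta, lo=1e-2, hi=4.0, tol=1e-3):
    """Golden-section search for the minimum-MSE step (D(q) assumed unimodal)."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if gg_mse(c, m, beta) < gg_mse(d, m, beta):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

def snr_shape_mismatch(beta_q, beta_x, m=3):
    """SNR (dB) when the step is designed for beta_q but the input has beta_x."""
    q = q_opt_d(m, beta_q)
    return 10.0 * math.log10(1.0 / gg_mse(q, m, beta_x))
```

By construction the matched design maximizes the SNR for its own input, so any β_Q ≠ β_X can only lower the SNR measured on the actual input; the asymmetry discussed above (β_X < β_Q hurts much more than β_X > β_Q) shows up in how far the mismatched SNR drops.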

In figure 11-a, the quantizer is matched for a Laplacian pdf (β_Q = 1). When the targeted entropy is above the optimum point of the matched quantizer, it is favorable to have an input pdf with β_X > β_Q, because the real entropy is comparable to expectations and the distortion is much lower. Here, β_X < β_Q yields a slightly better entropy than expected but a worse distortion. When the targeted entropy is below the optimal point, having β_X > β_Q does not make much difference, while having β_X < β_Q increases the distortion. As the entropy gets even lower, there is an entropy below which the real entropy and distortion are a little better than expected. Such a reversed situation, where β_X < β_Q is more favorable, is not observed in figure 11-b, where β_Q is larger. Globally, with entropy-constrained quantizers, under-estimation of the quantizer shape parameter β_Q compared with the input pdf parameter β_X is more favorable, because the real distortion is better than expected. At low bit rates, the robustness of the quantizer relative to shape mismatch is higher.

Figure 11: Rate-distortion performance of shape-mismatched entropy-constrained uniform quantizers (σ² = 1).

3.2 Mismatch relative to the variance

Variance mismatch occurs when the variance σ²_X of the input signal pdf differs from the variance σ²_Q used for the quantizer design (i.e. for determining the optimal quantization step). Various generalized Gaussian pdf variances were illustrated in figure 2-b with a fixed shape parameter β.

3.2.1 Minimum-MSE uniform quantizers

We first study both the quantization distortion and the SNR as a function of σ² for the minimum-MSE quantizer, i.e. when q_opt/D is evaluated and used for each value of σ². Figure 12 shows such plots for various numbers of quantization levels, each point being drawn with the exact optimum q corresponding to the value of σ².
The MSE of matched minimum-MSE quantizers is a linear function of the variance, and as a result the SNR is constant. This can also be deduced from section 2.3 and figures 4-e and 4-f.

Figure 12: Distortion of minimum-MSE quantizers as a function of the variance σ², for various numbers of levels N.

The value of the input pdf variance does not influence the SNR performance of minimum-MSE uniform quantizers. This result is known [JAIN-89], but equation (4) does not show an obvious linear relationship between the distortion and the variance. Figure 13 illustrates the relative performance of uniform quantizers when the quantization step departs from the optimum. It shows the distortion as a function of σ², each curve being computed with only one value of q. Figure 13-b shows that when σ²_X ≠ σ²_Q, the SNR is lower than at the optimal point σ²_X = σ²_Q. The penalty for under-estimating the variance is slightly higher than the penalty for over-estimating it, because the overload distortion rises very rapidly when the variance is greater than expected (see figures 3 and 4).

Figure 13: Distortion and SNR as a function of the variance σ² for different values of q_opt/D: variance mismatch.

As an example using data from figure 13-b, if σ²_Q = 1 and σ²_X = 0.5, then the expected SNR is 5.5 dB, but the observed SNR is lower. If σ²_Q = 0.5 and σ²_X = 1, then the expected SNR is also 5.5 dB (because the minimum-MSE quantizer performance is independent of the variance), but the observed SNR is again lower.
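The variance-mismatch experiment of figure 13 can be checked the same way: design the step for a model variance σ²_Q and evaluate the SNR on an input of variance σ²_X, keeping a Laplacian shape throughout (a sketch; function names are ours):

```python
import math

def lap_mse(q, m, sigma2, x_max_sigmas=15.0, n=15000):
    """MSE of a (2m+1)-level uniform midtread quantizer, Laplacian input."""
    scale = math.sqrt(sigma2 / 2.0)
    x_max = x_max_sigmas * math.sqrt(sigma2)
    dx = 2.0 * x_max / n
    tot = 0.0
    for i in range(n):
        x = -x_max + (i + 0.5) * dx
        xq = q * max(-m, min(m, round(x / q)))
        tot += (x - xq) ** 2 * math.exp(-abs(x) / scale) / (2.0 * scale) * dx
    return tot

def q_opt_d(m, sigma2, tol=1e-3):
    """Golden-section search for the minimum-MSE step (D(q) assumed unimodal)."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = 1e-2, 4.0 * math.sqrt(sigma2)
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if lap_mse(c, m, sigma2) < lap_mse(d, m, sigma2):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

def snr_var_mismatch(var_q, var_x, m=3):
    """SNR (dB) when the step is designed for variance var_q but the input
    actually has variance var_x."""
    q = q_opt_d(m, var_q)
    return 10.0 * math.log10(var_x / lap_mse(q, m, var_x))
```

Matched quantizers give the same SNR whatever the variance (the optimal step simply scales with σ), while any variance mismatch lowers the SNR, as stated in the text.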

All matched minimum-MSE quantizers perform the same regardless of the variance. Mismatch of the quantizer relative to the input variance is always penalizing, especially if the input variance is smaller than the variance of the quantizer.

3.2.2 Entropy-constrained uniform quantizers

This section addresses the relative performance of matched and mismatched entropy-constrained quantizers with respect to the variance. Figure 14 shows the rate-distortion of the entropy-constrained quantizer with a variance mismatch, when the quantizer design is matched to σ² = 1. Variance mismatch has no effect on the performance of entropy-constrained quantizers. The only difference lies in the existence of points on the R(D) curve: when σ²_X < σ²_Q, more points exist with entropy below the optimum, and vice-versa.

Figure 14: Rate-distortion performance of variance-mismatched entropy-constrained uniform quantizers.

4. Discussion and conclusion

We have focused this paper on scalar quantizers with generalized Gaussian distributions. GG distributions cover a wide range of possible distributions and relate well to distributions encountered in coding applications. We limited this work to uniform quantizers because they have interesting theoretical and practical properties. They give nearly optimum solutions to entropy-constrained quantization ([WOOD-69], [NOLL-78], [BERG-8]). Their simple implementation makes them attractive for waveform and image coding. A mathematical formulation of the distortion of a midtread quantizer and of its derivative, as well as a formulation of the entropy and its derivative, gives a practical method

for determining minimum-MSE and entropy-constrained quantizers. With our quantizer design method, it is possible to study in detail the properties of minimum-MSE and entropy-constrained quantizers. In particular, quantizer mismatch, which occurs when the input pdf differs from the pdf used for the quantizer design, has not been extensively studied despite its practical interest for memoryless source coding or non-adaptive quantization. [MAUE-79] reported results for mismatched Max-Lloyd quantizers. He found that the quantizer shape parameter should be chosen as a lower bound to the input shapes (α_Q ≤ α_X) and that variance mismatch is not very critical. [JAYA-8] gives results of mismatch for non-uniform and uniform minimum-MSE quantizers, only with levels. He suggests that in these conditions the performance of uniform and non-uniform quantizers is very similar, and that the difference would be more significant at higher bit rates. Our results are in agreement with the previous findings, and extend them to more pdfs, more bit-rates, and to entropy-constrained quantizers. They lead to practical conclusions for the design of uniform midtread quantizers:

Influence of the shape parameter

Let us assume that the input pdf shape parameter α_X lies in an interval [α_Xmin, α_Xmax]. If the quantizer is designed with α_Q = α_Xmin, i.e. the input shape parameter is always larger than the quantizer shape parameter, then the quantizer output distortion is lower than expected. The approach yields better performance than expected, but is conservative (better performance could be achieved for the highest values of α_X). In this situation, the quantizer mismatch corresponds to quantization steps always larger than the minimum-MSE optimum, and granularity error is important. The R(D) performance of an entropy-constrained quantizer with α_Q = α_Xmin is globally robust to shape mismatch, or gives better performance than expected.
Conversely, if α_Q = α_Xmax, the distortion will be higher than expected for small shape parameters. Here, the poor optimization results in a quantization step that is too small compared with the minimum-MSE optimum, and the overload distortion is important. The R(D) performance of an entropy-constrained quantizer with α_Q = α_Xmax is not very robust to shape mismatch, yielding higher distortions than expected, especially at higher rates. Over-estimating the shape parameter of the quantizer is penalizing; under-estimating it is slightly advantageous. The robustness of entropy-constrained quantizers increases with decreasing bit-rates.

Influence of the variance

Let us assume that the input pdf variance σ_X lies in an interval [σ_Xmin, σ_Xmax]. If the quantizer is designed with σ_Q = σ_Xmin, i.e. the input variance is always larger than the quantizer variance, then the quantizer output distortion is higher than expected. The approach yields worse results than expected, and worse than if σ_Q = σ_Xmax. In this situation, the quantizer mismatch corresponds to a quantization step always smaller than the minimum-MSE optimum, and overload error is important. The R(D) performance of an entropy-constrained quantizer with σ_Q = σ_Xmin or σ_Q = σ_Xmax is robust to variance mismatch. If σ_Q = σ_Xmax, the distortion will also be higher than expected; here, the poor optimization results in a quantization step that is too large compared with the minimum-MSE optimum, and granularity distortion occurs. Mismatch of the quantizer relative to the input variance is always penalizing. Entropy-constrained quantizers are robust to variance mismatch.
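The shape-mismatch conclusions can be checked with a small Monte Carlo sketch. This is not the paper's method: generalized Gaussian samples are drawn via the Gamma-transform trick (if G ~ Gamma(1/β), then ±G^(1/β) has density proportional to exp(-|x|^β)), the sources are normalized to unit variance, and the minimum-MSE step of a hypothetical 15-level midtread quantizer is again found by grid search. The shape values in `shapes` are arbitrary examples.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

def gg_samples(beta, n):
    """Unit-variance generalized Gaussian samples.
    Gamma transform: if G ~ Gamma(1/beta), then +/- G**(1/beta)
    has density proportional to exp(-|x|**beta)."""
    g = rng.gamma(shape=1.0 / beta, scale=1.0, size=n)
    x = np.where(rng.random(n) < 0.5, -1.0, 1.0) * g ** (1.0 / beta)
    return x * np.sqrt(gamma(1.0 / beta) / gamma(3.0 / beta))  # scale to variance 1

def mse(x, q, n_levels=15):
    """Distortion of a uniform midtread quantizer (clipping models overload)."""
    half = (n_levels - 1) // 2
    return np.mean((x - np.clip(np.round(x / q), -half, half) * q) ** 2)

def q_opt(x, steps=np.linspace(0.05, 1.2, 150)):
    """Brute-force minimum-MSE step over a grid."""
    return steps[np.argmin([mse(x, q) for q in steps])]

n = 200_000
shapes = (0.8, 1.0, 1.5, 2.0)          # 1.0 = Laplacian, 2.0 = Gaussian
sources = {b: gg_samples(b, n) for b in shapes}
opt = {b: q_opt(sources[b]) for b in shapes}

# Design for the smallest and the largest shape, apply to every source
ratios = {}
for b_q in (min(shapes), max(shapes)):
    for b_x in shapes:
        ratios[(b_q, b_x)] = mse(sources[b_x], opt[b_q]) / mse(sources[b_x], opt[b_x])
        print(f"beta_Q={b_q}, beta_X={b_x}: D/D_matched = {ratios[(b_q, b_x)]:.2f}")
```

The printed ratios compare the distortion of each mismatched design against the matched design for the same source; heavier-tailed sources (smaller shape) lead to larger optimal steps, so the two design choices err in opposite directions, as discussed above.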


More information

Multiple Description Coding: Proposed Methods And Video Application

Multiple Description Coding: Proposed Methods And Video Application Multiple Description Coding: Proposed Methods And Video Application by Saeed Moradi A thesis submitted to the Department of Electrical and Computer Engineering in conformity with the requirements for the

More information

Lloyd-Max Quantization of Correlated Processes: How to Obtain Gains by Receiver-Sided Time-Variant Codebooks

Lloyd-Max Quantization of Correlated Processes: How to Obtain Gains by Receiver-Sided Time-Variant Codebooks Lloyd-Max Quantization of Correlated Processes: How to Obtain Gains by Receiver-Sided Time-Variant Codebooks Sai Han and Tim Fingscheidt Institute for Communications Technology, Technische Universität

More information

MAHALAKSHMI ENGINEERING COLLEGE-TRICHY QUESTION BANK UNIT V PART-A. 1. What is binary symmetric channel (AUC DEC 2006)

MAHALAKSHMI ENGINEERING COLLEGE-TRICHY QUESTION BANK UNIT V PART-A. 1. What is binary symmetric channel (AUC DEC 2006) MAHALAKSHMI ENGINEERING COLLEGE-TRICHY QUESTION BANK SATELLITE COMMUNICATION DEPT./SEM.:ECE/VIII UNIT V PART-A 1. What is binary symmetric channel (AUC DEC 2006) 2. Define information rate? (AUC DEC 2007)

More information

EE5356 Digital Image Processing

EE5356 Digital Image Processing EE5356 Digital Image Processing INSTRUCTOR: Dr KR Rao Spring 007, Final Thursday, 10 April 007 11:00 AM 1:00 PM ( hours) (Room 111 NH) INSTRUCTIONS: 1 Closed books and closed notes All problems carry weights

More information

EXAMPLE OF SCALAR AND VECTOR QUANTIZATION

EXAMPLE OF SCALAR AND VECTOR QUANTIZATION EXAMPLE OF SCALAR AD VECTOR QUATIZATIO Source sequence : This could be the output of a highly correlated source. A scalar quantizer: =1, M=4 C 1 = {w 1,w 2,w 3,w 4 } = {-4, -1, 1, 4} = codeboo of quantization

More information

MARKOV CHAINS A finite state Markov chain is a sequence of discrete cv s from a finite alphabet where is a pmf on and for

MARKOV CHAINS A finite state Markov chain is a sequence of discrete cv s from a finite alphabet where is a pmf on and for MARKOV CHAINS A finite state Markov chain is a sequence S 0,S 1,... of discrete cv s from a finite alphabet S where q 0 (s) is a pmf on S 0 and for n 1, Q(s s ) = Pr(S n =s S n 1 =s ) = Pr(S n =s S n 1

More information

VID3: Sampling and Quantization

VID3: Sampling and Quantization Video Transmission VID3: Sampling and Quantization By Prof. Gregory D. Durgin copyright 2009 all rights reserved Claude E. Shannon (1916-2001) Mathematician and Electrical Engineer Worked for Bell Labs

More information

Supplementary Figure 1: Scheme of the RFT. (a) At first, we separate two quadratures of the field (denoted by and ); (b) then, each quadrature

Supplementary Figure 1: Scheme of the RFT. (a) At first, we separate two quadratures of the field (denoted by and ); (b) then, each quadrature Supplementary Figure 1: Scheme of the RFT. (a At first, we separate two quadratures of the field (denoted by and ; (b then, each quadrature undergoes a nonlinear transformation, which results in the sine

More information

Lecture 2: Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments. Tutorial 1. Acknowledgement and References for lectures 1 to 5

Lecture 2: Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments. Tutorial 1. Acknowledgement and References for lectures 1 to 5 Lecture : Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments Dr. Jian Zhang Conjoint Associate Professor NICTA & CSE UNSW COMP959 Multimedia Systems S 006 jzhang@cse.unsw.edu.au Acknowledgement

More information

Entropy-constrained quantization of exponentially damped sinusoids parameters

Entropy-constrained quantization of exponentially damped sinusoids parameters Entropy-constrained quantization of exponentially damped sinusoids parameters Olivier Derrien, Roland Badeau, Gaël Richard To cite this version: Olivier Derrien, Roland Badeau, Gaël Richard. Entropy-constrained

More information

Image Compression Basis Sebastiano Battiato, Ph.D.

Image Compression Basis Sebastiano Battiato, Ph.D. Image Compression Basis Sebastiano Battiato, Ph.D. battiato@dmi.unict.it Compression and Image Processing Fundamentals; Overview of Main related techniques; JPEG tutorial; Jpeg vs Jpeg2000; SVG Bits and

More information

SPEECH ANALYSIS AND SYNTHESIS

SPEECH ANALYSIS AND SYNTHESIS 16 Chapter 2 SPEECH ANALYSIS AND SYNTHESIS 2.1 INTRODUCTION: Speech signal analysis is used to characterize the spectral information of an input speech signal. Speech signal analysis [52-53] techniques

More information

The information loss in quantization

The information loss in quantization The information loss in quantization The rough meaning of quantization in the frame of coding is representing numerical quantities with a finite set of symbols. The mapping between numbers, which are normally

More information

SIGNAL COMPRESSION. 8. Lossy image compression: Principle of embedding

SIGNAL COMPRESSION. 8. Lossy image compression: Principle of embedding SIGNAL COMPRESSION 8. Lossy image compression: Principle of embedding 8.1 Lossy compression 8.2 Embedded Zerotree Coder 161 8.1 Lossy compression - many degrees of freedom and many viewpoints The fundamental

More information