SCALABLE AUDIO CODING USING WATERMARKING
Mahmood Movassagh, Peter Kabal
Department of Electrical and Computer Engineering, McGill University, Montreal, Canada

ABSTRACT

A scalable audio coding method is proposed using Quantization Index Modulation, a technique borrowed from watermarking. Part of the information of each layer's output is embedded (watermarked) in the previous layer. This approach saves bitrate while keeping the distortion almost unchanged, making the scalable coding system more efficient in rate-distortion terms. The results show that the proposed method outperforms scalable audio coding based on reconstruction error quantization, which is used in practical systems such as MPEG-4 AAC.

Index Terms: Scalable coding, Quantization Index Modulation, Watermarking, Entropy coding.

1. INTRODUCTION

Bit-rate scalability has long been a desired feature in multimedia communications. Without re-encoding the original signal, it allows the quality of an audio/video signal to improve as more of the total bit stream becomes available, or to be lowered if channel conditions deteriorate. Scalability can also provide robustness to packet loss for transmission over packet networks. In such systems, robust channel coding can be applied to the core bitstream so that all receivers can receive it without loss, while the rest of the bitstream is sent with normal channel coding. Thus, if the enhancement layers are lost, the signal can still be reconstructed at base-layer quality. Several scalable coding systems have been proposed so far, including systems using wavelet transforms [1], bit-plane based coding [2, 3], and fine-grain scalable coding [4, 5]. One popular scalable coding system, at the core of AAC scalable coding [6], is based on reconstruction error quantization (REQ). In REQ (Fig. 1), the signal is quantized by a quantizer designed for a minimum bit rate and acceptable distortion (the base layer).
Enhancement layers improve the quality of the base-layer signal by refining the quantization: the quantized signal is subtracted from the original, and this error signal is quantized, encoded and transmitted as the first enhancement layer. This enhancement step can be repeated to form an ordered set of layers. From the base up, each additional layer the receiver obtains is used to refine the quality of the decoded signal.

Fig. 1. Scalable audio coding based on REQ. EC is entropy coding.

In terms of rate-distortion (RD) performance, REQ is optimal for the mean square error (MSE) criterion: it asymptotically achieves the performance of an equivalent non-scalable coding system [7] if the rate is measured by the entropy of the resulting output symbols. In practical coding systems, however, the symbols must be encoded into a bitstream using an entropy coding scheme, which adds an overhead for each layer. In this paper we propose a method that improves the performance of scalable audio coding systems based on REQ. The method uses Quantization Index Modulation (QIM), a technique borrowed from watermarking [8], to embed part of the information of each layer's output in the previous layer. We will show that this approach saves bit rate while barely affecting the distortion. In the following section we discuss AAC quantization and identify a property that our method exploits to improve the performance of the REQ system. Section 3 gives a brief introduction to the QIM technique, and Section 4 introduces the proposed method, Scalable Audio Coding using Watermarking, referred to as WSAC in the rest of the paper. Simulation results are presented in Section 5 and the paper concludes in Section 6.
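The layered refinement of Fig. 1 can be sketched in a few lines of Python. This is a hypothetical toy model using generic quantize/dequantize callables, not the AAC quantizer:

```python
def req_encode(x, quantizers):
    """REQ layered coding (Fig. 1): each layer quantizes the reconstruction
    error left by the previous one. `quantizers` is a list of
    (quantize, dequantize) callables, base layer first."""
    residual, layers = x, []
    for quant, dequant in quantizers:
        idx = quant(residual)
        layers.append(idx)
        residual = residual - dequant(idx)
    return layers

def req_decode(layers, quantizers):
    # Summing the decoded layer outputs refines the reconstruction;
    # any prefix of the layer list yields a coarser but valid signal.
    return sum(dq(idx) for (_, dq), idx in zip(quantizers, layers))
```

With uniform quantizers of step 1.0 and 0.25, decoding both layers reconstructs the input to within half the finest step.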
2. AAC AND UNIFORM THRESHOLD QUANTIZATION

The quantization formula used in AAC [9] is

    i_x = sgn(x) nint(|x|^{0.75}/Δ − α),    x̂ = sgn(x) (Δ |i_x|)^{4/3},   (1)

where Δ is the step-size parameter, nint() and sgn() denote the nearest-integer and signum functions, and α is the offset value, also referred to as the "magic number". This quantization formula is based on the assumption that the input signal is Laplacian. The optimal quantizer for exponential signals, of which the Laplacian is a special case, is a Uniform Threshold Quantizer (UTQ) with a dead-zone around zero [10]. In such a quantizer, the reconstruction points are not in the middle of the quantizer intervals; there is a constant offset in each interval. However, the interval width (the step size) of the quantizer is constant (uniform threshold) except for the zero interval (the dead-zone), which is larger than the other intervals.

In high-rate theory, for the case of constrained-entropy quantization, the relation between the optimal uniform quantizer step size and its entropy is [11]

    Δ = 2^{−(R − h(X))},   (2)

where Δ is the quantizer step size, R is the quantizer output entropy and h(X) is the differential entropy of the input signal. This equation implies that doubling the resolution of the quantizer (by halving the step size) increases the output entropy by exactly one bit. The relation holds only under the high-rate assumptions: 1) the quantization cells are small enough that the source density is constant within each cell, 2) each reconstruction point is located at the center of its cell, and 3) the number of cells N is large. However, in an REQ scalable coding system a low-resolution quantizer is used in each layer, e.g. a 4-layer system with 4-bit quantizers. In the following we derive a general expression for the entropy of a UTQ and show that the entropy increment per doubling of the resolution becomes less than unity at low rates. Consider a Laplacian signal, which is a good approximation for audio signals.
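A small pure-Python sketch of the quantization pair in Eq. (1). The offset α is left as a parameter (set to 0 below), since its exact value is not given here, and nint is implemented as simple rounding; this is an illustrative reading of the formula, not a reference AAC implementation:

```python
import math

def nint(y):
    # nearest-integer function used in Eq. (1)
    return math.floor(y + 0.5)

def aac_quantize(x, delta, alpha=0.0):
    # i_x = sgn(x) * nint(|x|^0.75 / delta - alpha); alpha is the
    # "magic number" offset, left at 0 in this sketch.
    if x == 0:
        return 0
    i = nint(abs(x) ** 0.75 / delta - alpha)
    return int(math.copysign(i, x))

def aac_dequantize(ix, delta):
    # x_hat = sgn(i_x) * (delta * |i_x|)^(4/3); the 4/3 power undoes
    # the 0.75 compression applied before quantization.
    return math.copysign((delta * abs(ix)) ** (4.0 / 3.0), ix)
```

For a small step size, the round trip recovers the input closely, and the signed symmetry of Eq. (1) is preserved.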
Note that this is true even for the compressed MDCT coefficients. In AAC, the compression is applied to the input MDCT coefficients before quantization (the |x|^{0.75} function in the formula above). In the rest of the paper, by input signal we mean the MDCT coefficients after this compression. For a Laplacian signal the pdf is

    f_X(x) = (1/2) λ e^{−λ|x|},   (3)

where λ = √2/σ and σ is the standard deviation of the signal. Now consider a UTQ. The probability of the input signal falling in interval i (for i > 0) is

    p_i = (1/2)(e^{−λ t_i} − e^{−λ t_{i+1}}),   (4)

where t_i and t_{i+1} are the thresholds of interval i. Outside the dead-zone we have

    t_{i+1} = t_i + Δ,   (5)

which gives

    p_i = (1/2)(1 − e^{−λΔ}) e^{−λ t_i},   (6)

and the probability of the dead-zone becomes

    p_0 = 1 − e^{−λT},   (7)

where T = Δ/2 + t_offset (t_offset is the offset value of the UTQ). The entropy of the quantizer output can then be written as

    R = −Σ_i p_i log₂(p_i)
      = −(1 − e^{−λT}) log₂(1 − e^{−λT}) − 2 Σ_{i≥1} (1/2)(1 − e^{−λΔ}) e^{−λ t_i} log₂((1/2)(1 − e^{−λΔ}) e^{−λ t_i})
      = −(1 − e^{−λT}) log₂(1 − e^{−λT}) − (1 − e^{−λΔ}) [ (log₂(1 − e^{−λΔ}) − 1) Σ_{i≥1} e^{−λ t_i} + Σ_{i≥1} e^{−λ t_i} log₂(e^{−λ t_i}) ].   (8)

Since t_1 = T, we have

    Σ_{i≥1} e^{−λ t_i} log₂(e^{−λ t_i}) = e^{−λT} log₂(e^{−λT}) + e^{−λ(T+Δ)} log₂(e^{−λ(T+Δ)}) + e^{−λ(T+2Δ)} log₂(e^{−λ(T+2Δ)}) + ...
      = e^{−λT} log₂(e^{−λT}) (1 + e^{−λΔ} + e^{−2λΔ} + ...) − λΔ log₂(e) e^{−λT} (e^{−λΔ} + 2 e^{−2λΔ} + ...)
      = e^{−λT} log₂(e^{−λT}) / (1 − e^{−λΔ}) − λΔ log₂(e) e^{−λT} e^{−λΔ} / (1 − e^{−λΔ})²   (9)

and

    Σ_{i≥1} e^{−λ t_i} = e^{−λT} (1 + e^{−λΔ} + e^{−2λΔ} + ...) = e^{−λT} / (1 − e^{−λΔ}),   (10)

so that

    R = e^{−λT} + λ log₂(e) e^{−λT} (T + Δ e^{−λΔ}/(1 − e^{−λΔ})) − e^{−λT} log₂(1 − e^{−λΔ}) − (1 − e^{−λT}) log₂(1 − e^{−λT}).   (11)
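As a numerical sanity check on the derivation, the closed form (11) can be compared against a direct evaluation of R = −Σ p_i log₂ p_i built from (6) and (7); a minimal sketch:

```python
import math

def utq_entropy_closed_form(lam, delta, t_offset):
    # Eq. (11): output entropy of a dead-zone UTQ for a Laplacian input.
    T = delta / 2.0 + t_offset
    eT = math.exp(-lam * T)
    eD = math.exp(-lam * delta)
    return (eT
            + lam * math.log2(math.e) * eT * (T + delta * eD / (1.0 - eD))
            - eT * math.log2(1.0 - eD)
            - (1.0 - eT) * math.log2(1.0 - eT))

def utq_entropy_direct(lam, delta, t_offset, n_terms=5000):
    # Direct sum R = -sum_i p_i log2 p_i using Eqs. (6) and (7).
    T = delta / 2.0 + t_offset
    p0 = 1.0 - math.exp(-lam * T)
    R = -p0 * math.log2(p0)
    for i in range(n_terms):
        # interval probability, Eq. (6), with t_i = T + i*delta
        p = 0.5 * (1.0 - math.exp(-lam * delta)) * math.exp(-lam * (T + i * delta))
        if p <= 0.0:
            break
        R -= 2.0 * p * math.log2(p)  # two symmetric tails
    return R
```

The two agree to machine precision, confirming that (11) is consistent with the interval probabilities in (6) and (7).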
The quantization loading factor L is defined as L = x_m/σ, where σ is the standard deviation of the input signal and x_m is the quantization limit (a fixed value in AAC quantization, for instance). By properly choosing this loading factor, the probability of the signal falling beyond the quantization limit becomes negligible. This is why we used the infinite summation formula for the geometric series above: at every resolution the quantizer limit is fixed, so that probability remains the same for a given L. Normally L ≈ 7 is chosen for Laplacian signals to avoid the quantization overloading effect.

In Fig. 2 the entropy obtained from (11) is plotted against the quantization resolution for the AAC quantizer and for three different values of L. Note that changing the step-size parameter Δ in (1) changes the quantizer resolution at the same time; for instance, doubling Δ halves the resolution of the quantizer. In other words, the resolution (number of levels) can be expressed as N = 2 x_m/Δ. The horizontal axis in Fig. 2 is log₂(N).

Fig. 2. Entropy of a UTQ versus quantization resolution for three values of the input σ (i.e., three values of L).

It can be seen that for high resolutions the slope of the curves ΔR/Δb tends to unity (as predicted by the high-rate equation), while for lower resolutions it is less than unity. This fact, that at low resolutions the entropy increases by less than one bit when the resolution is doubled, is the property mentioned in the introduction, and we will take advantage of it in our proposed coding system.

3. QUANTIZATION INDEX MODULATION (QIM)

QIM is one of the techniques used in watermarking. Consider for instance the AAC quantizer, in which the quantized values are integers (the nint() function in (1)). The idea is to quantize a signal to the nearest-even or nearest-odd integer rather than to the nearest integer value. If the receiver receives an even value, a 0 bit is decoded; otherwise a 1 bit is decoded. QIM therefore enables us to watermark one bit of data into the signal, at the cost of higher distortion. Figure 3 shows QIM based on nearest-even integer quantization.

Fig. 3. Quantization Index Modulation and its equivalence to quantization with half the resolution.

However, the point (which is desired here) is that if we double the resolution of a quantizer and then use QIM, the result is equivalent to the original quantizer in terms of rate-distortion. Figure 4 shows the rate-distortion performance of an AAC quantizer without and with QIM, denoted UTQ and QIM respectively (L = 7). For each resolution, regular quantization was first performed using the UTQ; then the resolution was doubled and QIM was applied to the new quantizer for a randomly generated input bitstream.

Fig. 4. Performance of a UTQ and its equivalent QIM (L = 7).

As can be seen in the figure, the difference in performance between the UTQ and its equivalent QIM is not noticeable for b ≥ 3. Note that when performing QIM with the nearest-even integer function (NE-QIM), the dead-zone covers one interval, whereas with nearest-odd integer QIM (NO-QIM) two adjacent intervals around zero are covered by the dead-zone. However, as Fig. 4 shows, this does not much affect the performance over the desired range of resolutions (b ≥ 3).

Equation (11) gives the entropy of a UTQ in terms of the step size and offset values. When using the NE-QIM approach,
the same equation can be used to obtain the entropy, since the two quantizers are exactly equivalent. When using NO-QIM, however, there are two dead-zone intervals around zero, so the entropy expression for such a quantizer differs slightly from (11), in the last term only. Nevertheless, the same behavior of the entropy differences is repeated; in fact, the weighted average entropy of the two quantizers (NE-QIM and NO-QIM), if plotted, follows the same pattern as Fig. 2.

4. SCALABLE AUDIO CODING USING WATERMARKING

In scalable audio coding based on REQ, entropy coding is performed after quantization in each layer. The entropy coding used in AAC for this scalable coder is Huffman coding, which we also use here. For each quantization resolution a Huffman codebook is built, known to both the coder and the decoder. These codebooks contain a codeword for each quantizer reconstruction point; the codeword is found in the entropy coding process and sent to the receiver. The codewords are variable length, and their average length determines the bitrate of each layer.

The main idea in the proposed method is that, for each layer's output, we embed one bit of each codeword in the previous layer using QIM. With this approach, the average bitrate of each layer is decreased by exactly one bit (except the base layer, which has no preceding layer):

    Σ_i p_i (w_i − 1) = Σ_i p_i w_i − Σ_i p_i = B − 1,   (12)

where B is the average bitrate and the w_i are the codeword lengths. On the other hand, since we are using QIM for the previous layer, to keep the distortion unchanged we must double the quantization resolution of that layer. This is done for all layers except the last, for which no QIM is performed (it has no succeeding layer). However, as we showed in Section 2, at low resolution (which is the case for each layer of REQ scalable coding), doubling the quantization resolution increases the output entropy (and hence the bitrate after entropy coding) by less than one bit.
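The parity-based embedding and extraction used here can be sketched with a toy uniform quantizer (an illustration of the QIM idea, not the AAC rule of Eq. (1)):

```python
def qim_embed(x, delta, bit):
    # Quantize x/delta to the nearest even (bit = 0) or odd (bit = 1) index.
    k = round(x / delta)
    if k % 2 != bit:
        # step to the nearer neighbouring index with the required parity
        k += 1 if x / delta > k else -1
    return k * delta

def qim_decode(xq, delta):
    # The parity of the received index carries the watermarked bit.
    return round(xq / delta) % 2
```

Embedding moves the reconstruction by at most one step, which is exactly the "quantization with half the resolution" view of Fig. 3.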
Therefore, we save one bit in the bitrate of a layer by watermarking into the previous layer, and at the same time increase the bitrate of that layer by less than one bit by doubling its quantization resolution, which in total yields a saving in bitrate:

    B′ = (B_1 + ΔB_1) + (B_2 + ΔB_2 − 1) + (B_3 + ΔB_3 − 1) + ... + (B_M − 1)
       = B − [(M − 1) − ΔB_1 − ΔB_2 − ... − ΔB_{M−1}],   (13)

where B′ is the new total bitrate using WSAC, B is the total bitrate using REQ, B_i is the bitrate of layer i before using QIM, M is the number of layers, and ΔB_i is the increase in the bitrate of layer i after using QIM, which is less than unity. Assuming the same resolution is used for all layers, so that the bitrate increment equals ΔB for each of them, we can write

    B′ = B − (M − 1)(1 − ΔB).   (14)

This gives a clear expression for the saving in bitrate, which equals (M − 1)(1 − ΔB): the more layers, the more we save.

4.1. Coding

Consider the REQ scalable coding system (Fig. 1). First we perform coding based on REQ from the first layer to the last; however, instead of regular quantizers we use doubled-resolution quantizers and apply NE-QIM to each layer. In the next step, from the last layer to the first, we perform the following algorithm: if the ith bit of the output codeword for the current layer is 1, requantize the input of the previous layer using NO-QIM and obtain the new codeword; otherwise, keep the codeword obtained in the first step. This algorithm is repeated for each layer. Note that the choice of i is made based on the resulting probabilities of the watermarked bits, which directly affect the performance of the system; a specific i should be determined and known to both coder and decoder. Since the codewords are variable length and some of them might be shorter than i, circular shifting can be used to solve this problem. Using this technique, our simulations show that taking the last bit of each codeword for watermarking leads to good results. Figure 5 shows the block diagram of a 3-layer WSAC.
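The bookkeeping of Eqs. (12)-(14) can be checked numerically. The probabilities, codeword lengths, and ΔB value used below are hypothetical, chosen only to illustrate the arithmetic:

```python
def avg_len(probs, lengths):
    # average codeword length; dropping one bit from every codeword
    # lowers the average by exactly 1, as in Eq. (12)
    return sum(p * w for p, w in zip(probs, lengths))

def wsac_total_rate(B, M, delta_B):
    # Eq. (14): B' = B - (M - 1)(1 - delta_B), assuming all M layers
    # use the same resolution and pay the same increment delta_B < 1
    return B - (M - 1) * (1.0 - delta_B)
```

With ΔB < 1 the saving (M − 1)(1 − ΔB) grows with the number of layers, matching the claim above.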
Note that the last layer remains unchanged, since it does not have any succeeding layer.

Fig. 5. Block diagram of a 3-layer WSAC.

4.2. Decoding

The receiver starts decoding from the base (first) layer up to whatever level it receives. For each layer, after decoding the variable-length codeword, if the quantization index is even the ith bit of the next layer's output codeword is taken to be 0; otherwise it is 1. This bit is used as the ith bit of the next layer's output codeword (if that layer is received).
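The bit-recovery part of this decoding rule is just a parity check on each decoded index; a minimal sketch (function name is ours, for illustration):

```python
def wsac_extract_bits(layer_indices):
    """Given the decoded quantization indices, one per received layer
    (base layer first), recover the bit that each layer carries for its
    successor's codeword: even index -> 0, odd index -> 1."""
    # the last layer carries no bit, since it has no succeeding layer
    return [idx % 2 for idx in layer_indices[:-1]]
```

These recovered bits are then spliced back into the ith position of each following layer's codeword before Huffman decoding.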
In the REQ system, we simply add the outputs of the layers to reconstruct the original signal in a scalable manner. Here, however, a modification to the reconstruction is required. Consider a 2-layer WSAC. In the first step, NE-QIM is used for the first layer; in the next step, however, NO-QIM might be used for this layer, depending on the second layer's output. Reconstruction of the original signal at the receiver depends on which QIM was used for the first layer, because the input of the second layer, which is the reconstruction error of the first layer, was formed in the first step, in which NE-QIM was used (see Fig. 6). The reconstruction error resulting from NE-QIM is denoted by e_e in the figure. If the decoded quantizer index for the first layer is odd, which means NO-QIM was applied to the first layer, there are two possible cases for reconstructing the original signal: e_e positive or negative. If e_e is positive, the reconstruction formula is

    x = x̂ + e_e − Δ,   (15)

and for the negative case

    x = x̂ + e_e + Δ.   (16)

Fig. 6. Reconstruction scenarios in WSAC.

If the decoded quantizer index for the first layer is even, reconstruction is simply performed by

    x = x̂ + e_e.   (17)

Note that what the receiver actually uses is the quantized version of e_e, which is the output of the next layer. This reconstruction algorithm is used for all layers in WSAC. Note that for each layer the appropriate step size Δ should be used, which might differ from that of the other layers.

5. SIMULATION AND RESULTS

Two common REQ cases were considered for our simulation: a 4-layer system with 4-bit quantizers in each layer, and a 5-layer system with 3-bit quantizers. For WSAC, since the resolutions are doubled, the corresponding quantizers used were 5-bit and 4-bit respectively. The quantization was performed using the AAC quantization formula. A set of Huffman coding tables was generated for the different resolutions and used for entropy coding in both REQ and WSAC. Laplacian random variables were generated as the input signal with the desired values of the L parameter.

Fig. 7. Comparison of 4-layer WSAC and REQ (L = 7). 4-bit quantizers used for REQ and 5-bit quantizers for WSAC (except the 4-bit quantizer used for the last layer).

Fig. 8. Comparison of 5-layer WSAC and REQ (L = 7). 3-bit quantizers used for REQ and 4-bit quantizers for WSAC (except the 3-bit quantizer used for the last layer).

Figures 7 and 8 compare WSAC and REQ scalable coding in terms of bitrate-distortion performance: the first for the 4-layer systems and the second for the 5-layer systems. The points show the possible rate-distortion pairs, that is, for a 1-layer system, a 2-layer system, and so on. As can be seen in these two figures, when the number of layers increases the performance of WSAC becomes considerably better than that of REQ; the more layers used, the better WSAC performs compared to REQ. The only case where REQ works better is a
one-layer system, which is obvious since in that case a double-resolution quantizer is used in the base layer while there are no further layers, and hence no savings from QIM for the other layers. In fact, the proposed method starts to outperform as soon as more than one layer is sent. Comparing the two figures also reveals that in the 5-layer systems WSAC's advantage increases, since ΔB in (14) is smaller for a 3-bit quantizer than for the 4-bit one.

Fig. 9. Comparison of 4-layer WSAC and REQ for a larger L. 4-bit quantizers used for REQ and 5-bit quantizers for WSAC (except the 4-bit quantizer used for the last layer).

In Fig. 9 the performance of the 4-layer systems is compared for a larger value of L. It can be seen that the performance difference between the two systems becomes larger for larger L, or equivalently, smaller input variance.

6. CONCLUSION

REQ scalable coding is a practical scalable coding scheme used in MPEG-4 audio coding. We proposed a modified version of such a system in which a watermarking technique, QIM, is used to embed part of the information of each layer in the previous one. The proposed method considerably outperforms REQ scalable coding in terms of rate-distortion and can be considered a suitable replacement for practical scalable audio coders.

7. REFERENCES

[1] D. Ning and M. Deriche, "A bitstream scalable audio coder using a hybrid WLPC-wavelet representation," in Proc. IEEE ICASSP, Apr. 2003, vol. 5, pp. V-417.

[2] H. Huang, H. Shu, and S. Rahardja, "Bit-plane arithmetic coding for Laplacian source," in Proc. IEEE ICASSP, Mar. 2010.

[3] D. H. Kim, J. H. Kim, and S. W. Kim, "Scalable lossless audio coding based on MPEG-4 BSAC," in Proc. 113th AES Conv., Oct. 2002, paper 5679.

[4] R. Yu, S. Rahardja, L. Xiao, and C. C. Ko, "A fine granular scalable to lossless audio coder," IEEE Trans. Audio, Speech, Lang. Process., vol. 14, no. 4, Jul. 2006.

[5] M. Movassagh, J. Thiemann, and P. Kabal, "Joint entropy-scalable coding of audio signals," in Proc. IEEE ICASSP, Mar. 2012.

[6] R. Geiger, J. Herre, S. Kim, X. Lin, S. Rahardja, M. Schmidt, and R. Yu, "ISO/IEC MPEG-4 high-definition scalable advanced audio coding," in Proc. 120th AES Conv., May 2006, paper 6791.

[7] A. Aggarwal, S. L. Regunathan, and K. Rose, "Efficient bit-rate scalability for weighted squared error optimization in audio coding," IEEE Trans. Audio, Speech, Lang. Process., vol. 14, no. 4, Jul. 2006.

[8] B. Chen and G. W. Wornell, "Quantization index modulation: a class of provably good methods for digital watermarking and information embedding," IEEE Trans. Inform. Theory, vol. 47, no. 4, pp. 1423-1443, May 2001.

[9] M. Bosi, K. Brandenburg, S. Quackenbush, L. Fielder, K. Akagiri, H. Fuchs, M. Dietz, J. Herre, G. Davidson, and Y. Oikawa, "ISO/IEC MPEG-2 advanced audio coding," J. Audio Eng. Soc., vol. 45, no. 10, pp. 789-814, Oct. 1997.

[10] G. J. Sullivan, "Efficient scalar quantization of exponential and Laplacian random variables," IEEE Trans. Inform. Theory, vol. 42, no. 5, pp. 1365-1374, Sep. 1996.

[11] R. M. Gray and D. L. Neuhoff, "Quantization," IEEE Trans. Inform. Theory, vol. 44, no. 6, pp. 2325-2383, Oct. 1998.
More informationRate-Distortion Based Temporal Filtering for. Video Compression. Beckman Institute, 405 N. Mathews Ave., Urbana, IL 61801
Rate-Distortion Based Temporal Filtering for Video Compression Onur G. Guleryuz?, Michael T. Orchard y? University of Illinois at Urbana-Champaign Beckman Institute, 45 N. Mathews Ave., Urbana, IL 68 y
More informationSeismic compression. François G. Meyer Department of Electrical Engineering University of Colorado at Boulder
Seismic compression François G. Meyer Department of Electrical Engineering University of Colorado at Boulder francois.meyer@colorado.edu http://ece-www.colorado.edu/ fmeyer IPAM, MGA 2004 Seismic Compression
More informationOn Scalable Coding in the Presence of Decoder Side Information
On Scalable Coding in the Presence of Decoder Side Information Emrah Akyol, Urbashi Mitra Dep. of Electrical Eng. USC, CA, US Email: {eakyol, ubli}@usc.edu Ertem Tuncel Dep. of Electrical Eng. UC Riverside,
More informationImplementation of Lossless Huffman Coding: Image compression using K-Means algorithm and comparison vs. Random numbers and Message source
Implementation of Lossless Huffman Coding: Image compression using K-Means algorithm and comparison vs. Random numbers and Message source Ali Tariq Bhatti 1, Dr. Jung Kim 2 1,2 Department of Electrical
More informationX 1 Q 3 Q 2 Y 1. quantizers. watermark decoder N S N
Quantizer characteristics important for Quantization Index Modulation Hugh Brunk Digimarc, 19801 SW 72nd Ave., Tualatin, OR 97062 ABSTRACT Quantization Index Modulation (QIM) has been shown to be a promising
More informationBasic Principles of Video Coding
Basic Principles of Video Coding Introduction Categories of Video Coding Schemes Information Theory Overview of Video Coding Techniques Predictive coding Transform coding Quantization Entropy coding Motion
More informationWavelet Scalable Video Codec Part 1: image compression by JPEG2000
1 Wavelet Scalable Video Codec Part 1: image compression by JPEG2000 Aline Roumy aline.roumy@inria.fr May 2011 2 Motivation for Video Compression Digital video studio standard ITU-R Rec. 601 Y luminance
More informationModule 5 EMBEDDED WAVELET CODING. Version 2 ECE IIT, Kharagpur
Module 5 EMBEDDED WAVELET CODING Lesson 13 Zerotree Approach. Instructional Objectives At the end of this lesson, the students should be able to: 1. Explain the principle of embedded coding. 2. Show the
More informationECE533 Digital Image Processing. Embedded Zerotree Wavelet Image Codec
University of Wisconsin Madison Electrical Computer Engineering ECE533 Digital Image Processing Embedded Zerotree Wavelet Image Codec Team members Hongyu Sun Yi Zhang December 12, 2003 Table of Contents
More informationGeneralized Writing on Dirty Paper
Generalized Writing on Dirty Paper Aaron S. Cohen acohen@mit.edu MIT, 36-689 77 Massachusetts Ave. Cambridge, MA 02139-4307 Amos Lapidoth lapidoth@isi.ee.ethz.ch ETF E107 ETH-Zentrum CH-8092 Zürich, Switzerland
More information4. Quantization and Data Compression. ECE 302 Spring 2012 Purdue University, School of ECE Prof. Ilya Pollak
4. Quantization and Data Compression ECE 32 Spring 22 Purdue University, School of ECE Prof. What is data compression? Reducing the file size without compromising the quality of the data stored in the
More informationQuantization for Distributed Estimation
0 IEEE International Conference on Internet of Things ithings 0), Green Computing and Communications GreenCom 0), and Cyber-Physical-Social Computing CPSCom 0) Quantization for Distributed Estimation uan-yu
More informationOn Compression Encrypted Data part 2. Prof. Ja-Ling Wu The Graduate Institute of Networking and Multimedia National Taiwan University
On Compression Encrypted Data part 2 Prof. Ja-Ling Wu The Graduate Institute of Networking and Multimedia National Taiwan University 1 Brief Summary of Information-theoretic Prescription At a functional
More informationThe Secrets of Quantization. Nimrod Peleg Update: Sept. 2009
The Secrets of Quantization Nimrod Peleg Update: Sept. 2009 What is Quantization Representation of a large set of elements with a much smaller set is called quantization. The number of elements in the
More informationPerformance Bounds for Joint Source-Channel Coding of Uniform. Departements *Communications et **Signal
Performance Bounds for Joint Source-Channel Coding of Uniform Memoryless Sources Using a Binary ecomposition Seyed Bahram ZAHIR AZAMI*, Olivier RIOUL* and Pierre UHAMEL** epartements *Communications et
More informationONE approach to improving the performance of a quantizer
640 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 2, FEBRUARY 2006 Quantizers With Unim Decoders Channel-Optimized Encoders Benjamin Farber Kenneth Zeger, Fellow, IEEE Abstract Scalar quantizers
More informationINTERNATIONAL ORGANISATION FOR STANDARDISATION ORGANISATION INTERNATIONALE DE NORMALISATION ISO/IEC JTC1/SC29/WG11 CODING OF MOVING PICTURES AND AUDIO
INTERNATIONAL ORGANISATION FOR STANDARDISATION ORGANISATION INTERNATIONALE DE NORMALISATION ISO/IEC JTC1/SC9/WG11 CODING OF MOVING PICTURES AND AUDIO ISO/IEC JTC1/SC9/WG11 MPEG 98/M3833 July 1998 Source:
More informationVector Quantization Encoder Decoder Original Form image Minimize distortion Table Channel Image Vectors Look-up (X, X i ) X may be a block of l
Vector Quantization Encoder Decoder Original Image Form image Vectors X Minimize distortion k k Table X^ k Channel d(x, X^ Look-up i ) X may be a block of l m image or X=( r, g, b ), or a block of DCT
More informationEntropy-constrained quantization of exponentially damped sinusoids parameters
Entropy-constrained quantization of exponentially damped sinusoids parameters Olivier Derrien, Roland Badeau, Gaël Richard To cite this version: Olivier Derrien, Roland Badeau, Gaël Richard. Entropy-constrained
More informationHARMONIC VECTOR QUANTIZATION
HARMONIC VECTOR QUANTIZATION Volodya Grancharov, Sigurdur Sverrisson, Erik Norvell, Tomas Toftgård, Jonas Svedberg, and Harald Pobloth SMN, Ericsson Research, Ericsson AB 64 8, Stockholm, Sweden ABSTRACT
More informationSIGNAL COMPRESSION. 8. Lossy image compression: Principle of embedding
SIGNAL COMPRESSION 8. Lossy image compression: Principle of embedding 8.1 Lossy compression 8.2 Embedded Zerotree Coder 161 8.1 Lossy compression - many degrees of freedom and many viewpoints The fundamental
More informationPulse-Code Modulation (PCM) :
PCM & DPCM & DM 1 Pulse-Code Modulation (PCM) : In PCM each sample of the signal is quantized to one of the amplitude levels, where B is the number of bits used to represent each sample. The rate from
More informationLloyd-Max Quantization of Correlated Processes: How to Obtain Gains by Receiver-Sided Time-Variant Codebooks
Lloyd-Max Quantization of Correlated Processes: How to Obtain Gains by Receiver-Sided Time-Variant Codebooks Sai Han and Tim Fingscheidt Institute for Communications Technology, Technische Universität
More informationLOW COMPLEXITY WIDEBAND LSF QUANTIZATION USING GMM OF UNCORRELATED GAUSSIAN MIXTURES
LOW COMPLEXITY WIDEBAND LSF QUANTIZATION USING GMM OF UNCORRELATED GAUSSIAN MIXTURES Saikat Chatterjee and T.V. Sreenivas Department of Electrical Communication Engineering Indian Institute of Science,
More informationEE67I Multimedia Communication Systems
EE67I Multimedia Communication Systems Lecture 5: LOSSY COMPRESSION In these schemes, we tradeoff error for bitrate leading to distortion. Lossy compression represents a close approximation of an original
More informationSoft-Output Trellis Waveform Coding
Soft-Output Trellis Waveform Coding Tariq Haddad and Abbas Yongaçoḡlu School of Information Technology and Engineering, University of Ottawa Ottawa, Ontario, K1N 6N5, Canada Fax: +1 (613) 562 5175 thaddad@site.uottawa.ca
More informationA NEW BASIS SELECTION PARADIGM FOR WAVELET PACKET IMAGE CODING
A NEW BASIS SELECTION PARADIGM FOR WAVELET PACKET IMAGE CODING Nasir M. Rajpoot, Roland G. Wilson, François G. Meyer, Ronald R. Coifman Corresponding Author: nasir@dcs.warwick.ac.uk ABSTRACT In this paper,
More informationVector Quantization and Subband Coding
Vector Quantization and Subband Coding 18-796 ultimedia Communications: Coding, Systems, and Networking Prof. Tsuhan Chen tsuhan@ece.cmu.edu Vector Quantization 1 Vector Quantization (VQ) Each image block
More information+ (50% contribution by each member)
Image Coding using EZW and QM coder ECE 533 Project Report Ahuja, Alok + Singh, Aarti + + (50% contribution by each member) Abstract This project involves Matlab implementation of the Embedded Zerotree
More informationMultiuser Successive Refinement and Multiple Description Coding
Multiuser Successive Refinement and Multiple Description Coding Chao Tian Laboratory for Information and Communication Systems (LICOS) School of Computer and Communication Sciences EPFL Lausanne Switzerland
More informationA Modified Moment-Based Image Watermarking Method Robust to Cropping Attack
A Modified Moment-Based Watermarking Method Robust to Cropping Attack Tianrui Zong, Yong Xiang, and Suzan Elbadry School of Information Technology Deakin University, Burwood Campus Melbourne, Australia
More informationMultiple Description Coding: Proposed Methods And Video Application
Multiple Description Coding: Proposed Methods And Video Application by Saeed Moradi A thesis submitted to the Department of Electrical and Computer Engineering in conformity with the requirements for the
More informationQuantization Index Modulation using the E 8 lattice
1 Quantization Index Modulation using the E 8 lattice Qian Zhang and Nigel Boston Dept of Electrical and Computer Engineering University of Wisconsin Madison 1415 Engineering Drive, Madison, WI 53706 Email:
More informationImage Data Compression
Image Data Compression Image data compression is important for - image archiving e.g. satellite data - image transmission e.g. web data - multimedia applications e.g. desk-top editing Image data compression
More informationNoise-Shaped Predictive Coding for Multiple Descriptions of a Colored Gaussian Source
Noise-Shaped Predictive Coding for Multiple Descriptions of a Colored Gaussian Source Yuval Kochman, Jan Østergaard, and Ram Zamir Abstract It was recently shown that the symmetric multiple-description
More informationRate Bounds on SSIM Index of Quantized Image DCT Coefficients
Rate Bounds on SSIM Index of Quantized Image DCT Coefficients Sumohana S. Channappayya, Alan C. Bovik, Robert W. Heath Jr. and Constantine Caramanis Dept. of Elec. & Comp. Engg.,The University of Texas
More informationModule 4. Multi-Resolution Analysis. Version 2 ECE IIT, Kharagpur
Module 4 Multi-Resolution Analysis Lesson Multi-resolution Analysis: Discrete avelet Transforms Instructional Objectives At the end of this lesson, the students should be able to:. Define Discrete avelet
More informationSTATISTICS FOR EFFICIENT LINEAR AND NON-LINEAR PICTURE ENCODING
STATISTICS FOR EFFICIENT LINEAR AND NON-LINEAR PICTURE ENCODING Item Type text; Proceedings Authors Kummerow, Thomas Publisher International Foundation for Telemetering Journal International Telemetering
More informationVector Quantizers for Reduced Bit-Rate Coding of Correlated Sources
Vector Quantizers for Reduced Bit-Rate Coding of Correlated Sources Russell M. Mersereau Center for Signal and Image Processing Georgia Institute of Technology Outline Cache vector quantization Lossless
More informationCoding for Discrete Source
EGR 544 Communication Theory 3. Coding for Discrete Sources Z. Aliyazicioglu Electrical and Computer Engineering Department Cal Poly Pomona Coding for Discrete Source Coding Represent source data effectively
More informationScalable color image coding with Matching Pursuit
SCHOOL OF ENGINEERING - STI SIGNAL PROCESSING INSTITUTE Rosa M. Figueras i Ventura CH-115 LAUSANNE Telephone: +4121 6935646 Telefax: +4121 69376 e-mail: rosa.figueras@epfl.ch ÉCOLE POLYTECHNIQUE FÉDÉRALE
More informationENVELOPE MODELING FOR SPEECH AND AUDIO PROCESSING USING DISTRIBUTION QUANTIZATION
ENVELOPE MODELING FOR SPEECH AND AUDIO PROCESSING USING DISTRIBUTION QUANTIZATION Tobias Jähnel *, Tom Bäckström * and Benjamin Schubert * International Audio Laboratories Erlangen, Friedrich-Alexander-University
More informationConverting DCT Coefficients to H.264/AVC
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Converting DCT Coefficients to H.264/AVC Jun Xin, Anthony Vetro, Huifang Sun TR2004-058 June 2004 Abstract Many video coding schemes, including
More informationDigital communication system. Shannon s separation principle
Digital communication system Representation of the source signal by a stream of (binary) symbols Adaptation to the properties of the transmission channel information source source coder channel coder modulation
More informationCapacity-Approaching Codes for Reversible Data Hiding
Capacity-Approaching Codes for Reversible Data Hiding Weiming Zhang 1,2,BiaoChen 1, and Nenghai Yu 1, 1 Department of Electrical Engineering & Information Science, University of Science and Technology
More informationReal-Time Perceptual Moving-Horizon Multiple-Description Audio Coding
4286 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL 59, NO 9, SEPTEMBER 2011 Real-Time Perceptual Moving-Horizon Multiple-Description Audio Coding Jan Østergaard, Member, IEEE, Daniel E Quevedo, Member, IEEE,
More informationTitle. Author(s)Tsai, Shang-Ho. Issue Date Doc URL. Type. Note. File Information. Equal Gain Beamforming in Rayleigh Fading Channels
Title Equal Gain Beamforming in Rayleigh Fading Channels Author(s)Tsai, Shang-Ho Proceedings : APSIPA ASC 29 : Asia-Pacific Signal Citationand Conference: 688-691 Issue Date 29-1-4 Doc URL http://hdl.handle.net/2115/39789
More informationSource Coding: Part I of Fundamentals of Source and Video Coding
Foundations and Trends R in sample Vol. 1, No 1 (2011) 1 217 c 2011 Thomas Wiegand and Heiko Schwarz DOI: xxxxxx Source Coding: Part I of Fundamentals of Source and Video Coding Thomas Wiegand 1 and Heiko
More informationL. Yaroslavsky. Fundamentals of Digital Image Processing. Course
L. Yaroslavsky. Fundamentals of Digital Image Processing. Course 0555.330 Lec. 6. Principles of image coding The term image coding or image compression refers to processing image digital data aimed at
More informationReview of Quantization. Quantization. Bring in Probability Distribution. L-level Quantization. Uniform partition
Review of Quantization UMCP ENEE631 Slides (created by M.Wu 004) Quantization UMCP ENEE631 Slides (created by M.Wu 001/004) L-level Quantization Minimize errors for this lossy process What L values to
More informationIntraframe Prediction with Intraframe Update Step for Motion-Compensated Lifted Wavelet Video Coding
Intraframe Prediction with Intraframe Update Step for Motion-Compensated Lifted Wavelet Video Coding Aditya Mavlankar, Chuo-Ling Chang, and Bernd Girod Information Systems Laboratory, Department of Electrical
More informationDirty Paper Writing and Watermarking Applications
Dirty Paper Writing and Watermarking Applications G.RaviKiran February 10, 2003 1 Introduction Following an underlying theme in Communication is the duo of Dirty Paper Writing and Watermarking. In 1983
More informationLow-Complexity Fixed-to-Fixed Joint Source-Channel Coding
Low-Complexity Fixed-to-Fixed Joint Source-Channel Coding Irina E. Bocharova 1, Albert Guillén i Fàbregas 234, Boris D. Kudryashov 1, Alfonso Martinez 2, Adrià Tauste Campo 2, and Gonzalo Vazquez-Vilar
More informationMultimedia Systems Giorgio Leonardi A.A Lecture 4 -> 6 : Quantization
Multimedia Systems Giorgio Leonardi A.A.2014-2015 Lecture 4 -> 6 : Quantization Overview Course page (D.I.R.): https://disit.dir.unipmn.it/course/view.php?id=639 Consulting: Office hours by appointment:
More informationQuantization 2.1 QUANTIZATION AND THE SOURCE ENCODER
2 Quantization After the introduction to image and video compression presented in Chapter 1, we now address several fundamental aspects of image and video compression in the remaining chapters of Section
More informationTHE newest video coding standard is known as H.264/AVC
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 17, NO. 6, JUNE 2007 765 Transform-Domain Fast Sum of the Squared Difference Computation for H.264/AVC Rate-Distortion Optimization
More informationOrder Adaptive Golomb Rice Coding for High Variability Sources
Order Adaptive Golomb Rice Coding for High Variability Sources Adriana Vasilache Nokia Technologies, Tampere, Finland Email: adriana.vasilache@nokia.com Abstract This paper presents a new perspective on
More information