ON SCALABLE CODING OF HIDDEN MARKOV SOURCES. Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose
Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA

ABSTRACT

While several real-world signals, such as speech, image and sensor network data, are modeled as hidden Markov sources (HMS) for recognition and analysis applications, their typical compression exploits temporal correlations by modeling them as simple (non-hidden) Markov sources. However, the inherent hidden Markov nature of these sources implies that an observed sample depends, in fact, on all past observations, thus rendering simple Markov modeling suboptimal. Motivated by this realization, previous work from our lab derived a technique to optimally quantize and compress HMS. In this paper we build on, and considerably extend, these results to the problem of scalable coding of HMS. At the base layer, as proposed earlier, the approach tracks an estimate of the state probability distribution and adapts the encoder structure accordingly. At the enhancement layer, the state probability distribution is refined using available information from past enhancement layer reconstructed samples, and this refined estimate is further combined with quantization information from the base layer, to effectively characterize the probability density of the current sample conditioned on all available information. We update code parameters on the fly, at each observation, at both the encoder and the decoder and at both layers. Experimental results validate the superiority of the proposed approach, with considerable gains over standard predictive coding employing a simple Markov model.

Index Terms: Scalable Coding, Hidden Markov Sources

1. INTRODUCTION

The hidden Markov model (HMM) is a discrete-time Markov chain observed through a memoryless channel. The random process consisting of the sequence of observations is referred to as a hidden Markov source (HMS).
Markov chains are common models for information sources with memory, and the memoryless channel is among the simplest communication models. Thus HMMs are widely used in image understanding and speech recognition [1], source coding [2], communications, information theory, economics, robotics, computer vision and several other disciplines. Note that most signals modeled as Markov processes are captured by imperfect sensors and are hence contaminated with noise, i.e., the resulting sequence is in fact an HMS. The HMS is a special case of the broader family of multivariate Markov sources, which has been a focus of recent research, notably in the context of parameter estimation [3, 4]. Despite the importance of this model, there has been very limited work on optimally quantizing HMS observations, and most practical applications simply assume a (non-hidden) Markov model for encoding these signals. (The work was supported in part by the National Science Foundation under grant CCF.)

Fig. 1. A continuous emission hidden Markov source with N states.

The closest information-theoretic work was on indirect rate-distortion problems [5, 6], wherein, instead of the observations, the source hidden behind a layer of noise is encoded and reconstructed. In an alternative approach, a finite-state quantizer was proposed in [7, 8, 9], but these techniques do not exploit all the available relevant information. Hence, in a recent paper from our lab [10], an optimal quantization scheme was proposed for HMS which exploits all the available information on the source state. The probability distribution over the states of the underlying Markov chain captures all the available information and depends on the entire history of observations. Hence, we proposed to refine, with each observation, the estimate of the state probability distribution at both encoder and decoder, and correspondingly update the coding rule.
This paper focuses on a significant extension of the optimal HMS quantization paradigm. Advances in internet and communication technologies have created an extremely heterogeneous network scenario, with data consumption devices of highly diverse decoding and display capabilities, all accessing the same content over networks of time-varying bandwidth and latency. Thus it is necessary for coding and transmission systems to provide a scalable bitstream that allows decoding at a variety of bit rates (and corresponding levels of quality), where the lower rate information streams are embedded within the higher rate bitstreams in a manner that minimizes redundancy. That is, we need to generate layered bitstreams, wherein a base layer provides a coarse quality reconstruction and successive layers refine the quality incrementally. Scalable coding with two quality levels transmits at rate R_12 to both decoders and at rate R_2 to only the second decoder.

ICASSP 2016
Fig. 2. Proposed encoder for the base and the enhancement layer.

The commonly employed technique for scalable coding of HMS completely neglects the hidden states: it employs a predictive coding approach such as DPCM (i.e., assumes a simple Markov model) at the base layer, and at the enhancement layer the base layer reconstruction error is compressed and transmitted. That is, the base layer reconstruction is used as an estimate for the original signal, and the estimation error is compressed in the enhancement layer. A review of scalable coding techniques for audio and video signals can be found in [11, 12]. An estimation-theoretically optimal scalable predictive coding technique was proposed in [13], which accounts for all the information available from the current base layer and past reconstructed samples to generate an optimal estimate at the enhancement layer. In this paper we propose a novel scalable coding technique for HMS, which accounts for all the available information while coding a given layer. At the base layer, we exploit all the available information by employing our previously proposed technique of tracking the state probability distribution at the encoder and the decoder, and using it to update the quantizers for encoding the current observation. At the enhancement layer, we again track the state probability distribution at the encoder and the decoder, but using the higher quality enhancement layer reconstruction for a better estimate; the enhancement layer quantizer is then adapted to the interval determined by base layer quantization, so as to enable full exploitation of all available information. The rest of the paper is organized as follows. In Section 2, we define the problem. In Section 3, we present the proposed method.
We substantiate the effectiveness of the proposed approach with comparative numerical results in Section 4, and conclude in Section 5.

2. PROBLEM DEFINITION

A hidden Markov source (HMS) is defined by the following set of parameters (please refer to [14] for more details):

1. Hidden states in the model: assuming a finite number of states, N, the set of all possible states is denoted S = {S_1, S_2, ..., S_N}, and the state at time t is denoted by q_t.

2. The state transition probability distribution: A = {a_{ij}}, where a_{ij} = P[q_{t+1} = S_j | q_t = S_i], 1 <= i, j <= N.

3. Observations: the set of observation symbols (or alphabet) is denoted by V; in the discrete emission case V = {v_1, v_2, ..., v_M}, where M is the cardinality of the observation alphabet.

4. The observation (emission) probability density function (pdf) given state S_j is denoted g_j(.) in the continuous emission case; in the discrete emission case, the observation (emission) probability mass function (pmf) is B = {b_j(v_k)}, where b_j(v_k) = P[O_t = v_k | q_t = S_j] and O_t denotes the emitted observation at time t.

5. The initial state distribution pi = {pi_i}, where pi_i = P[q_1 = S_i], 1 <= i <= N.

Fig. 1 depicts an example of a continuous emission hidden Markov source with N states. Clearly, in hidden Markov sources we have access only to the emitted observation, O_t, and not to the state of the source. Our objective is to find the best scalable coding scheme for HMS.

3. PROPOSED METHOD

For optimal scalable coding of HMS, we need to exploit all the available information while encoding at each layer. To achieve this, we rely on the important HMS property that the state at time t-1 captures all the past information relevant to the emission of the next source symbol. Specifically,

P[O_t = v_k | q_{t-1} = S_j, O_1^{(t-1)}] = P[O_t = v_k | q_{t-1} = S_j],

which implies that all observations until time t-1 provide no additional information on the next source symbol, beyond what is captured by the state of the Markov chain at time t-1.
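To make the source model concrete, here is a minimal sketch of drawing observations from a two-state Gaussian-emission HMS. All parameter values (the sticky transition matrix, the emission means and variances) are illustrative assumptions, not values taken from the paper.

```python
import random

def sample_hms(A, means, stds, pi, T, seed=0):
    """Draw T observations from a two-state Gaussian-emission HMS.

    A[i][j] = P(q_{t+1} = S_j | q_t = S_i); pi is the initial state
    distribution; (means[j], stds[j]) parameterize the emission pdf g_j.
    """
    rng = random.Random(seed)
    state = 0 if rng.random() < pi[0] else 1            # draw q_1 from pi
    obs = []
    for _ in range(T):
        obs.append(rng.gauss(means[state], stds[state]))  # emit O_t ~ g_{q_t}
        state = 0 if rng.random() < A[state][0] else 1    # transition q_t -> q_{t+1}
    return obs

# Illustrative parameters: a sticky chain with well-separated emission means.
A = [[0.95, 0.05], [0.05, 0.95]]
obs = sample_hms(A, means=[-3.0, 3.0], stds=[1.0, 1.0], pi=[0.5, 0.5], T=1000)
```

Because the chain is sticky, the output tends to dwell near one emission mean for long runs, which is exactly the long-range dependence that a simple Markov model fails to capture.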
Further note that the state of the HMS cannot be known with certainty. Thus the fundamental paradigm for optimally capturing the correlations with the past is to track the state probability distribution of the HMS. In this approach, each output of the encoder (quantized observation) at a given layer is fed into a unit called the state probability distribution tracking unit. This unit estimates the state probability distribution, i.e., the probabilities of the Markov chain being in each of the states, denoted by p_hat. The base layer encoder adapter unit utilizes these probabilities to redesign the encoder optimally for the next input sample. At the enhancement layers, the quantization interval from the lower layers is available as additional information alongside the state probability distribution. Thus the enhancement layer encoder adapter unit combines both types of available information to redesign the encoder optimally for the next input sample. Fig. 2 shows the proposed scalable encoder for HMS. Our objective is to design the state probability distribution tracking unit and the encoder adapter, given a training set of samples,
so as to minimize the average reconstruction distortion at the given encoding rates for both the base and enhancement layers. We first describe tracking of the state probability distribution, and then discuss the encoder adapter unit for the base layer and the enhancement layer.

3.1. State probability distribution tracking

We first estimate the HMS parameters using a training set, for which we follow standard procedure in HMM analysis. We then define the forward variable at time t-1 as

alpha_{t-1}(i) = P(O_1^{(t-1)}, q_{t-1} = S_i),   (1)

i.e., the joint probability of emitting the observed source sequence up to time t-1, O_1^{(t-1)}, and the state at time t-1 being S_i. These forward variables can be computed recursively as follows:

1. alpha_1(i) = pi_i g_i(O_1), for 1 <= i <= N
2. alpha_{k+1}(j) = [sum_{i=1}^{N} alpha_k(i) a_{ij}] g_j(O_{k+1})

We can then obtain the probability of being in state i at time t-1 using Bayes' rule:

P[q_{t-1} = S_i | O_1^{(t-1)}] = P(q_{t-1} = S_i, O_1^{(t-1)}) / P(O_1^{(t-1)})
                               = P(q_{t-1} = S_i, O_1^{(t-1)}) / sum_{j=1}^{N} P(q_{t-1} = S_j, O_1^{(t-1)})
                               = alpha_{t-1}(i) / sum_{j=1}^{N} alpha_{t-1}(j).   (2)

Using these, we can compute the probability of being in state i at time t, p_{(t,i)}, given the observation sequence up to time t-1, as

p_{(t,i)} = P[q_t = S_i | O_1^{(t-1)}] = sum_{j=1}^{N} P[q_{t-1} = S_j | O_1^{(t-1)}] a_{ji}.   (3)

The state probability distribution at time t, p_t, is defined as

p_t = {p_{(t,1)}, ..., p_{(t,N)}} = {P[q_t = S_1 | O_1^{(t-1)}], ..., P[q_t = S_N | O_1^{(t-1)}]}.   (4)

Given p_t, the best estimate for the source distribution is

sum_{j=1}^{N} p_{(t,j)} g_j(x).   (5)

Fig. 3. Example of observation pdfs. (a) and (b) are the observation pdfs given states 1 and 2, g_1(x) and g_2(x) respectively. (c) is the overall observation pdf in the base layer given p_hat = {0.5, 0.5}, P(O_t | p_hat = {0.5, 0.5}). (d) is the overall observation pdf in the enhancement layer given p_hat = {0.5, 0.5} and the quantization interval information from the base layer (I_b), P(O_t | p_hat = {0.5, 0.5}, I_b).
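The tracking recursion of Section 3.1 (forward variables, Bayes normalization, and one-step prediction through the transition matrix) translates directly into code. The Gaussian emission model and all parameter values below are illustrative assumptions.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Gaussian emission density g_j(x)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def predict_state_distribution(obs, A, pi, means, stds):
    """Return p_{t,i} = P[q_t = S_i | O_1^{(t-1)}] via the forward recursion."""
    N = len(pi)
    # Eq. (1) base case: alpha_1(i) = pi_i * g_i(O_1).
    alpha = [pi[i] * gauss_pdf(obs[0], means[i], stds[i]) for i in range(N)]
    # Recursive step: alpha_{k+1}(j) = [sum_i alpha_k(i) a_ij] * g_j(O_{k+1}).
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * gauss_pdf(o, means[j], stds[j])
                 for j in range(N)]
    # Eq. (2): normalize to get P[q_{t-1} = S_i | O_1^{(t-1)}].
    total = sum(alpha)
    filt = [a / total for a in alpha]
    # Eq. (3): one-step prediction through the transition probabilities a_ji.
    return [sum(filt[j] * A[j][i] for j in range(N)) for i in range(N)]

# Illustrative two-state source; observations clustered near the state-1 mean.
A = [[0.95, 0.05], [0.05, 0.95]]
p = predict_state_distribution([-2.8, -3.1, -2.9], A, [0.5, 0.5], [-3.0, 3.0], [1.0, 1.0])
```

With observations near -3, the filtered distribution concentrates on the first state, and the prediction step then leaks a small probability (governed by the transition matrix) to the other state, which is exactly the distribution p_t that the encoder adapter consumes. For long sequences, a practical implementation would normalize alpha at every step to avoid numerical underflow.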
3.2. Base Layer Design

At the base layer, the state probability distribution tracking unit finds p_hat^b_t, using the base layer reconstructed observation samples O_hat^b, as

p_hat^b_t = {p_hat^b_{(t,1)}, ..., p_hat^b_{(t,N)}} = {P[q_t = S_1 | O_hat_{b,1}^{(t-1)}], ..., P[q_t = S_N | O_hat_{b,1}^{(t-1)}]}.   (6)

Using the reconstructed observations, instead of the original samples, ensures that the decoder can exactly mimic the operations of the encoder. Given p_hat^b_t, the best estimate for the source distribution is

sum_{j=1}^{N} p_hat^b_{(t,j)} g_j(x).   (7)

One possible approach is to design a quantizer for this pdf from the ground up for each source output, but this clearly entails high complexity at the encoder and the decoder. We therefore propose an alternative method with both low complexity and low memory requirements. For a set of T representative distributions p, we find the best codebooks offline using Lloyd's (or another) algorithm. Then, for a given p_hat^b in the process of encoding or decoding, we find the closest representative p from the set of T. Finally, using the codebook of the closest representative as an initialization, we run one iteration of Lloyd's algorithm to find the codebook to be used for the current sample. Note that for the first symbol, we simply use a quantizer based on the assumption that the source symbol is from a fixed state (based on pi) at both the encoder and decoder.

3.3. Enhancement Layer Design

The goal here is to design the best codebook for the enhancement layer based on: the quantization interval from the base layer, and the state probability distribution based on the enhancement layer reconstructed observation samples O_hat^e. The enhancement layer state probability distribution tracking unit finds p_hat^e_t, similar to the base layer, but using the enhancement layer reconstructed observation samples O_hat^e, as

p_hat^e_t = {p_hat^e_{(t,1)}, ..., p_hat^e_{(t,N)}} = {P[q_t = S_1 | O_hat_{e,1}^{(t-1)}], ..., P[q_t = S_N | O_hat_{e,1}^{(t-1)}]}.

Given just p_hat^e_t, the best estimate for the source distribution is

sum_{j=1}^{N} p_hat^e_{(t,j)} g_j(x).   (8)
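The per-sample codebook update used at either layer, seeding a scalar quantizer and running a single Lloyd iteration against the current mixture pdf sum_j p_hat_{(t,j)} g_j(x), can be sketched as follows. The mixture, the integration range and the grid size are illustrative assumptions, not values from the paper.

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def one_lloyd_iteration(codebook, pdf, lo=-8.0, hi=8.0, n=4000):
    """One Lloyd iteration of a scalar quantizer against pdf on [lo, hi].

    Partition: nearest-neighbor cells around the current codewords.
    Update: each codeword moves to the centroid of its cell under pdf,
    computed by midpoint-rule numerical integration.
    """
    cb = sorted(codebook)
    mass = [0.0] * len(cb)
    moment = [0.0] * len(cb)
    step = (hi - lo) / n
    for i in range(n):
        x = lo + (i + 0.5) * step
        # Nearest-neighbor partition: assign x to the closest codeword.
        k = min(range(len(cb)), key=lambda j: abs(x - cb[j]))
        w = pdf(x) * step
        mass[k] += w
        moment[k] += w * x
    # Centroid condition; keep the old codeword if the cell carries no mass.
    return [moment[k] / mass[k] if mass[k] > 0 else cb[k] for k in range(len(cb))]

# Mixture pdf for p_hat = {0.5, 0.5} with illustrative state means +/-3.
pdf = lambda x: 0.5 * gauss_pdf(x, -3.0, 1.0) + 0.5 * gauss_pdf(x, 3.0, 1.0)
cb = one_lloyd_iteration([-4.0, -1.0, 1.0, 4.0], pdf)
```

Starting from the stored codebook of the nearest representative p, one such iteration is enough to adapt the codewords to the current tracked distribution without rerunning the full design, which is what keeps the per-sample complexity low.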
Fig. 4. Rate-distortion plots (rate in bits vs. distortion, MSE in dB) for the proposed method, DPCM and the lower bound, at a base layer rate of 3 bits and enhancement layer rates ranging from 2 to 5 bits per sample.

Fig. 5. Distortion (MSE in dB) plots for the proposed method and DPCM, at base and enhancement layer rates of 3 bits/sample, with the transition probability a_{11} = a_{22} varying from 0.9 to 0.99.

However, in combination with the quantization interval information from the base layer, the best estimate for the source distribution is

sum_{j=1}^{N} p_hat^e_{(t,j)} g_hat_j(x),   (9)

where g_hat_j(x) is the observation pdf in state j, truncated and normalized to the interval determined by quantization at the base layer. We design the quantizer for this pdf of the current sample by using a uniform codebook as an initialization and running one iteration of Lloyd's algorithm. Note that we use a uniform codebook as the initialization for the enhancement layer quantizer design because the observation pdf within the quantization interval of the base layer has less variation and is closer to a uniform distribution. This assumption does not hold for the base layer, however, as illustrated by an example source distribution for a specific p_hat in Fig. 3. Using the uniform codebook as the initialization also reduces the memory requirements of the encoder and the decoder. For the first symbol, we assume that the source symbol is from a fixed state (based on pi) at both the encoder and decoder, and use this in combination with the quantization interval information from the base layer. Note that we can generalize our proposed approach to any number of enhancement layers, by combining the refined estimate of the state probability distribution, based on the observation reconstruction of the given layer, with the quantization interval information from its lower layers.
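The truncation and renormalization behind Eq. (9), restricting the current pdf estimate to the base-layer quantization cell so that it integrates to one there, can be sketched as follows. The Gaussian mixture, the base-layer cell and the grid size are illustrative assumptions.

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def truncate_and_normalize(pdf, a, b, n=2000):
    """Return g_hat: pdf restricted to the base-layer cell [a, b] and
    renormalized to integrate to one (midpoint-rule normalization)."""
    step = (b - a) / n
    mass = sum(pdf(a + (i + 0.5) * step) for i in range(n)) * step
    def g_hat(x):
        # Zero outside the cell; rescaled by the cell's probability inside it.
        return pdf(x) / mass if a <= x <= b else 0.0
    return g_hat

# Enhancement-layer estimate for p_hat = {0.5, 0.5} and base cell [0.0, 2.0].
mix = lambda x: 0.5 * gauss_pdf(x, -3.0, 1.0) + 0.5 * gauss_pdf(x, 3.0, 1.0)
g_hat = truncate_and_normalize(mix, 0.0, 2.0)
```

The resulting g_hat is the pdf against which the enhancement layer runs its single Lloyd iteration from a uniform initial codebook; within a narrow base-layer cell it is typically close to uniform, which is why that initialization works well.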
4. EXPERIMENTAL RESULTS

For the first experiment, we use an HMS with two Gaussian subsources, one with mean mu_1 and variance sigma_1^2 = 1, the other with mean mu_2 and variance sigma_2^2 = 1, and transition probabilities a_{11} = a_{22}. For the scalable coder, we set the base layer rate to R_12 = 3 bits and vary the enhancement layer rate as R_2 = 2, 3, 4, 5 bits. We compare the proposed method with the prior approach, which assumes a simple Markov model, uses DPCM in the base layer, and employs a quantizer designed via Lloyd's algorithm for encoding the base layer reconstruction error in the enhancement layer. We also compare our results with the theoretical lower bound for a switched scalar quantizer [9] operating at the total rate of R_12 + R_2 bits. The rate-distortion plots for the different approaches and the lower bound are shown in Fig. 4. The superiority of the proposed method is evident from the plots, with substantial gains of around 5 dB over the prior approach. Note that there is a gap of around 2 dB from the lower bound due to the scalable coding penalty of the hierarchical structure employed, as this source-distortion pair is not successively refinable. In the second experiment, the same source is used, but with the transition probability, a_{11} = a_{22}, varying from 0.9 to 0.99, and a fixed coding rate of 3 bits in both the base and the enhancement layer. The results of this experiment, shown in Fig. 5, demonstrate gains marginally increasing with a_{11}. Note that calculating p_hat at the base and enhancement layers of the encoder and decoder does not impose any significant computational burden, as the forward variables are easily updated recursively for each sample (as given in Section 3.1) and p_hat is then obtained with a few more manipulations.

5. CONCLUSION

We have proposed a novel technique for scalable coding of hidden Markov sources which utilizes all the available information while coding a given layer.
In contrast to existing approaches, which assume a simple Markov model, we use the hidden Markov model and efficiently exploit the dependencies captured by the hidden states. In the base layer, the state probability distribution is estimated for each sample using past base layer observation reconstructions, and the quantizer for the current sample is updated accordingly. In the enhancement layer, the state probability distribution is refined for each sample using past enhancement layer observation reconstructions. This information is then combined with the quantization interval available from the base layer, and the quantizer for the current sample is updated accordingly. The decoder mimics these quantizer updates of the encoder in both the base and enhancement layers. Experimental results show substantial performance improvements of the proposed approach over the prior approach of assuming a simple Markov model.
6. REFERENCES

[1] L. R. Rabiner, S. E. Levinson, and M. M. Sondhi, "On the application of vector quantization and hidden Markov models to speaker-independent, isolated word recognition," Bell System Technical Journal, vol. 62, no. 4, 1983.
[2] A. Gersho and R. Gray, Vector Quantization and Signal Compression, Springer, 1992.
[3] Y. Ephraim and B. L. Mark, "On forward recursive estimation for bivariate Markov chains," 46th Annual Conference on Information Sciences and Systems (CISS), pp. 1-6, March 2012.
[4] Y. Ephraim and B. L. Mark, "Causal recursive parameter estimation for discrete-time hidden bivariate Markov chains," IEEE Transactions on Signal Processing, vol. 63, no. 8, April 2015.
[5] R. L. Dobrushin and B. S. Tsybakov, "Information transmission with additional noise," IRE Trans. Inform. Theory, vol. IT-8, pp. S293-S304, 1962.
[6] H. S. Witsenhausen, "Indirect rate distortion problems," IEEE Transactions on Information Theory, vol. IT-26, no. 5, September 1980.
[7] J. Foster, R. Gray, and M. Dunham, "Finite-state vector quantization for waveform coding," IEEE Transactions on Information Theory, vol. 31, no. 3, 1985.
[8] M. Dunham and R. Gray, "An algorithm for the design of labeled-transition finite-state vector quantizers," IEEE Transactions on Communications, vol. 33, no. 1, 1985.
[9] D. M. Goblirsch and N. Farvardin, "Switched scalar quantizers for hidden Markov sources," IEEE Transactions on Information Theory, vol. 38, no. 5, September 1992.
[10] M. Salehifar, E. Akyol, K. Viswanatha, and K. Rose, "On optimal coding of hidden Markov sources," IEEE Data Compression Conference, March 2014.
[11] J. Ohm, "Advances in scalable video coding," Proceedings of the IEEE, vol. 93, no. 1, 2005.
[12] T. Painter and A. Spanias, "Perceptual coding of digital audio," Proceedings of the IEEE, vol. 88, no. 4, April 2000.
[13] K. Rose and S. Regunathan, "Toward optimality in scalable predictive coding," IEEE Transactions on Image Processing, vol. 10, no. 7, July 2001.
[14] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, February 1989.
Hidden Markov Modelling Introduction Problem formulation Forward-Backward algorithm Viterbi search Baum-Welch parameter estimation Other considerations Multiple observation sequences Phone-based models
More informationBASICS OF COMPRESSION THEORY
BASICS OF COMPRESSION THEORY Why Compression? Task: storage and transport of multimedia information. E.g.: non-interlaced HDTV: 0x0x0x = Mb/s!! Solutions: Develop technologies for higher bandwidth Find
More information6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011
6196 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 9, SEPTEMBER 2011 On the Structure of Real-Time Encoding and Decoding Functions in a Multiterminal Communication System Ashutosh Nayyar, Student
More informationThe Choice of MPEG-4 AAC encoding parameters as a direct function of the perceptual entropy of the audio signal
The Choice of MPEG-4 AAC encoding parameters as a direct function of the perceptual entropy of the audio signal Claus Bauer, Mark Vinton Abstract This paper proposes a new procedure of lowcomplexity to
More informationFast Length-Constrained MAP Decoding of Variable Length Coded Markov Sequences over Noisy Channels
Fast Length-Constrained MAP Decoding of Variable Length Coded Markov Sequences over Noisy Channels Zhe Wang, Xiaolin Wu and Sorina Dumitrescu Department of Electrical and Computer Engineering McMaster
More information4. Quantization and Data Compression. ECE 302 Spring 2012 Purdue University, School of ECE Prof. Ilya Pollak
4. Quantization and Data Compression ECE 32 Spring 22 Purdue University, School of ECE Prof. What is data compression? Reducing the file size without compromising the quality of the data stored in the
More informationCan the sample being transmitted be used to refine its own PDF estimate?
Can the sample being transmitted be used to refine its own PDF estimate? Dinei A. Florêncio and Patrice Simard Microsoft Research One Microsoft Way, Redmond, WA 98052 {dinei, patrice}@microsoft.com Abstract
More informationCapacity of a Two-way Function Multicast Channel
Capacity of a Two-way Function Multicast Channel 1 Seiyun Shin, Student Member, IEEE and Changho Suh, Member, IEEE Abstract We explore the role of interaction for the problem of reliable computation over
More informationLATTICE VECTOR QUANTIZATION FOR IMAGE CODING USING EXPANSION OF CODEBOOK
LATTICE VECTOR QUANTIZATION FOR IMAGE CODING USING EXPANSION OF CODEBOOK R. R. Khandelwal 1, P. K. Purohit 2 and S. K. Shriwastava 3 1 Shri Ramdeobaba College Of Engineering and Management, Nagpur richareema@rediffmail.com
More informationScalable color image coding with Matching Pursuit
SCHOOL OF ENGINEERING - STI SIGNAL PROCESSING INSTITUTE Rosa M. Figueras i Ventura CH-115 LAUSANNE Telephone: +4121 6935646 Telefax: +4121 69376 e-mail: rosa.figueras@epfl.ch ÉCOLE POLYTECHNIQUE FÉDÉRALE
More informationEncoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels
Encoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels Lei Bao, Mikael Skoglund and Karl Henrik Johansson Department of Signals, Sensors and Systems, Royal Institute of Technology,
More informationA Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models
A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models Jeff A. Bilmes (bilmes@cs.berkeley.edu) International Computer Science Institute
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 11 Project
More informationWaveform-Based Coding: Outline
Waveform-Based Coding: Transform and Predictive Coding Yao Wang Polytechnic University, Brooklyn, NY11201 http://eeweb.poly.edu/~yao Based on: Y. Wang, J. Ostermann, and Y.-Q. Zhang, Video Processing and
More informationHidden Markov Models. Dr. Naomi Harte
Hidden Markov Models Dr. Naomi Harte The Talk Hidden Markov Models What are they? Why are they useful? The maths part Probability calculations Training optimising parameters Viterbi unseen sequences Real
More informationITCT Lecture IV.3: Markov Processes and Sources with Memory
ITCT Lecture IV.3: Markov Processes and Sources with Memory 4. Markov Processes Thus far, we have been occupied with memoryless sources and channels. We must now turn our attention to sources with memory.
More informationVector Quantization. Institut Mines-Telecom. Marco Cagnazzo, MN910 Advanced Compression
Institut Mines-Telecom Vector Quantization Marco Cagnazzo, cagnazzo@telecom-paristech.fr MN910 Advanced Compression 2/66 19.01.18 Institut Mines-Telecom Vector Quantization Outline Gain-shape VQ 3/66 19.01.18
More informationCommunication constraints and latency in Networked Control Systems
Communication constraints and latency in Networked Control Systems João P. Hespanha Center for Control Engineering and Computation University of California Santa Barbara In collaboration with Antonio Ortega
More informationOPTIMUM fixed-rate scalar quantizers, introduced by Max
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL 54, NO 2, MARCH 2005 495 Quantizer Design for Channel Codes With Soft-Output Decoding Jan Bakus and Amir K Khandani, Member, IEEE Abstract A new method of
More informationHyper-Trellis Decoding of Pixel-Domain Wyner-Ziv Video Coding
1 Hyper-Trellis Decoding of Pixel-Domain Wyner-Ziv Video Coding Arun Avudainayagam, John M. Shea, and Dapeng Wu Wireless Information Networking Group (WING) Department of Electrical and Computer Engineering
More informationModule 5 EMBEDDED WAVELET CODING. Version 2 ECE IIT, Kharagpur
Module 5 EMBEDDED WAVELET CODING Lesson 13 Zerotree Approach. Instructional Objectives At the end of this lesson, the students should be able to: 1. Explain the principle of embedded coding. 2. Show the
More informationIN this paper, we study the problem of universal lossless compression
4008 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 9, SEPTEMBER 2006 An Algorithm for Universal Lossless Compression With Side Information Haixiao Cai, Member, IEEE, Sanjeev R. Kulkarni, Fellow,
More informationFast Progressive Wavelet Coding
PRESENTED AT THE IEEE DCC 99 CONFERENCE SNOWBIRD, UTAH, MARCH/APRIL 1999 Fast Progressive Wavelet Coding Henrique S. Malvar Microsoft Research One Microsoft Way, Redmond, WA 98052 E-mail: malvar@microsoft.com
More informationComputational Genomics and Molecular Biology, Fall
Computational Genomics and Molecular Biology, Fall 2011 1 HMM Lecture Notes Dannie Durand and Rose Hoberman October 11th 1 Hidden Markov Models In the last few lectures, we have focussed on three problems
More informationIncremental Grassmannian Feedback Schemes for Multi-User MIMO Systems
1130 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 63, NO. 5, MARCH 1, 2015 Incremental Grassmannian Feedback Schemes for Multi-User MIMO Systems Ahmed Medra and Timothy N. Davidson Abstract The communication
More informationSCELP: LOW DELAY AUDIO CODING WITH NOISE SHAPING BASED ON SPHERICAL VECTOR QUANTIZATION
SCELP: LOW DELAY AUDIO CODING WITH NOISE SHAPING BASED ON SPHERICAL VECTOR QUANTIZATION Hauke Krüger and Peter Vary Institute of Communication Systems and Data Processing RWTH Aachen University, Templergraben
More informationIndependent Component Analysis and Unsupervised Learning. Jen-Tzung Chien
Independent Component Analysis and Unsupervised Learning Jen-Tzung Chien TABLE OF CONTENTS 1. Independent Component Analysis 2. Case Study I: Speech Recognition Independent voices Nonparametric likelihood
More informationLecture 7 Predictive Coding & Quantization
Shujun LI (李树钧): INF-10845-20091 Multimedia Coding Lecture 7 Predictive Coding & Quantization June 3, 2009 Outline Predictive Coding Motion Estimation and Compensation Context-Based Coding Quantization
More informationRun-length & Entropy Coding. Redundancy Removal. Sampling. Quantization. Perform inverse operations at the receiver EEE
General e Image Coder Structure Motion Video x(s 1,s 2,t) or x(s 1,s 2 ) Natural Image Sampling A form of data compression; usually lossless, but can be lossy Redundancy Removal Lossless compression: predictive
More informationSTA 414/2104: Machine Learning
STA 414/2104: Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistics! rsalakhu@cs.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 9 Sequential Data So far
More informationarxiv: v1 [cs.it] 20 Jan 2018
1 Analog-to-Digital Compression: A New Paradigm for Converting Signals to Bits Alon Kipnis, Yonina C. Eldar and Andrea J. Goldsmith fs arxiv:181.6718v1 [cs.it] Jan 18 X(t) sampler smp sec encoder R[ bits
More informationState of the art Image Compression Techniques
Chapter 4 State of the art Image Compression Techniques In this thesis we focus mainly on the adaption of state of the art wavelet based image compression techniques to programmable hardware. Thus, an
More informationBasic Principles of Video Coding
Basic Principles of Video Coding Introduction Categories of Video Coding Schemes Information Theory Overview of Video Coding Techniques Predictive coding Transform coding Quantization Entropy coding Motion
More information+ (50% contribution by each member)
Image Coding using EZW and QM coder ECE 533 Project Report Ahuja, Alok + Singh, Aarti + + (50% contribution by each member) Abstract This project involves Matlab implementation of the Embedded Zerotree
More informationCut-Set Bound and Dependence Balance Bound
Cut-Set Bound and Dependence Balance Bound Lei Xiao lxiao@nd.edu 1 Date: 4 October, 2006 Reading: Elements of information theory by Cover and Thomas [1, Section 14.10], and the paper by Hekstra and Willems
More informationShankar Shivappa University of California, San Diego April 26, CSE 254 Seminar in learning algorithms
Recognition of Visual Speech Elements Using Adaptively Boosted Hidden Markov Models. Say Wei Foo, Yong Lian, Liang Dong. IEEE Transactions on Circuits and Systems for Video Technology, May 2004. Shankar
More informationRESEARCH ARTICLE. Online quantization in nonlinear filtering
Journal of Statistical Computation & Simulation Vol. 00, No. 00, Month 200x, 3 RESEARCH ARTICLE Online quantization in nonlinear filtering A. Feuer and G. C. Goodwin Received 00 Month 200x; in final form
More informationCODING SAMPLE DIFFERENCES ATTEMPT 1: NAIVE DIFFERENTIAL CODING
5 0 DPCM (Differential Pulse Code Modulation) Making scalar quantization work for a correlated source -- a sequential approach. Consider quantizing a slowly varying source (AR, Gauss, ρ =.95, σ 2 = 3.2).
More information3238 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 60, NO. 6, JUNE 2014
3238 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 60, NO. 6, JUNE 2014 The Lossy Common Information of Correlated Sources Kumar B. Viswanatha, Student Member, IEEE, Emrah Akyol, Member, IEEE, and Kenneth
More informationSIGNAL COMPRESSION. 8. Lossy image compression: Principle of embedding
SIGNAL COMPRESSION 8. Lossy image compression: Principle of embedding 8.1 Lossy compression 8.2 Embedded Zerotree Coder 161 8.1 Lossy compression - many degrees of freedom and many viewpoints The fundamental
More informationOrder Adaptive Golomb Rice Coding for High Variability Sources
Order Adaptive Golomb Rice Coding for High Variability Sources Adriana Vasilache Nokia Technologies, Tampere, Finland Email: adriana.vasilache@nokia.com Abstract This paper presents a new perspective on
More informationChapter 9 Fundamental Limits in Information Theory
Chapter 9 Fundamental Limits in Information Theory Information Theory is the fundamental theory behind information manipulation, including data compression and data transmission. 9.1 Introduction o For
More informationImage Compression. Fundamentals: Coding redundancy. The gray level histogram of an image can reveal a great deal of information about the image
Fundamentals: Coding redundancy The gray level histogram of an image can reveal a great deal of information about the image That probability (frequency) of occurrence of gray level r k is p(r k ), p n
More informationPerformance Bounds for Joint Source-Channel Coding of Uniform. Departements *Communications et **Signal
Performance Bounds for Joint Source-Channel Coding of Uniform Memoryless Sources Using a Binary ecomposition Seyed Bahram ZAHIR AZAMI*, Olivier RIOUL* and Pierre UHAMEL** epartements *Communications et
More informationMultiscale Systems Engineering Research Group
Hidden Markov Model Prof. Yan Wang Woodruff School of Mechanical Engineering Georgia Institute of echnology Atlanta, GA 30332, U.S.A. yan.wang@me.gatech.edu Learning Objectives o familiarize the hidden
More informationMultiple Description Coding: Proposed Methods And Video Application
Multiple Description Coding: Proposed Methods And Video Application by Saeed Moradi A thesis submitted to the Department of Electrical and Computer Engineering in conformity with the requirements for the
More informationL. Yaroslavsky. Fundamentals of Digital Image Processing. Course
L. Yaroslavsky. Fundamentals of Digital Image Processing. Course 0555.330 Lec. 6. Principles of image coding The term image coding or image compression refers to processing image digital data aimed at
More informationStatistical Problem. . We may have an underlying evolving system. (new state) = f(old state, noise) Input data: series of observations X 1, X 2 X t
Markov Chains. Statistical Problem. We may have an underlying evolving system (new state) = f(old state, noise) Input data: series of observations X 1, X 2 X t Consecutive speech feature vectors are related
More informationThe information loss in quantization
The information loss in quantization The rough meaning of quantization in the frame of coding is representing numerical quantities with a finite set of symbols. The mapping between numbers, which are normally
More informationC.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University
Quantization C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University http://www.csie.nctu.edu.tw/~cmliu/courses/compression/ Office: EC538 (03)5731877 cmliu@cs.nctu.edu.tw
More informationLecture 2: Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments
Lecture 2: Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments Dr. Jian Zhang Conjoint Associate Professor NICTA & CSE UNSW COMP9519 Multimedia Systems S2 2006 jzhang@cse.unsw.edu.au
More informationHidden Markov Models. By Parisa Abedi. Slides courtesy: Eric Xing
Hidden Markov Models By Parisa Abedi Slides courtesy: Eric Xing i.i.d to sequential data So far we assumed independent, identically distributed data Sequential (non i.i.d.) data Time-series data E.g. Speech
More informationNoise-Shaped Predictive Coding for Multiple Descriptions of a Colored Gaussian Source
Noise-Shaped Predictive Coding for Multiple Descriptions of a Colored Gaussian Source Yuval Kochman, Jan Østergaard, and Ram Zamir Abstract It was recently shown that the symmetric multiple-description
More informationImage Data Compression
Image Data Compression Image data compression is important for - image archiving e.g. satellite data - image transmission e.g. web data - multimedia applications e.g. desk-top editing Image data compression
More information