EE-597 Notes Quantization
Phil Schniter — June 2004

Given a continuous-time, continuous-amplitude signal x(t), processing and storage by modern digital hardware requires discretization in both time and amplitude, as accomplished by an analog-to-digital converter (ADC). We will typically work with discrete-time quantities x(n) (indexed by the time variable n) which we assume were sampled ideally and without aliasing. Here we discuss various ways of discretizing the amplitude of x(n) so that it may be represented by a finite set of numbers. This is generally a lossy operation, so we analyze the performance of various quantization schemes and design quantizers that are optimal under certain assumptions. A good reference for much of this material is [1].

1.1 Memoryless Scalar Quantization

Memoryless scalar quantization of a continuous-amplitude variable x is the mapping of x to the output y_k when x lies within the interval

    X_k := { x : x_k < x ≤ x_{k+1} },   k = 1, 2, ..., L.

The {x_k} are called decision thresholds, and L is the number of quantization levels. The quantization operation is written y = Q(x). When 0 ∈ {y_1, ..., y_L}, the quantizer is called midtread; otherwise it is midrise. The quantization error is defined as q := Q(x) − x. If x is a random variable with pdf p_x(x), and likewise for q, then the quantization error variance is

    σ_q² = E{q²} = ∫ q² p_q(q) dq                                          (1)
         = ∫ (Q(x) − x)² p_x(x) dx = Σ_k ∫_{X_k} (y_k − x)² p_x(x) dx.     (2)

A special case is the uniform quantizer:

    x_{k+1} = x_k + Δ  and  y_{k+1} = y_k + Δ  for the finite thresholds and outputs,
    with x_1 = −∞ and x_{L+1} = ∞.

© P. Schniter, 1999
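As a concrete illustration, here is a minimal midrise uniform quantizer in Python, a sketch of the quantizer just defined. The function name, the symmetric input range, and the small clipping guard are our own choices, not part of the notes.

```python
import numpy as np

def uniform_quantize(x, x_max, R):
    """Midrise uniform quantizer: L = 2**R levels over (-x_max, x_max),
    outputs at the midpoints of the decision intervals."""
    L = 2 ** R
    delta = 2 * x_max / L                    # stepsize
    # clip to the supported range (guard keeps x = +x_max in the top cell)
    xc = np.clip(x, -x_max, x_max - 1e-12)
    k = np.floor(xc / delta)                 # decision-interval index
    return (k + 0.5) * delta                 # reconstruction level y_k
```

For in-range inputs, the error magnitude never exceeds Δ/2, consistent with the error analysis that follows.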
Figure 1: (a) Uniform and (b) non-uniform quantization Q(x) and quantization error q(x).

Uniform Quantizer Performance for Large L: For a bounded input x ∈ (−x_max, x_max), uniform quantization with x_{k+1} = x_k + Δ and Δ = 2x_max/L, and with outputs at the interval midpoints (y_{k+1} = y_k + Δ, with Δ > 0), the quantization error is well approximated by a uniform distribution for large L:

    p_q(q) = 1/Δ  for −Δ/2 < q ≤ Δ/2,   and 0 else.

Why? As L → ∞, p_x(x) becomes essentially constant over X_k for any k. Since q = y_k − x for x ∈ X_k, it follows that p_q(q | x ∈ X_k) has a uniform distribution for any k. With x ∈ (−x_max, x_max) and with {x_k} and {y_k} as specified, q ∈ (−Δ/2, Δ/2] for all x (see Fig. 2). Hence, for any k,

    p_q(q | x ∈ X_k) = 1/Δ  for q ∈ (−Δ/2, Δ/2],   and 0 else.

Figure 2: Quantization error for bounded input and midpoint output y_k.
In this case, from (1),

    σ_q² = ∫_{−Δ/2}^{Δ/2} q² (1/Δ) dq = [ q³/(3Δ) ]_{−Δ/2}^{Δ/2} = Δ²/12.     (3)

If we use R bits to represent each discrete output y_k and choose L = 2^R, then Δ = 2 x_max/L = 2 x_max 2^{−R} and

    σ_q² = Δ²/12 = (x_max²/3) 2^{−2R},

so

    SNR [dB] = 10 log₁₀( σ_x²/σ_q² ) = 10 log₁₀( 3 σ_x² 2^{2R} / x_max² )
             ≈ 6.02 R − 10 log₁₀( x_max² / (3 σ_x²) ).

Recall that the expression above is only valid for σ_x small enough to ensure x ∈ (−x_max, x_max). For larger σ_x, the quantizer overloads and the SNR decreases rapidly.

Example 1.1 (SNR for Uniform Quantization of Uniformly-Distributed Input): For uniformly distributed x, one can show x_max/σ_x = √3, so that SNR ≈ 6.02 R dB.

Example 1.2 (SNR for Uniform Quantization of Sinusoidal Input): For a sinusoidal x, one can show x_max/σ_x = √2, so that SNR ≈ 6.02 R + 1.76 dB. (Interesting, since sine waves are often used as test signals.)

Example 1.3 (SNR for Uniform Quantization of Gaussian Input): Though not truly bounded, a Gaussian x might be considered approximately bounded if we choose x_max = 4σ_x and ignore the residual clipping. In this case SNR ≈ 6.02 R − 7.27 dB.

1.2 MSE-Optimal Memoryless Scalar Quantization

Though uniform quantization is convenient for implementation and analysis, non-uniform quantization yields lower σ_q² when p_x(x) is non-uniform. By decreasing |q(x)| for frequently occurring x (at the expense of increasing |q(x)| for infrequently occurring x), the average error power can be reduced.

Lloyd-Max Quantizer: MSE-optimal thresholds {x_k} and outputs {y_k} can be determined given an input distribution p_x(x), and the result is the Lloyd-Max quantizer. Necessary conditions on {x_k} and {y_k} are

    ∂σ_q²/∂x_k = 0 for k ∈ {2, ..., L}   and   ∂σ_q²/∂y_k = 0 for k ∈ {1, ..., L}.

Using (2) and Leibniz's rule ( ∂/∂b ∫_a^b f(x) dx = f(b), ∂/∂a ∫_a^b f(x) dx = −f(a) ), the conditions above give

    ∂σ_q²/∂x_k = (x_k − y_{k−1})² p_x(x_k) − (x_k − y_k)² p_x(x_k) = 0
        ⇒  x_k = (y_{k−1} + y_k)/2,  k ∈ {2, ..., L},   x_1 = −∞, x_{L+1} = ∞,      (4)

    ∂σ_q²/∂y_k = −2 ∫_{x_k}^{x_{k+1}} (x − y_k) p_x(x) dx = 0
        ⇒  y_k = ∫_{x_k}^{x_{k+1}} x p_x(x) dx / ∫_{x_k}^{x_{k+1}} p_x(x) dx,  k ∈ {1, ..., L}.   (5)

It can be shown that the conditions above are sufficient for a global MMSE when ∂² log p_x(x)/∂x² ≤ 0, which holds for uniform, Gaussian, and Laplace pdfs, but not Gamma. Note:
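A quick Monte Carlo check of the 6.02R rule for a uniformly distributed input; the sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x_max, R = 1.0, 8
L = 2 ** R
delta = 2 * x_max / L
x = rng.uniform(-x_max, x_max, 200_000)      # uniform input: x_max/sigma = sqrt(3)
y = (np.floor(x / delta) + 0.5) * delta      # midrise uniform quantizer
snr_db = 10 * np.log10(np.var(x) / np.mean((y - x) ** 2))
# theory predicts SNR ~ 6.02 * R = 48.2 dB for R = 8
```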
- the optimum decision thresholds are halfway between neighboring output values;
- the optimum output values are the centroids of the pdf within the appropriate interval, i.e., they are given by the conditional means

    y_k = E{ x | x ∈ X_k } = ∫ x p_x(x | x ∈ X_k) dx
        = ∫_{x_k}^{x_{k+1}} x p_x(x) dx / ∫_{x_k}^{x_{k+1}} p_x(x) dx.

Iterative Procedure to Find {x_k} and {y_k}:

1. Choose ŷ_1.
2. For k = 1, ..., L−1:
   given ŷ_k and x̂_k, solve (5) for x̂_{k+1};
   given ŷ_k and x̂_{k+1}, solve (4) for ŷ_{k+1}.
3. Compare ŷ_L to the y_L calculated from (5) based on x̂_L and x_{L+1}. Adjust ŷ_1 accordingly, and go to step 2.

Lloyd-Max Performance for Large L: As with the uniform quantizer, we can analyze the quantization error performance for large L. Here we assume that

1. the pdf p_x(x) is approximately constant over each X_k, k ∈ {1, ..., L};
2. the input is bounded, i.e., x ∈ (−x_max, x_max) for some (potentially large) x_max.

With assumption 1 and the definitions Δ_k := x_{k+1} − x_k and P_k := Pr{ x ∈ X_k }, we can write

    p_x(x) ≈ p_x(y_k) for x, y_k ∈ X_k,   and   P_k = p_x(y_k) Δ_k   (where we require Σ_k P_k = 1),

and thus, from (2), σ_q² becomes

    σ_q² ≈ Σ_k (P_k/Δ_k) ∫_{x_k}^{x_{k+1}} (y_k − x)² dx.     (6)

For MSE-optimal {y_k}, setting ∂σ_q²/∂y_k = 0 now gives y_k = (x_k + x_{k+1})/2, which is expected since the centroid of a flat pdf over X_k is simply the midpoint of X_k. Plugging this y_k into expression (6),

    σ_q² ≈ Σ_k (P_k/(3Δ_k)) [ (Δ_k/2)³ + (Δ_k/2)³ ] = (1/12) Σ_k P_k Δ_k².     (7)
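The fixed-point conditions (4) and (5) can also be solved by the standard alternating Lloyd iteration, a common variant of the procedure above, sketched here on a discretized pdf; the function and variable names are our own.

```python
import numpy as np

def lloyd_max(pdf_vals, grid, L, iters=100):
    """Alternate between the threshold rule (midpoints of neighboring
    outputs) and the output rule (centroids of the pdf in each cell)."""
    w = np.asarray(pdf_vals, dtype=float)
    w = w / w.sum()                                # normalized grid weights
    # initialize outputs at evenly spread grid positions
    y = np.quantile(grid, (np.arange(L) + 0.5) / L)
    for _ in range(iters):
        t = 0.5 * (y[:-1] + y[1:])                 # thresholds, cf. (4)
        idx = np.searchsorted(t, grid)             # cell index of each grid point
        for k in range(L):
            m = idx == k
            if w[m].sum() > 0:
                y[k] = np.average(grid[m], weights=w[m])  # centroids, cf. (5)
    return y, t
```

For a zero-mean Gaussian pdf and L = 2, the iteration converges to outputs near ±√(2/π) σ ≈ ±0.798 σ, the known Lloyd-Max solution for that case.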
Note that for uniform quantization (Δ_k = Δ), the expression above reduces to the one derived earlier. Now we minimize σ_q² with respect to {Δ_k}. The trick here is to define

    α_k := (p_x(y_k))^{1/3} Δ_k   so that   σ_q² = (1/12) Σ_k p_x(y_k) Δ_k³ = (1/12) Σ_k α_k³.

For p_x(x) constant over X_k and y_k ∈ X_k,

    Σ_k α_k = Σ_k (p_x(y_k))^{1/3} Δ_k ≈ ∫_{−x_max}^{x_max} (p_x(x))^{1/3} dx =: C   (a known constant),

so we have the following constrained optimization problem:

    min_{α_k} Σ_k α_k³   s.t.   Σ_k α_k = C.

This may be solved using Lagrange multipliers.

Aside 1 (Optimization via Lagrange Multipliers): Consider the problem of minimizing an N-dimensional real-valued cost function J(x), where x = (x_1, x_2, ..., x_N)^t, subject to M < N real-valued equality constraints f_m(x) = a_m, m = 1, ..., M. This may be converted into an unconstrained optimization of dimension N + M by introducing additional variables λ = (λ_1, ..., λ_M)^t known as Lagrange multipliers. The unconstrained cost function is

    J_u(x, λ) = J(x) + Σ_m λ_m ( f_m(x) − a_m ),

and necessary conditions for its minimization are

    ∇_x J_u(x, λ) = ∇_x J(x) + Σ_m λ_m ∇_x f_m(x) = 0,     (8)
    ∂J_u(x, λ)/∂λ_m = f_m(x) − a_m = 0   for m = 1, ..., M.  (9)

The typical procedure used to solve for the optimal x is the following:
1. Equations for x_n, n = 1, ..., N, in terms of {λ_m}, are obtained from (8).
2. These N equations are used in (9) to solve for the M optimal λ_m.
3. The optimal {λ_m} are plugged back into the N equations for x_n, yielding the optimal {x_n}.

For our problem, the necessary conditions are (for every l)

    ∂/∂α_l [ Σ_k α_k³ + λ ( Σ_k α_k − C ) ] = 3 α_l² + λ = 0   ⇒   α_l = √(−λ/3)  ∀l,
    ∂/∂λ  [ Σ_k α_k³ + λ ( Σ_k α_k − C ) ] = Σ_k α_k − C = 0   ⇒   Σ_k α_k = C,

which can be combined to solve for λ:

    L √(−λ/3) = C   ⇒   λ = −3 (C/L)².

Plugging λ back into the expression for α_l, we find α_l = C/L, ∀l.
Using the definition of α_l, the optimal decision spacing is

    Δ_l = (C/L) / (p_x(y_l))^{1/3} = (1/L) ∫_{−x_max}^{x_max} (p_x(x))^{1/3} dx / (p_x(y_l))^{1/3},

and the minimum quantization error variance is

    σ_q²|min = (1/12) Σ_l α_l³ = (1/(12 L²)) ( ∫_{−x_max}^{x_max} (p_x(x))^{1/3} dx )³.

An interesting observation is that α_l³, the l-th interval's optimal contribution to σ_q², is constant over l.

1.3 Entropy Coding

Binary Scalar Encoding: Previously we have focused on the memoryless scalar quantizer y = Q(x), where y takes a value from a set of L reconstruction levels. By coding each quantizer output in binary format, we transmit (store) the information at a rate (cost) of R = ⌈log₂ L⌉ bits/sample. If, for example, L = 8, then we transmit at 3 bits/sample. Say we can tolerate a bit more quantization error, e.g., as results from L = 5. We hope that this reduction in fidelity reduces our transmission requirements, but with this simple binary encoding scheme we still require R = 3 bits/sample!

Idea 1 — Block Coding: Let's assign a symbol to each block of 3 consecutive quantizer outputs. We need a symbol alphabet of size 5³ = 125, which is adequately represented by a 7-bit word (2⁷ = 128). Transmitting these words requires only 7/3 ≈ 2.33 bits/sample!

Idea 2 — Variable-Length Coding: Assume some of the quantizer outputs occur more frequently than others. Could we come up with an alphabet consisting of short words for representing frequent outputs and longer words for infrequent outputs that would have a lower average transmission rate?

Example 1.4 (Variable-Length Coding): Consider the quantizer with L = 4 and the output probabilities indicated below. Straightforward 2-bit encoding requires an average rate of 2 bits/sample, while the variable-length code below gives average rate R = Σ_k P_k n_k = 1.55 bits/sample.

    output   P_k    code
    y_1      0.60   0
    y_2      0.25   10
    y_3      0.10   110
    y_4      0.05   111

(Just enough information about) Entropy:
Q: Given an arbitrarily complex coding scheme, what is the minimum bits/sample required to transmit (store) the sequence {y(n)}?
A: When the random process {y(n)} is i.i.d., the minimum average bit rate is

    R_min = H_y + ε,
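The arithmetic of Example 1.4 is easy to reproduce; the probabilities and code lengths below are taken from the example.

```python
import math

P = [0.60, 0.25, 0.10, 0.05]     # output probabilities (Example 1.4)
n = [1, 2, 3, 3]                 # lengths of the codewords 0, 10, 110, 111
R_avg = sum(p * l for p, l in zip(P, n))    # average rate, bits/sample
H = -sum(p * math.log2(p) for p in P)       # entropy bound, bits
# R_avg = 1.55 bits/sample; H ~ 1.49 bits
```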
where H_y is the entropy of the random variable y(n) in bits,

    H_y = −Σ_k P_k log₂ P_k,

and ε is an arbitrarily small positive constant [2], [3]. Notes:

- Entropy obeys 0 ≤ H_y ≤ log₂ L. The left equality occurs when P_k = 1 for some k, while the right equality occurs when P_k = 1/L for every k.
- The term entropy refers to the average information of a single random variable, while the term entropy rate refers to a sequence of random variables, i.e., a random process.
- When {y(n)} is not independent (the focus of later sections), a different expression for R_min applies.
- Though the minimum rate is well specified, the construction of a coding scheme which always achieves this rate is not.

Example 1.5 (Entropy of Variable-Length Code): Recalling the setup of Example 1.4, we find that

    H_y = −( 0.6 log₂ 0.6 + 0.25 log₂ 0.25 + 0.1 log₂ 0.1 + 0.05 log₂ 0.05 ) ≈ 1.49 bits.

Assuming i.i.d. {y(n)}, we have R_min ≈ 1.49 bits/sample. Compare to the variable-length code of Example 1.4, which gave R = 1.55 bits/sample.

Huffman Encoding: Given quantizer outputs y_k, or fixed-length blocks of outputs (y_j y_k ... y_l), the Huffman procedure constructs variable-length codes that are optimal in certain respects [3]. For example, when the probabilities {P_k} are powers of 1/2 (and {y(n)} is i.i.d.), the average rate of a Huffman-encoded output attains R_min.

Aside 2 (Huffman Procedure, Binary Case):
1. Arrange the output probabilities P_k in decreasing order and consider them as leaf nodes of a tree.
2. While there exists more than one node: merge the two nodes with smallest probability to form a new node whose probability equals the sum of the two merged nodes, and arbitrarily assign 0 and 1 to the two branches of the merging pair.
3. The code assigned to each output is obtained by reading the branch bits sequentially from root node to leaf node.

Example 1.6 (Huffman Encoder Attaining R_min): In Aside 2, a Huffman code was constructed for the output probabilities listed below. Here

    H_y = −( 0.5 log₂ 0.5 + 0.25 log₂ 0.25 + 2 × 0.125 log₂ 0.125 ) = 1.75 bits,

so that R_min = 1.75 bits/sample (with the i.i.d. assumption).
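Aside 2 maps naturally onto a priority queue. Here is a compact sketch; the tie-breaking counter and the dict-based codebook are implementation choices, not part of the notes.

```python
import heapq
from itertools import count

def huffman(probs):
    """Binary Huffman code for the given symbol probabilities (Aside 2).
    Returns {symbol index: bit string}."""
    tie = count()  # tie-breaker so heapq never compares the code dicts
    heap = [(p, next(tie), {sym: ""}) for sym, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # two smallest-probability nodes
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]
```

For the dyadic probabilities of Example 1.6, the resulting code lengths are 1, 2, 3, 3 and the average rate equals the entropy, 1.75 bits/sample.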
Since the average bit rate for the Huffman code is also R = 1.75 bits/sample, Huffman encoding attains R_min for this output distribution.

    output   P_k     code
    y_1      0.5     0
    y_2      0.25    10
    y_3      0.125   110
    y_4      0.125   111
1.4 Quantizer Design for Entropy-Coded Systems

Say that we are designing a system with a memoryless quantizer followed by an entropy coder, and our goal is to minimize the average transmission rate for a given σ_q² (or vice versa). Is it optimal to cascade a σ_q²-minimizing (Lloyd-Max) quantizer with a rate-minimizing code? In other words, what is the optimal memoryless quantizer if the quantized outputs are to be entropy coded?

A Compander Formulation: To determine the optimal quantizer,
1. consider a companding system: a memoryless nonlinearity c(x) followed by a uniform quantizer;
2. find the c(x) minimizing the entropy H_y for a fixed error variance σ_q².

Figure 3: Compander curve: non-uniform input regions mapped to uniform output regions (for subsequent uniform quantization).

First we must express σ_q² and H_y in terms of c(x). Fig. 3 suggests that, for large L, the slope c'(x) := dc(x)/dx obeys

    c'(x)|_{x ∈ X_k} = (2 x_max/L) / Δ_k,   so that   Δ_k = 2 x_max / ( L c'(x)|_{x ∈ X_k} ).

Assuming large L, the σ_q² approximation (7) can be transformed as follows:

    σ_q² = (1/12) Σ_k P_k Δ_k²
         = (x_max²/(3L²)) Σ_k P_k ( c'(x)|_{x ∈ X_k} )⁻²
         = (x_max²/(3L²)) Σ_k p_x(y_k) ( c'(y_k) )⁻² Δ_k      since P_k = p_x(y_k) Δ_k for large L
         ≈ (x_max²/(3L²)) ∫_{−x_max}^{x_max} p_x(x) ( c'(x) )⁻² dx.
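As a sanity check of the compander-based error formula, take the identity compander c(x) = x and a uniform input on (−x_max, x_max), for which ∫ p_x(x)(c'(x))⁻² dx = 1 and the prediction reduces to σ_q² ≈ x_max²/(3L²) = Δ²/12. The simulation parameters below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
x_max, L = 1.0, 64
delta = 2 * x_max / L
x = rng.uniform(-x_max, x_max, 500_000)
y = (np.floor(x / delta) + 0.5) * delta    # uniform quantizer (c(x) = x)
mse = np.mean((y - x) ** 2)
predicted = x_max ** 2 / (3 * L ** 2)      # compander formula with c'(x) = 1
```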
Similarly,

    H_y = −Σ_k P_k log₂ P_k
        = −Σ_k p_x(y_k) Δ_k log₂ ( p_x(y_k) Δ_k )
        ≈ −∫ p_x(x) log₂ p_x(x) dx − ∫ p_x(x) log₂ Δ(x) dx
        = h_x − ∫ p_x(x) log₂ ( 2 x_max / (L c'(x)) ) dx
        = h_x − log₂ (2 x_max/L) + ∫ p_x(x) log₂ c'(x) dx,     (10)

where h_x := −∫ p_x(x) log₂ p_x(x) dx is the differential entropy of x and Δ(x) := 2 x_max/(L c'(x)) is the local decision spacing.

Entropy-Minimizing Quantizer: Our goal is to choose the c(x) which minimizes the entropy rate H_y subject to a fixed error variance σ_q². We employ a Lagrange technique again, minimizing ∫ p_x(x) log₂ c'(x) dx under the constraint that the quantity ∫_{−x_max}^{x_max} p_x(x) ( c'(x) )⁻² dx equals a constant C. This yields the unconstrained cost function

    J_u( c'(x), λ ) = ∫ [ p_x(x) log₂ c'(x) + λ ( p_x(x) (c'(x))⁻² − C ) ] dx =: ∫ φ( c'(x), λ ) dx,   (11)

with scalar λ, and the unconstrained optimization problem becomes min over c'(x) and λ of J_u( c'(x), λ ).

The following technique is common in variational calculus [4]. Say c'_o(x) minimizes a (scalar) cost J( c'(x) ). Then for any (well-behaved) variation η(x) from this optimal c'_o(x), we must have

    (∂/∂ε) J( c'_o(x) + ε η(x) ) |_{ε=0} = 0,

where ε is a scalar. Applying this principle to our optimization problem, we search for the c'(x) such that, for all η(x),

    (∂/∂ε) J_u( c'(x) + ε η(x), λ ) |_{ε=0} = 0.

From (11) we find (using log₂ a = log₂(e) ln a)

    (∂J_u/∂ε)|_{ε=0} = ∫ (∂/∂ε) φ( c'(x) + ε η(x), λ ) |_{ε=0} dx
                     = ∫ [ p_x(x) log₂(e) η(x) / c'(x) − 2 λ p_x(x) (c'(x))⁻³ η(x) ] dx
                     = ∫ p_x(x) (c'(x))⁻¹ [ log₂(e) − 2 λ (c'(x))⁻² ] η(x) dx = 0,
and for this to hold for any η(x) we require

    log₂(e) − 2 λ ( c'(x) )⁻² = 0   ⇒   c'(x) = √( 2λ / log₂ e ),   a constant!

Applying the boundary conditions c(x_max) = x_max and c(−x_max) = −x_max gives c(x) = x. Thus, for large L, the quantizer that minimizes the entropy rate H_y for a given quantization error variance σ_q² is the uniform quantizer.

Plugging c(x) = x into (10), the rightmost integral vanishes and we have

    H_y|uniform = h_x − log₂ (2 x_max/L) = h_x − log₂ Δ,

and using the large-L uniform-quantizer error variance approximation (3), Δ² = 12 σ_q², so

    H_y|uniform = h_x − (1/2) log₂ ( 12 σ_q² ).

It is interesting to compare this result to the information-theoretic minimum average rate for transmission of a continuous-amplitude memoryless source x of differential entropy h_x at average distortion σ_q² [1, 2]:

    R_min = h_x − (1/2) log₂ ( 2πe σ_q² ).

Comparing the previous two equations, we find that (for a continuous-amplitude memoryless source) uniform quantization prior to entropy coding requires (1/2) log₂ (2πe/12) ≈ 0.255 bits/sample more than the theoretically optimum transmission scheme, regardless of the distribution of x. Thus ≈ 0.255 bits/sample (or ≈ 1.5 dB, using the 6.02 dB/bit relationship) is the price paid for memoryless quantization.

1.5 Adaptive Quantization

Previously we considered the case of stationary source processes, though in reality the source signal may be highly non-stationary. For example, the variance, pdf, and/or mean may vary significantly with time. Here we concentrate on the problem of adapting a uniform quantizer stepsize to a signal with unknown variance. This is accomplished by estimating the input variance σ̂_x²(n) and setting the quantizer stepsize appropriately:

    Δ(n) = 2 φ σ̂_x(n) / L.

Here φ is a constant, dependent on the distribution of the input signal, whose function is to prevent input values of magnitude up to about φ σ̂_x(n) from being clipped by the quantizer (see Fig. 4); comparing to the non-adaptive stepsize relation Δ = 2 x_max / L, we see that φ σ̂_x(n) plays the role of x_max.
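The constant gap quoted above is a one-line computation:

```python
import math

# rate penalty of uniform quantization + entropy coding relative to the
# rate-distortion bound (large-L, fixed sigma_q^2), for any input pdf
gap_bits = 0.5 * math.log2(2 * math.pi * math.e / 12)   # ~0.255 bits/sample
gap_db = 6.02 * gap_bits                                # ~1.53 dB
```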
Figure 4: Adaptive quantization stepsize Δ(n) = 2 φ σ̂_x(n) / L.

As long as the reconstruction levels {y_k} are the same at encoder and decoder, the actual values chosen for the quantizer design are arbitrary. Assuming a quantizer aligned to integer multiples of Δ(n) as in Fig. 4, the quantization rule becomes

    y(n) = ( ⌊ x(n)/Δ(n) ⌋ + 1/2 ) Δ(n)    (midrise),
    y(n) = ⌊ x(n)/Δ(n) + 1/2 ⌋ Δ(n)        (midtread).

AQF and AQB: Fig. 5 shows two structures for stepsize adaptation: (a) adaptive quantization with forward estimation (AQF) and (b) adaptive quantization with backward estimation (AQB). The advantage of AQF is that variance estimation may be accomplished more accurately, since it operates directly on the source as opposed to a quantized (noisy) version of the source. The advantage of AQB is that the variance estimates do not need to be transmitted as side information for decoding. In fact, practical AQF encoders transmit variance estimates only occasionally, e.g., once per block.

Figure 5: (a) AQF: the encoder's variance estimator operates on x(n) and its estimate is sent over the channel. (b) AQB: identical variance estimators at encoder and decoder operate on y(n).
Block Variance Estimation: When operating on finite blocks of data, the structures in Fig. 5 perform variance estimation as follows:

    Block AQF:  σ̂_x²(n) = (1/N) Σ_{i=1}^{N} x²(n−i)
    Block AQB:  σ̂_x²(n) = (1/N) Σ_{i=1}^{N} y²(n−i)

N is termed the learning period, and its choice may significantly impact quantizer SNR performance: choosing N too large prevents the quantizer from adapting to the local statistics of the input, while choosing N too small results in overly noisy AQB variance estimates and excessive AQF side information. Fig. 6 demonstrates these two schemes for two choices of N.

Figure 6: Block AQF and AQB estimates of σ_x(n) superimposed on x(n) for N = 8 and N = 32, with the SNR achieved in each of the four cases: (a) AQF and (b) AQB for one N; (c) AQF and (d) AQB for the other.
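The block estimators can be sketched directly; treating samples before time 0 as zero is our own start-up convention.

```python
import numpy as np

def block_aqf_var(x, N):
    """Block AQF: sigma2_hat[n] = (1/N) * sum_{i=1..N} x[n-i]**2,
    with x[m] taken as 0 for m < 0 (start-up convention)."""
    x = np.asarray(x, dtype=float)
    out = np.empty(len(x))
    for n in range(len(x)):
        past = x[max(0, n - N): n] ** 2   # the N samples strictly before n
        out[n] = past.sum() / N
    return out
```

Replacing the input x by the quantized output y gives the Block AQB form.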
Recursive Variance Estimation: The recursive method of estimating variance is as follows:

    Recursive AQF:  σ̂_x²(n) = α σ̂_x²(n−1) + (1−α) x²(n−1)
    Recursive AQB:  σ̂_x²(n) = α σ̂_x²(n−1) + (1−α) y²(n−1),

where α is a forgetting factor in the range 0 < α < 1, typically near 1. This leads to an exponential data window, as can be seen below. Plugging the expression for σ̂_x²(n−1) into that for σ̂_x²(n),

    σ̂_x²(n) = α ( α σ̂_x²(n−2) + (1−α) x²(n−2) ) + (1−α) x²(n−1)
            = α² σ̂_x²(n−2) + (1−α) ( x²(n−1) + α x²(n−2) ).

Then plugging σ̂_x²(n−2) into the above,

    σ̂_x²(n) = α² ( α σ̂_x²(n−3) + (1−α) x²(n−3) ) + (1−α) ( x²(n−1) + α x²(n−2) )
            = α³ σ̂_x²(n−3) + (1−α) ( x²(n−1) + α x²(n−2) + α² x²(n−3) ).

Continuing this process N times, we arrive at

    σ̂_x²(n) = (1−α) Σ_{i=1}^{N} α^{i−1} x²(n−i) + α^N σ̂_x²(n−N).

Taking the limit as N → ∞, α < 1 ensures that

    σ̂_x²(n) = (1−α) Σ_{i=1}^{∞} α^{i−1} x²(n−i).

References

[1] N. S. Jayant and P. Noll, Digital Coding of Waveforms, Englewood Cliffs, NJ: Prentice-Hall, 1984.
[2] T. Berger, Rate Distortion Theory, Englewood Cliffs, NJ: Prentice-Hall, 1971.
[3] T. M. Cover and J. A. Thomas, Elements of Information Theory, New York, NY: Wiley, 1991.
[4] A. P. Sage and C. C. White, III, Optimum Systems Control, 2nd Ed., Englewood Cliffs, NJ: Prentice-Hall, 1977.
Figure 7: Exponential (recursive) AQF and AQB estimates of σ_x(n) superimposed on x(n) for forgetting factors 0.9 and 0.99, with the SNR achieved in each of the four cases: (a) AQF and (b) AQB for one setting; (c) AQF and (d) AQB for the other.
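The recursive (exponentially windowed) estimators of Section 1.5 can be sketched as follows; the zero initialization is our own choice, and replacing x[n] by the quantized output y[n] gives the AQB form.

```python
def recursive_var(x, alpha, init=0.0):
    """Recursive AQF: sigma2[n] = alpha*sigma2[n-1] + (1-alpha)*x[n-1]**2.
    out[n] is the estimate formed after observing x[0..n]."""
    s = init
    out = []
    for xn in x:
        s = alpha * s + (1 - alpha) * xn ** 2   # one-pole update
        out.append(s)
    return out
```

Unrolling the recursion reproduces the exponential window (1−α) Σ α^{i−1} x²(n−i) derived above; for x = [1, 2, 3] and α = 0.9 the third estimate is 0.1·(9 + 0.9·4 + 0.81·1) = 1.341.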
Haberlesme Sistemlerine Giris (ELE 361) 6 Agustos 2017 TOBB Ekonomi ve Teknoloji Universitesi, Guz 2017-18 Dr. A. Melda Yuksel Turgut & Tolga Girici Lecture Notes Chapter 7 Analog to Digital Conversion
More informationECE Advanced Communication Theory, Spring 2009 Homework #1 (INCOMPLETE)
ECE 74 - Advanced Communication Theory, Spring 2009 Homework #1 (INCOMPLETE) 1. A Huffman code finds the optimal codeword to assign to a given block of source symbols. (a) Show that cannot be a Huffman
More informationLloyd-Max Quantization of Correlated Processes: How to Obtain Gains by Receiver-Sided Time-Variant Codebooks
Lloyd-Max Quantization of Correlated Processes: How to Obtain Gains by Receiver-Sided Time-Variant Codebooks Sai Han and Tim Fingscheidt Institute for Communications Technology, Technische Universität
More informationBASICS OF COMPRESSION THEORY
BASICS OF COMPRESSION THEORY Why Compression? Task: storage and transport of multimedia information. E.g.: non-interlaced HDTV: 0x0x0x = Mb/s!! Solutions: Develop technologies for higher bandwidth Find
More informationE303: Communication Systems
E303: Communication Systems Professor A. Manikas Chair of Communications and Array Processing Imperial College London Principles of PCM Prof. A. Manikas (Imperial College) E303: Principles of PCM v.17
More informationEE368B Image and Video Compression
EE368B Image and Video Compression Homework Set #2 due Friday, October 20, 2000, 9 a.m. Introduction The Lloyd-Max quantizer is a scalar quantizer which can be seen as a special case of a vector quantizer
More informationSignals and Spectra - Review
Signals and Spectra - Review SIGNALS DETERMINISTIC No uncertainty w.r.t. the value of a signal at any time Modeled by mathematical epressions RANDOM some degree of uncertainty before the signal occurs
More informationMultimedia Communications. Differential Coding
Multimedia Communications Differential Coding Differential Coding In many sources, the source output does not change a great deal from one sample to the next. This means that both the dynamic range and
More informationCompression methods: the 1 st generation
Compression methods: the 1 st generation 1998-2017 Josef Pelikán CGG MFF UK Praha pepca@cgg.mff.cuni.cz http://cgg.mff.cuni.cz/~pepca/ Still1g 2017 Josef Pelikán, http://cgg.mff.cuni.cz/~pepca 1 / 32 Basic
More informationPrinciples of Communications
Principles of Communications Weiyao Lin Shanghai Jiao Tong University Chapter 10: Information Theory Textbook: Chapter 12 Communication Systems Engineering: Ch 6.1, Ch 9.1~ 9. 92 2009/2010 Meixia Tao @
More informationRevision of Lecture 5
Revision of Lecture 5 Information transferring across channels Channel characteristics and binary symmetric channel Average mutual information Average mutual information tells us what happens to information
More informationVector Quantization. Institut Mines-Telecom. Marco Cagnazzo, MN910 Advanced Compression
Institut Mines-Telecom Vector Quantization Marco Cagnazzo, cagnazzo@telecom-paristech.fr MN910 Advanced Compression 2/66 19.01.18 Institut Mines-Telecom Vector Quantization Outline Gain-shape VQ 3/66 19.01.18
More informationUNIT I INFORMATION THEORY. I k log 2
UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper
More informationEstimators as Random Variables
Estimation Theory Overview Properties Bias, Variance, and Mean Square Error Cramér-Rao lower bound Maimum likelihood Consistency Confidence intervals Properties of the mean estimator Introduction Up until
More informationOptimization of Quantizer s Segment Treshold Using Spline Approximations for Optimal Compressor Function
Applied Mathematics, 1, 3, 143-1434 doi:1436/am1331 Published Online October 1 (http://wwwscirporg/ournal/am) Optimization of Quantizer s Segment Treshold Using Spline Approimations for Optimal Compressor
More informationConstructing Polar Codes Using Iterative Bit-Channel Upgrading. Arash Ghayoori. B.Sc., Isfahan University of Technology, 2011
Constructing Polar Codes Using Iterative Bit-Channel Upgrading by Arash Ghayoori B.Sc., Isfahan University of Technology, 011 A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree
More informationMultimedia Communications. Mathematical Preliminaries for Lossless Compression
Multimedia Communications Mathematical Preliminaries for Lossless Compression What we will see in this chapter Definition of information and entropy Modeling a data source Definition of coding and when
More informationencoding without prediction) (Server) Quantization: Initial Data 0, 1, 2, Quantized Data 0, 1, 2, 3, 4, 8, 16, 32, 64, 128, 256
General Models for Compression / Decompression -they apply to symbols data, text, and to image but not video 1. Simplest model (Lossless ( encoding without prediction) (server) Signal Encode Transmit (client)
More information10-704: Information Processing and Learning Fall Lecture 10: Oct 3
0-704: Information Processing and Learning Fall 206 Lecturer: Aarti Singh Lecture 0: Oct 3 Note: These notes are based on scribed notes from Spring5 offering of this course. LaTeX template courtesy of
More informationThe Secrets of Quantization. Nimrod Peleg Update: Sept. 2009
The Secrets of Quantization Nimrod Peleg Update: Sept. 2009 What is Quantization Representation of a large set of elements with a much smaller set is called quantization. The number of elements in the
More informationShannon s A Mathematical Theory of Communication
Shannon s A Mathematical Theory of Communication Emre Telatar EPFL Kanpur October 19, 2016 First published in two parts in the July and October 1948 issues of BSTJ. First published in two parts in the
More informationInformation Sources. Professor A. Manikas. Imperial College London. EE303 - Communication Systems An Overview of Fundamentals
Information Sources Professor A. Manikas Imperial College London EE303 - Communication Systems An Overview of Fundamentals Prof. A. Manikas (Imperial College) EE303: Information Sources 24 Oct. 2011 1
More informationVector Quantization and Subband Coding
Vector Quantization and Subband Coding 18-796 ultimedia Communications: Coding, Systems, and Networking Prof. Tsuhan Chen tsuhan@ece.cmu.edu Vector Quantization 1 Vector Quantization (VQ) Each image block
More informationVID3: Sampling and Quantization
Video Transmission VID3: Sampling and Quantization By Prof. Gregory D. Durgin copyright 2009 all rights reserved Claude E. Shannon (1916-2001) Mathematician and Electrical Engineer Worked for Bell Labs
More informationSource Coding. Master Universitario en Ingeniería de Telecomunicación. I. Santamaría Universidad de Cantabria
Source Coding Master Universitario en Ingeniería de Telecomunicación I. Santamaría Universidad de Cantabria Contents Introduction Asymptotic Equipartition Property Optimal Codes (Huffman Coding) Universal
More informationChannel Coding 1. Sportturm (SpT), Room: C3165
Channel Coding Dr.-Ing. Dirk Wübben Institute for Telecommunications and High-Frequency Techniques Department of Communications Engineering Room: N3, Phone: 4/8-6385 Sportturm (SpT), Room: C365 wuebben@ant.uni-bremen.de
More informationUncertainity, Information, and Entropy
Uncertainity, Information, and Entropy Probabilistic experiment involves the observation of the output emitted by a discrete source during every unit of time. The source output is modeled as a discrete
More informationEE67I Multimedia Communication Systems
EE67I Multimedia Communication Systems Lecture 5: LOSSY COMPRESSION In these schemes, we tradeoff error for bitrate leading to distortion. Lossy compression represents a close approximation of an original
More informationECE 587 / STA 563: Lecture 5 Lossless Compression
ECE 587 / STA 563: Lecture 5 Lossless Compression Information Theory Duke University, Fall 2017 Author: Galen Reeves Last Modified: October 18, 2017 Outline of lecture: 5.1 Introduction to Lossless Source
More informationECE 587 / STA 563: Lecture 5 Lossless Compression
ECE 587 / STA 563: Lecture 5 Lossless Compression Information Theory Duke University, Fall 28 Author: Galen Reeves Last Modified: September 27, 28 Outline of lecture: 5. Introduction to Lossless Source
More informationMultiple Bits Distributed Moving Horizon State Estimation for Wireless Sensor Networks. Ji an Luo
Multiple Bits Distributed Moving Horizon State Estimation for Wireless Sensor Networks Ji an Luo 2008.6.6 Outline Background Problem Statement Main Results Simulation Study Conclusion Background Wireless
More informationCS578- Speech Signal Processing
CS578- Speech Signal Processing Lecture 7: Speech Coding Yannis Stylianou University of Crete, Computer Science Dept., Multimedia Informatics Lab yannis@csd.uoc.gr Univ. of Crete Outline 1 Introduction
More information12.4 Known Channel (Water-Filling Solution)
ECEn 665: Antennas and Propagation for Wireless Communications 54 2.4 Known Channel (Water-Filling Solution) The channel scenarios we have looed at above represent special cases for which the capacity
More informationCompression and Coding. Theory and Applications Part 1: Fundamentals
Compression and Coding Theory and Applications Part 1: Fundamentals 1 Transmitter (Encoder) What is the problem? Receiver (Decoder) Transformation information unit Channel Ordering (significance) 2 Why
More informationChapter 4. Data Transmission and Channel Capacity. Po-Ning Chen, Professor. Department of Communications Engineering. National Chiao Tung University
Chapter 4 Data Transmission and Channel Capacity Po-Ning Chen, Professor Department of Communications Engineering National Chiao Tung University Hsin Chu, Taiwan 30050, R.O.C. Principle of Data Transmission
More informationIntroduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p.
Preface p. xvii Introduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p. 6 Summary p. 10 Projects and Problems
More information1 EM algorithm: updating the mixing proportions {π k } ik are the posterior probabilities at the qth iteration of EM.
Université du Sud Toulon - Var Master Informatique Probabilistic Learning and Data Analysis TD: Model-based clustering by Faicel CHAMROUKHI Solution The aim of this practical wor is to show how the Classification
More informationChapter 3 Single Random Variables and Probability Distributions (Part 1)
Chapter 3 Single Random Variables and Probability Distributions (Part 1) Contents What is a Random Variable? Probability Distribution Functions Cumulative Distribution Function Probability Density Function
More informationOn the Cost of Worst-Case Coding Length Constraints
On the Cost of Worst-Case Coding Length Constraints Dror Baron and Andrew C. Singer Abstract We investigate the redundancy that arises from adding a worst-case length-constraint to uniquely decodable fixed
More informationFractal Dimension and Vector Quantization
Fractal Dimension and Vector Quantization [Extended Abstract] Krishna Kumaraswamy Center for Automated Learning and Discovery, Carnegie Mellon University skkumar@cs.cmu.edu Vasileios Megalooikonomou Department
More informationBlock 2: Introduction to Information Theory
Block 2: Introduction to Information Theory Francisco J. Escribano April 26, 2015 Francisco J. Escribano Block 2: Introduction to Information Theory April 26, 2015 1 / 51 Table of contents 1 Motivation
More informationCMPT 365 Multimedia Systems. Final Review - 1
CMPT 365 Multimedia Systems Final Review - 1 Spring 2017 CMPT365 Multimedia Systems 1 Outline Entropy Lossless Compression Shannon-Fano Coding Huffman Coding LZW Coding Arithmetic Coding Lossy Compression
More informationConstellation Shaping for Communication Channels with Quantized Outputs
Constellation Shaping for Communication Channels with Quantized Outputs Chandana Nannapaneni, Matthew C. Valenti, and Xingyu Xiang Lane Department of Computer Science and Electrical Engineering West Virginia
More informationEE 5345 Biomedical Instrumentation Lecture 12: slides
EE 5345 Biomedical Instrumentation Lecture 1: slides 4-6 Carlos E. Davila, Electrical Engineering Dept. Southern Methodist University slides can be viewed at: http:// www.seas.smu.edu/~cd/ee5345.html EE
More information