C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University


1 Quantization C.M. Liu Perceptual Signal Processing Lab College of Computer Science National Chiao-Tung University Office: EC538 (03)

2 Contents: Quantization Problem, Uniform Quantization, Adaptive Quantization, Nonuniform Quantization, Entropy-Coded Quantization, Vector Quantization, Rate-Distortion Function

3 Quantization
Definition: the process of representing a large, possibly infinite set of values with a smaller set.
Example: source = real numbers in the interval [-10.0, 10.0]; quantizer Q(x) = \lfloor x + 0.5 \rfloor maps [-10.0, 10.0] to the set {-10, -9, ..., -1, 0, 1, ..., 9, 10}.
Scalar vs. vector quantization: scalar quantization is applied to scalars, vector quantization to vectors.
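As an illustration of the rounding quantizer in the example above, a minimal Python sketch (the function name and test values are my own, not from the slides):

```python
import math

def quantize(x: float) -> int:
    """Rounding quantizer Q(x) = floor(x + 0.5) on the source range [-10.0, 10.0]."""
    x = max(-10.0, min(10.0, x))      # clamp to the source range
    return math.floor(x + 0.5)

print([quantize(v) for v in (-10.0, -3.2, 0.49, 0.5, 9.7)])
# -> [-10, -3, 0, 1, 10]
```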

4 The Quantization Process
Two aspects:
Encoder mapping: map a range of values to a codeword. If the source is analog, this is an A/D converter. Knowledge of the source can help pick more appropriate ranges.
Decoder mapping: map the codeword to a value in the range. If the output is analog, this is a D/A converter. Knowledge of the source distribution can help pick better approximations.
Quantizer = encoder + decoder.

5 Quantization Example (figure)

6 Quantizer Input-Output Map (figure)

7 Quantization Problem Formulation
Input: X, a random variable with probability density function (pdf) f_X(x).
Output: decision boundaries {b_i}_{i=0..M} and reconstruction levels {y_i}_{i=1..M}.
Discrete processes are often approximated by continuous distributions, e.g., a Laplacian model of pixel differences.
If the source is unbounded, the first/last decision boundaries are ±∞.

8 Quantization Error
Q(x) = y_i \quad \text{iff} \quad b_{i-1} < x \le b_i
Mean squared quantization error:
\sigma_q^2 = \int (x - Q(x))^2 f_X(x) \, dx = \sum_{i=1}^{M} \int_{b_{i-1}}^{b_i} (x - y_i)^2 f_X(x) \, dx
Quantization error is also known as quantization noise or quantizer distortion.
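A small numerical check of the MSQE expression above; this is a sketch assuming a zero-mean, unit-variance Gaussian input and an arbitrary 4-level quantizer (the boundaries, levels, and integration grid are illustrative choices, not values from the slides):

```python
import numpy as np

def msqe(boundaries, levels, pdf, x_min=-8.0, x_max=8.0, n=200_000):
    """Numerically evaluate sigma_q^2 = sum_i integral over [b_{i-1}, b_i] of (x - y_i)^2 f_X(x) dx."""
    x = np.linspace(x_min, x_max, n)
    idx = np.searchsorted(boundaries, x)                   # interval index for every grid point
    err2 = (x - np.asarray(levels)[idx]) ** 2
    return float(np.sum(err2 * pdf(x)) * (x[1] - x[0]))    # Riemann-sum approximation

gauss = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
# 4-level quantizer: inner boundaries at -1, 0, 1; one reconstruction level per interval
print(msqe(boundaries=[-1.0, 0.0, 1.0], levels=[-1.5, -0.5, 0.5, 1.5], pdf=gauss))
```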

9 Quantization Problem Formulation (2)
Bit rate with fixed-length codewords: R = \lceil \log_2 M \rceil. E.g., M = 8 gives R = 3.
Quantizer design problem:
Given: input pdf f_X(x) and M.
Find: decision boundaries {b_i} and reconstruction levels {y_i}.
Such that: the MSQE is minimized.

10 Quantization Problem Formulation (3)
Bit rate with variable-length codewords: R depends on the boundary selection.
R = \sum_{i=1}^{M} l_i \, P(y_i), \qquad P(y_i) = \int_{b_{i-1}}^{b_i} f_X(x) \, dx
so
R = \sum_{i=1}^{M} l_i \int_{b_{i-1}}^{b_i} f_X(x) \, dx

11 Quantization Problem Formulation (4)
Rate-optimization formulation:
Given: distortion constraint \sigma_q^2 \le D^*.
Find: {b_i}, {y_i}, binary codes.
Such that: R is minimized.
Distortion-optimization formulation:
Given: rate constraint R \le R^*.
Find: {b_i}, {y_i}, binary codes.
Such that: \sigma_q^2 is minimized.

12 Uniform Quantizer
All intervals are of the same size, i.e., the boundaries are evenly spaced (step size Δ); the outer intervals may be an exception.
Reconstruction: usually the midpoint of the interval is selected.
Midrise quantizer: zero is not an output level.
Midtread quantizer: zero is an output level.

13 Midrise vs. Midtread Quantizer (figure)

14 Uniform Quantization of Uniform Source
Input: uniform on [-X_max, X_max]. Output: M-level uniform quantizer.
\Delta = \frac{2 X_{max}}{M}
\sigma_q^2 = 2 \sum_{i=1}^{M/2} \int_{(i-1)\Delta}^{i\Delta} \left( x - \frac{2i - 1}{2} \Delta \right)^2 \frac{1}{2 X_{max}} \, dx = \frac{\Delta^2}{12}

15 Alternative MSQE Derivation
Consider the quantization error directly: q = x - Q(x), with q \in [-\Delta/2, \Delta/2] and uniformly distributed.
\sigma_q^2 = \frac{1}{\Delta} \int_{-\Delta/2}^{\Delta/2} q^2 \, dq = \frac{\Delta^2}{12}
SNR(dB) = 10 \log_{10} \frac{\sigma_s^2}{\sigma_q^2} = 10 \log_{10} \frac{(2 X_{max})^2 / 12}{\Delta^2 / 12} = 10 \log_{10} M^2 = 20 \log_{10} 2^n \approx 6.02\, n \ \text{dB}
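A quick simulation checking the 6.02 dB-per-bit rule derived above; this is a sketch assuming a uniform source and a midrise uniform quantizer (variable names and sample counts are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
x_max = 1.0
x = rng.uniform(-x_max, x_max, 1_000_000)            # uniform source on [-X_max, X_max)

for n_bits in (2, 4, 6, 8):
    M = 2 ** n_bits
    delta = 2 * x_max / M                             # step size
    q = (np.floor(x / delta) + 0.5) * delta           # midrise uniform quantizer
    snr_db = 10 * np.log10(np.var(x) / np.mean((x - q) ** 2))
    print(f"{n_bits} bits: SNR = {snr_db:5.2f} dB (theory {6.02 * n_bits:5.2f} dB)")
```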

16 Examples (8 → 1, 2, 3 bits/pixel): darkening, contouring & dithering (figures)

17 Uniform Quantization of Nonuniform Sources
Example nonuniform source: x ∈ [-100, 100], with P(x ∈ [-1, 1]) = 0.95.
Problem: design an 8-level quantizer.
The naive approach (Δ = 25) leads to 95% of the sample values being represented by just two numbers, -12.5 and 12.5: max quantization error (QE) = 12.5, and for those samples min QE = 11.5 (!).
Consider an alternative: step size 0.3. Max QE = 98.95, however 95% of the time QE < 0.15.

18 Optimizing MSQE: numerically solvable for a specific pdf.

19 Example Optimum Step Sizes (figure)

20 QE for 3-bit Midrise Quantizer (figure)

21 Overload/Granular Regions
Step selection is a tradeoff between overload noise and granular noise.
Loading factor f_l = (maximum granular value) / (standard deviation); f_l = 4 is called "4σ loading".

22 Variance Mismatch Effects

23 Variance Mismatch Effects (2) (figure)

24 Distribution Mismatch Effects: SNR of 8-level quantizers (figure)

25 Adaptive Quantization
Idea: instead of a static scheme, adapt to the actual data (mean, variance, pdf).
Forward adaptive (off-line): divide the source into blocks, analyze the block statistics, set the quantization scheme, and transmit it over a side channel.
Backward adaptive (on-line): adaptation is based on the quantizer output, so no side channel is necessary.

26 Forward Adaptive Quantization (FAQ)
Choosing the block size:
Too large: not enough resolution; increased latency.
Too small: more side-channel information.
Assuming a mean of zero, the variance estimate is
\hat{\sigma}_q^2 = \frac{1}{N} \sum_{i=0}^{N-1} x_{n+i}^2
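A minimal sketch of forward adaptive quantization as described above: estimate the variance of each block, scale the step size of a uniform midrise quantizer by the estimated standard deviation, and keep the per-block value as side information. The block size, bit allocation, loading factor, and function names are illustrative assumptions, not values from the slides:

```python
import numpy as np

def faq_encode(x, block_size=128, n_bits=3, loading=4.0):
    """Forward adaptive quantization: per-block variance estimate + uniform midrise quantizer."""
    M = 2 ** n_bits
    recon = np.empty_like(x, dtype=float)
    side_info = []                                     # per-block std dev (would be coarsely quantized)
    for start in range(0, len(x), block_size):
        block = x[start:start + block_size]
        sigma = np.sqrt(np.mean(block ** 2))           # variance estimate, assuming zero mean
        side_info.append(sigma)
        delta = 2 * loading * sigma / M if sigma > 0 else 1.0    # cover roughly [-4 sigma, 4 sigma]
        idx = np.clip(np.floor(block / delta), -M // 2, M // 2 - 1)
        recon[start:start + block_size] = (idx + 0.5) * delta
    return recon, side_info

rng = np.random.default_rng(1)
signal = rng.normal(0, 1, 1024) * np.repeat(rng.uniform(0.1, 5.0, 8), 128)   # level varies per block
rec, side = faq_encode(signal)
print("SNR (dB):", 10 * np.log10(np.var(signal) / np.mean((signal - rec) ** 2)))
```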

27 Speech Quantization Example: 16-bit speech with a fixed 3-bit quantizer (figure)

28 Speech Quantization Example (2): 16-bit speech, 3-bit FAQ, blocks of 128 samples, 8-bit variance quantization (figure)

29 FAQ Refinement
So far we assumed a uniform pdf.
Refinement: still assume a uniform pdf, but record the min/max values for each block.
Example: Sena image, 8x8 blocks, 3-bit quantization. Overhead = 16/(8x8) = 0.25 bits/pixel.

30 FAQ Refinement Example: original 8 bits/pixel, quantized 3.25 bits/pixel (figure)

31 Backward Adaptive Quantization (BAQ)
Observation: only the encoder sees the input, so adaptation can only be based on the quantized output.
Problem: how do we deduce mismatch information from the output only?
It is possible, if we know the pdf and we are very patient.

32 Jayant Quantizer
Idea: if the input falls in the outer levels, expand the step size; if it falls in the inner levels, contract the step size. The product of the expansions and contractions should be 1.
Multipliers M_k: if the input S_{n-1} falls in the k-th interval, the step size is multiplied by M_k. Inner intervals have M_k < 1, outer intervals M_k > 1:
\Delta_n = M_{l(n-1)} \Delta_{n-1}
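A sketch of this backward-adaptive step-size rule for a 3-bit midrise quantizer. The multipliers follow the example slide below (as reconstructed there); the initial step size, clamping limits, and input sequence are assumptions of mine:

```python
def jayant_quantize(samples, multipliers, delta=0.5, d_min=1e-4, d_max=100.0):
    """3-bit Jayant quantizer: the step size is scaled by the multiplier of the interval just used."""
    M = len(multipliers)                        # 8 output levels: indices 0..3 positive, 4..7 negative
    out = []
    for x in samples:
        mag = min(int(abs(x) / delta), M // 2 - 1)          # magnitude interval (clamped to outermost)
        k = mag if x >= 0 else mag + M // 2                 # interval index l(n-1)
        out.append((mag + 0.5) * delta * (1 if x >= 0 else -1))
        delta = min(max(delta * multipliers[k], d_min), d_max)   # Delta_n = M_{l(n-1)} * Delta_{n-1}
    return out

# multipliers as reconstructed from the example slide; the input sequence here is arbitrary
mults = [0.8, 1.0, 1.0, 1.2, 0.8, 1.0, 1.0, 1.2]
print(jayant_quantize([0.1, -0.2, 0.2, 0.5, 0.9, 1.5, 1.0, 0.2], mults))
```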

33 3-bit Jayant Quantizer Output Levels (figure)

34 Jayant Example
M_0 = M_4 = 0.8, M_1 = M_2 = M_5 = M_6 = 1.0, M_3 = M_7 = 1.2; the initial Δ and the input sequence are given on the slide (figure).

35 Picking Jayant Multipliers
Pick Δ_min / Δ_max to prevent underflow/overflow. The adaptation speed is affected by γ.
Stability requires the product of the applied multipliers to equal 1:
\prod_{k} M_k^{n_k} = 1
where n_k is the number of inputs (out of N) that fall in the k-th interval. Let M_k = \gamma^{l_k} with integer l_k and P_k = n_k / N; the condition becomes
\sum_{k} P_k \, l_k = 0

36 Jayant Example: ringing (figure)

37 Jayant Performance
Expands more rapidly than it contracts, to avoid overload errors.
Robust to changing input statistics.

38 Non-uniform Quantization
Idea: pick the boundaries such that the error is minimized, i.e., use smaller steps for smaller values and bigger steps for bigger values (example in the figure).

39 Non-uniform Quantization: pdf-Optimized Quantization
Problem: given f_X, minimize the MSQE
\sigma_q^2 = \sum_{i=1}^{M} \int_{b_{i-1}}^{b_i} (x - y_i)^2 f_X(x) \, dx
Set the derivative w.r.t. y_j to zero and solve for y_j:
y_j = \frac{\int_{b_{j-1}}^{b_j} x f_X(x) \, dx}{\int_{b_{j-1}}^{b_j} f_X(x) \, dx}
Set the derivative w.r.t. b_j to zero and solve for b_j:
b_j = \frac{y_{j+1} + y_j}{2}

40 Non-uniform Quantization: Lloyd-Max Algorithm
Observation: there is a circular dependency between the b_j and the y_j.
Lloyd/Max/Lukaszewicz/Steinhaus approach: solve the two sets of equations iteratively until an acceptable solution is found. Example: (figure)

41 Non-uniform Quantization: Lloyd-Max Algorithm (2)
Boundaries: {b_1, b_2, ..., b_{M/2-1}}, with b_0 = 0 and b_{M/2} = MAX_INPUT.
Reconstruction levels: {y_1, y_2, ..., y_{M/2}}.
y_1 = \frac{\int_{b_0}^{b_1} x f_X(x) \, dx}{\int_{b_0}^{b_1} f_X(x) \, dx}
One equation, two unknowns: b_1 and y_1. Pick a value for b_1 (e.g., b_1 = 1), solve for y_1, and continue:
y_2 = 2 b_1 - y_1, \qquad y_2 = \frac{\int_{b_1}^{b_2} x f_X(x) \, dx}{\int_{b_1}^{b_2} f_X(x) \, dx} \ \text{(solved for } b_2\text{)}
and so on until all {b_n} and {y_m} are found.

42 Non-uniform Quantization: Lloyd-Max Algorithm (3)
Terminating condition: |y_{M/2} - \hat{y}_{M/2}| \le \epsilon, where
\hat{y}_{M/2} = 2 b_{M/2-1} - y_{M/2-1}
y_{M/2} = \frac{\int_{b_{M/2-1}}^{b_{M/2}} x f_X(x) \, dx}{\int_{b_{M/2-1}}^{b_{M/2}} f_X(x) \, dx}
Else: pick a different b_1 and repeat.
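A sketch of the same pdf-optimized design, using the fixed-point form of the two conditions (levels at interval centroids, boundaries at level midpoints) instead of the slide's sweep over b_1; the Gaussian pdf, truncated support, grid, and iteration count are assumptions:

```python
import numpy as np

def lloyd_max(pdf, M=8, x_range=(-6.0, 6.0), n_iter=200, n_grid=100_001):
    """Iterate the centroid / midpoint conditions for an M-level pdf-optimized quantizer."""
    x = np.linspace(*x_range, n_grid)
    fx = pdf(x)
    y = np.linspace(x_range[0], x_range[1], M + 2)[1:-1]    # initial levels, uniformly spaced
    edges = None
    for _ in range(n_iter):
        b = (y[:-1] + y[1:]) / 2                            # b_j = (y_j + y_{j+1}) / 2
        edges = np.concatenate(([x_range[0]], b, [x_range[1]]))
        for j in range(M):                                  # y_j = centroid of [b_{j-1}, b_j]
            mask = (x >= edges[j]) & (x < edges[j + 1])
            w = fx[mask]
            if w.sum() > 0:
                y[j] = np.sum(x[mask] * w) / w.sum()
    return edges, y

gauss = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
boundaries, levels = lloyd_max(gauss)
print(np.round(levels, 3))    # symmetric about zero; compare against published Lloyd-Max tables
```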

43 Non-uniform Quantization Example: pdf-Optimized Quantizers. Significant improvement over the uniform quantizer (figure).

44 Lloyd-Max Quantizer Properties
1. The mean of the output equals the mean of the input.
2. The variance of the output is less than or equal to the variance of the input.
3. MSQE: \sigma_q^2 = \sigma_x^2 - \sum_{j=1}^{M} y_j^2 \, P[b_{j-1} < X \le b_j]
4. If N is a random variable representing the quantization error: E[XN] = \sigma_q^2.
5. The quantizer output and the quantization error are orthogonal (uncorrelated): E[Q(X) N \mid b_0, b_1, \ldots, b_M] = 0.

45 Mismatch Effects: 4-bit Laplacian pdf-optimized quantizer (figure)

46 Companded Quantization (CQ): compressor, followed by a uniform quantizer, followed by an expander.
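The slides' own compressor/expander curves are shown in the figures that follow; as an illustrative sketch only (not the slides' example), here is companded quantization with the standard μ-law characteristic, which compresses large magnitudes before a uniform quantizer and expands them afterwards:

```python
import numpy as np

MU = 255.0

def compress(x, x_max=1.0):
    """mu-law compressor: x_max * ln(1 + mu|x|/x_max) / ln(1 + mu) * sign(x)."""
    return x_max * np.log1p(MU * np.abs(x) / x_max) / np.log1p(MU) * np.sign(x)

def expand(y, x_max=1.0):
    """mu-law expander: inverse of the compressor."""
    return x_max / MU * np.expm1(np.abs(y) * np.log1p(MU) / x_max) * np.sign(y)

def uniform_q(x, n_bits=8, x_max=1.0):
    delta = 2 * x_max / 2 ** n_bits
    return np.clip((np.floor(x / delta) + 0.5) * delta, -x_max + delta / 2, x_max - delta / 2)

x = np.random.default_rng(2).laplace(0, 0.05, 100_000)    # source dominated by small amplitudes
y = expand(uniform_q(compress(x)))                        # compressor -> uniform quantizer -> expander
print("SNR (dB):", 10 * np.log10(np.var(x) / np.mean((x - y) ** 2)))
```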

47 CQ Example: Compressor (figure)

48 CQ Example: Uniform Quantizer (figure)

49 CQ Example: Expander (figure)

50 CQ Example: Equivalent Non-uniform Quantizer (figure)

51 Vector Quantization
Definition:
x = [x(1) x(2) ... x(N)] and y = [y(1) y(2) ... y(N)] are N-dimensional random vectors;
x(i), y(i), 1 ≤ i ≤ N, are real random variables.
The vector y has a special distribution in that it may only take one of L (deterministic) vector values in R^N.

52 Vector Quantization (c.1)
Vector quantization: y = Q(x).
The vector quantization of x may be viewed as a pattern recognition problem involving the classification of the outcomes of the random variable x into a discrete number of categories or cells in N-space, in a way that optimizes some fidelity criterion, such as mean square distortion.

53 Vector Quantization (c.2)
VQ distortion:
D = \sum_{k=1}^{L} P(x \in C_k) \, E\{ d(x, y_k) \mid x \in C_k \}
d(x, y_k) is typically a distance measure in R^N, such as the l_1, l_2, or l_∞ norm.
VQ optimization: minimize the average distortion D.

54 Vector Quantization (c.3)
Two conditions for optimality:
Nearest-neighbor selection:
Q(x) = y_k \iff d(x, y_k) \le d(x, y_j) \ \text{for } k \ne j, \ 1 \le j \le L \quad (\text{i.e., } x \in C_k)
Minimize the average distortion (centroid condition):
y_k = \arg\min_{y} E\{ d(x, y) \mid x \in C_k \} = \arg\min_{y} \int_{C_k} d(\xi, y) \, f_x(\xi) \, d\xi
These conditions are applied to partition the N-dimensional space into cells \{ C_k, 1 \le k \le L \} when the joint pdf f_x(\cdot) is known.
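A sketch that exercises both conditions: nearest-neighbor encoding against a codebook, followed by Lloyd-style centroid updates of the codebook from training vectors (the generalized Lloyd / LBG idea). The training data, dimension N = 2, and codebook size L = 16 are assumptions, not values from the slides:

```python
import numpy as np

def vq_encode(X, codebook):
    """Nearest-neighbor selection: assign each vector to its closest code vector (l2 distance)."""
    d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def centroid_update(X, labels, L):
    """Centroid condition: each y_k becomes the mean of the training vectors in its cell C_k."""
    return np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                     else X[k]                      # re-seed an empty cell with a training vector
                     for k in range(L)])

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 2))                          # N = 2 dimensional training vectors
codebook = X[rng.choice(len(X), 16, replace=False)]     # L = 16 initial code vectors
for _ in range(10):                                     # a few Lloyd iterations
    labels = vq_encode(X, codebook)
    codebook = centroid_update(X, labels, 16)
dist = np.mean(np.sum((X - codebook[vq_encode(X, codebook)]) ** 2, axis=1))
print("mean squared distortion:", dist)
```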

55 (figure)

56 Appendix C: Rate-Distortion Functions. Contents: Introduction; Rate-Distortion Function for a Gaussian Source; Rate-Distortion Bounds; Distortion Measure Methods.

57 1. Introduction
The question considered: given a source-user pair and a channel, under what conditions is it possible to design a communication system that reproduces the source output for the user with an average distortion that does not exceed some specified upper limit D?
Two quantities govern the answer: the capacity C of the communication channel, and the rate-distortion function R(D) of the source-user pair.
Rate-distortion function R(D): a communication system can be designed that achieves fidelity D if and only if the capacity of the channel that connects the source to the user exceeds R(D). R(D) is the lower limit for data compression that achieves a given fidelity, subject to a predetermined distortion measure.

58 1. Introduction (cont.)
Equations:
Distortion:
d(q) = \int\!\!\int p(x) \, q(y \mid x) \, \rho(x, y) \, dx \, dy
Mutual information:
I(q) = \int\!\!\int p(x) \, q(y \mid x) \log \frac{q(y \mid x)}{q(y)} \, dx \, dy
Rate-distortion function:
R(D) = \inf_{q \in Q_D} I(q), \qquad Q_D = \{ q(y \mid x) : d(q) \le D \}
\rho(x, y) is the distortion measure for the source word x = (x_1, ..., x_n) reproduced as y = (y_1, ..., y_n):
\rho_n(x, y) = \frac{1}{n} \sum_{t=1}^{n} \rho(x_t, y_t)
The family F_\rho = \{ \rho_n, 1 \le n < \infty \} is called the single-letter fidelity criterion generated by \rho.

59 2. Rate-Distortion Bounds (outline)
Introduction; Rate-Distortion Function for a Gaussian Source (R(D) for a memoryless Gaussian source; source coding with a distortion measure); Rate-Distortion Bounds; Conclusions.

60 3. Rate-Distortion Function for a Gaussian Source
R(D) for a memoryless Gaussian source: the minimum information rate (bits per sample) necessary to represent the output of a discrete-time, continuous-amplitude, memoryless stationary Gaussian source, based on an MSE distortion measure per symbol:
R_g(D) = \begin{cases} \frac{1}{2} \log_2 \left( \sigma_x^2 / D \right), & 0 \le D \le \sigma_x^2 \\ 0, & D > \sigma_x^2 \end{cases}

61 3. Rate-Distortion Function for a Gaussian Source (c.1)
Source coding with a distortion measure (Shannon, 1959): there exists a coding scheme that maps the source output into codewords such that, for any given distortion D, the minimum rate R(D) bits per sample is sufficient to reconstruct the source output with an average distortion that is arbitrarily close to D.
Transforming R(D) into the distortion-rate function D(R):
D_g(R) = 2^{-2R} \sigma_x^2
Expressed in dB:
10 \log_{10} D_g(R) = -6R + 10 \log_{10} \sigma_x^2
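A quick numeric check of the distortion-rate function above; a sketch assuming a unit-variance Gaussian source:

```python
import math

sigma2 = 1.0                                   # source variance sigma_x^2
for R in range(1, 6):                          # rate in bits per sample
    D = (2 ** (-2 * R)) * sigma2               # D_g(R) = 2^{-2R} * sigma_x^2
    print(f"R = {R}: D_g(R) = {D:.5f}  ({10 * math.log10(D):6.2f} dB, about -6 dB per extra bit)")
```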

62 3. Rate-Distortion Function for a Gaussian Source (c.2): comparison between different quantizers (figure)

63 4. Rate-Distortion Bounds
Source: memoryless, continuous-amplitude source with zero mean and finite variance \sigma_x^2, with respect to the MSE distortion measure.
Upper bound: by the theorem of Berger (1971), the Gaussian source requires the maximum rate among all sources for a specified level of mean square distortion:
R(D) \le R_g(D) = \frac{1}{2} \log_2 \left( \sigma_x^2 / D \right), \quad 0 \le D \le \sigma_x^2
D(R) \le D_g(R) = 2^{-2R} \sigma_x^2

64 4. Rate-Distortion Bounds (c.1)
Lower bound (Shannon lower bound):
R^*(D) = H(x) - \frac{1}{2} \log_2 (2 \pi e D)
D^*(R) = \frac{1}{2 \pi e} \, 2^{2 H(x)} \, 2^{-2R}
where H(x) \overset{\text{def}}{=} -\int f_x(\xi) \log f_x(\xi) \, d\xi is the differential entropy.
For a Gaussian source, H(x) = \frac{1}{2} \log_2 (2 \pi e \sigma_x^2), so R^*(D) = R_g(D) and D^*(R) = D_g(R).
In general: R_g(D) \ge R(D) \ge R^*(D) and D_g(R) \ge D(R) \ge D^*(R).

65 4. Rate-Distortion Bounds (c.2)
For a Gaussian source, the rate-distortion function, the upper bound, and the lower bound are all identical.
The bounds in terms of differential entropy:
10 \log_{10} D_g(R) - 10 \log_{10} D^*(R) = 6 \, [H_g(x) - H(x)]
R_g(D) - R^*(D) = H_g(x) - H(x)
The differential entropy H(x) is upper bounded by H_g(x).

66 4. Rate-Distortion Bounds (c.3)
Relating the rate-distortion function R(D) to the channel capacity C:
For C > R_g(D): the fidelity D can be achieved.
For R(D) < C < R_g(D): the fidelity is achieved for the stationary source, but may not be achieved for a random source.
For C < R(D): we cannot be sure of achieving fidelity D.

67 4. Rate-Distortion Bounds (c.4)

68 5. Distortion Measure Methods
Generalized Gaussian pdf:
GG(m, \sigma^2, r) = k \exp \left( - | c (x - m) |^r \right), \quad k = \frac{r c}{2 \Gamma(1/r)}, \quad c = \frac{1}{\sigma} \sqrt{ \frac{\Gamma(3/r)}{\Gamma(1/r)} }
For different r:  r = 1: Laplacian pdf;  r = 2: Gaussian pdf;  r → 0: constant pdf;  r → ∞: uniform pdf.
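A sketch evaluating the generalized Gaussian density as reconstructed above, checking that r = 1 and r = 2 reduce to the unit-variance Laplacian and Gaussian shapes (SciPy is assumed for the gamma function; the grid is arbitrary):

```python
import numpy as np
from scipy.special import gamma

def gg_pdf(x, m=0.0, sigma=1.0, r=2.0):
    """Generalized Gaussian density k * exp(-|c (x - m)|^r)."""
    c = (1.0 / sigma) * np.sqrt(gamma(3.0 / r) / gamma(1.0 / r))
    k = r * c / (2.0 * gamma(1.0 / r))
    return k * np.exp(-np.abs(c * (x - m)) ** r)

x = np.linspace(-4, 4, 9)
print(np.allclose(gg_pdf(x, r=2.0), np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)))        # Gaussian, sigma = 1
print(np.allclose(gg_pdf(x, r=1.0), np.exp(-np.sqrt(2) * np.abs(x)) / np.sqrt(2)))  # Laplacian, sigma = 1
```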

69 Problems
Homework (Sayood, 3rd ed., pp. ): problems 3 and 6.
References: J.R. Deller, J.G. Proakis, and J.H.L. Hansen, Discrete-Time Processing of Speech Signals, IEEE Press.
