Multimedia Systems Giorgio Leonardi A.A Lecture 4 -> 6 : Quantization


1 Multimedia Systems Giorgio Leonardi A.A Lecture 4 -> 6 : Quantization

2 Overview Course page (D.I.R.): Consulting: office hours by appointment: Office #182 (in front of Sala Seminari). Email me any time

3 Outline (of the following lectures) Discrete signals: quantization Linear quantization Mid-riser quantization Mid-tread quantization Non-linear quantization µ-law A-law Quantization error, noise and proper quantization settings Vector quantization Storage of discrete digital signals: tradeoff between quality and space Reconstruction of discrete digital signals

4 Architecture of an A/D converter Sample and hold Quantizer Sampling clock An analog/digital converter is a device which samples the input signal at fixed time intervals and produces the corresponding digital version. An ADC is composed of: A low-pass (or band-pass) filter, to filter out part of the analog signal's noise A clock, regulating the sampling intervals A quantizer, transforming the sampled data into discrete values

5 Architecture of an A/D converter We want to transform an analog signal into a discrete and digital signal. Sampling is the step of transforming the analog signal into a discrete one, using a properly chosen sampling frequency fs. Quantization is the step of digitizing the sampled values into digital codewords, representing discretized amplitude values. The final output is a digital signal, discretized both in time and amplitude

6 Quantizer Like sampling, quantization is a lossy operation: since amplitude values (real, continuous values) are quantized into discrete levels, the original values are lost. Different types of quantizers exist: Scalar quantizers: Uniform (mid-riser, mid-tread) Non-uniform (A-law, µ-law) Vector quantizers Adaptive quantizers

7 Scalar quantizers

8 Definition of a Scalar Quantizer Scalar quantization maps a scalar input value x to a scalar value y_q by a function Q: y_q = Q(x), q = 0, 1, 2, ..., M-1 Where: q is called the quantization index M is the number of quantization levels y_q is called the reconstruction (or quantizing) level Q is called the input-output function (or characteristic function)

9 Scalar quantization A scalar quantizer partitions the codomain of a signal into M subsets, called quantization regions. Each interval I_q, q = 0, 1, ..., M-1, is represented by: An integer quantization index q A quantization level, also called reconstruction level, y_q A binary codeword [Figure: quantization boundaries, reconstruction levels, quantization regions, binary codewords and quantization indexes]

10 Scalar quantization The scalar quantizer processes one sample at a time, and substitutes each sample's value with the reconstruction level of the quantization region it falls in. Usually, reconstruction levels are in the middle of the intervals. [Figure: quantizer with reconstruction levels at ±4.5V, ±3V, ±1.5V, with quantization indexes and binary codewords]

11 Scalar quantization Sampling and quantization will generate the following sequences: reconstruction levels, quantization indexes and binary codewords. [Figure: quantizer with reconstruction levels at ±4.5V, ±3V, ±1.5V, applied to a sampled signal]

12 Scalar quantization (possible) reconstruction from samples. [Figure: signal reconstructed from the reconstruction levels, quantization indexes and binary codewords of the previous slide]

13 Scalar quantization Summarizing, given a value x of the input signal, scalar quantization maps it into a new value: y_q = Q(x) if and only if x ∈ I_q = [x_q, x_q+1) The I_q = [x_q, x_q+1), q = 0, ..., M-1, are M non-overlapping intervals The M+1 values x_q, q = 0, 1, ..., M, are called (boundary) decision levels If the input signal is unbounded, x_0 = -∞ and x_M = +∞ The length Δ_q = x_q+1 - x_q of each I_q is called the step size

14 Scalar quantization For each input value x, the function Q performs three stages: 1. Classification: finds the value q such that x ∈ I_q 2. Reconstruction: given the classification q, maps x into the reconstruction level y_q ∈ I_q 3. Encoding: the reconstruction levels are encoded as binary numbers

15 Scalar quantization Classification: the amplitude range of the input signal is classified into the correct interval I_q = [x_q, x_q+1), finding the correct value of q

16 Scalar quantization Reconstruction: each interval I_q is represented by a reconstruction level y_q ∈ I_q, which implements the mapping y_q = Q(x), x ∈ I_q

17 Scalar Quantization Encoding: each reconstruction level y_q is represented by one of M R-bit binary numbers; the number of bits R is called the quantization rate. In case of fixed-length binary numbers, R = ⌈log_2 M⌉

18 Quantization error Quantization causes loss of information: the reconstructed quantized value obtained by applying Q(x) is different from the input x. This difference is called the quantization error e_q(x): e_q(x) = x - Q(x) It is also referred to as quantization distortion or quantization noise

19 Quantization error In audio, it is commonly perceived as background noise

20 Wanna hear it?

21 Quantization error In images, it causes a contouring effect. [Figure: original image, and quantization at 2, 4 and 8 levels]

22 Quantization error The quantization error for a sample x generates two types of noise: Granular quantization noise: caused when x lies inside an interval I_q Overload quantization noise: caused when x lies beyond the designed quantization boundaries [Figure: granular and overload quantization noise on a quantizer with reconstruction levels at ±4.5V, ±3V, ±1.5V]

23 Overload quantization noise It is not always bad: it can be due to a bad design of the quantizer, but it could also be a precise design choice: Bad design: the signal's main shape lies beyond the quantizing region Not necessarily a bad choice: the very few spots over the quantization region can be considered outliers

24 Example/Exercise For the following sequence: {1.2, -0.2, -0.5, 0.4, 0.89, 1.3, 0.7} quantize it using a quantizer dividing the range (-1.5, 1.5) into 4 equal levels, and write: the quantized sequence; the binary codewords; and the quantization error for each sample

25 Example/Exercise Sequence to be quantized: {1.2, -0.2, -0.5, 0.4, 0.89, 1.3, 0.7} Suggestion: 1.2 falls between 0.75 and 1.5, and hence is quantized to 1.125, with codeword 11 and quantization error: e1 = x - Q(x) = 1.2 - 1.125 = 0.075
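The exercise above can be sketched in Python. This is a minimal sketch: the helper name `uniform_quantize` and the clamping of overload samples to the outermost intervals are my assumptions, not part of the slides.

```python
# Sketch of the 4-level uniform quantizer from the exercise.
# (Helper name and overload clamping are assumptions, not the slides'.)
def uniform_quantize(x, x_min=-1.5, x_max=1.5, M=4):
    """Return (index q, reconstruction level y_q, codeword) for sample x."""
    delta = (x_max - x_min) / M                    # step size
    q = int((x - x_min) // delta)                  # classification
    q = max(0, min(M - 1, q))                      # clamp overload samples
    y_q = x_min + (q + 0.5) * delta                # mid-interval level
    bits = len(bin(M - 1)) - 2                     # R bits (M a power of two)
    return q, y_q, format(q, f"0{bits}b")

samples = [1.2, -0.2, -0.5, 0.4, 0.89, 1.3, 0.7]
for x in samples:
    q, y, code = uniform_quantize(x)
    print(f"x={x:+.2f}  q={q}  y_q={y:+.3f}  code={code}  e_q={x - y:+.3f}")
```

Running it reproduces the suggestion on the slide: 1.2 lands in the top interval and is reconstructed as 1.125 with codeword 11.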

26 Types of scalar quantizers Scalar quantizers can be classified into: Uniform quantizers: all the intervals I_q have the same size Non-uniform quantizers: the size of the intervals I_q can be adapted to match the signal shape

27 Uniform quantizers

28 Uniform quantization For amplitude-bounded signals with amplitude in the interval S = [X_min, X_max], S is split into M uniform non-overlapping intervals I_q = [x_q, x_q+1), q = 0, ..., M-1 Size of I_q: Δ = (X_max - X_min) / M Q(x_q+1) - Q(x_q) = x_q+1 - x_q = Δ, q = 0, ..., M-1

29 Quantization function The quantization function is shown as follows: [Figure: staircase input-output characteristic Q(x), with reconstruction levels y_q on the output axis, input level x from X_min to X_max, and the tread and riser of the staircase marked; example decision boundary: I_q = [Δ/2, 3Δ/2)]

30 Uniform quantizers On the basis of the quantization function, different uniform quantizers exist. For signals ranging in [-X_max, X_max]: Mid-riser Mid-tread A special case exists for signals ranging in [0, X_max]

31 Mid-Tread quantizer To be used when M is odd The output scale is centered on the value Q(0) = 0: it can code the value 0 in output

32 Mid-Tread quantizer Δ = (X_max - X_min) / M Q(x) = sgn(x) ⌊|x|/Δ + 1/2⌋ Δ Index q: q = sgn(x) ⌊|x|/Δ + 1/2⌋ Reconstruction level y_q: y_q = q Δ Binary code: binary code of q (in 2's complement) (e.g. q = 2 → 010, q = -2 → 110)

33 Exercise: Mid-tread Quantization Given a signal g whose amplitude varies in the range [X_min, X_max] = [-1.5, 1.5] and the following signal values: {1.2, -0.2, -0.5, 0.4, 0.89, 1.3} quantize it with 5 levels (3 bits) and write down: The index values The reconstruction levels The binary codes of the index values The quantization errors

34 Exercise: Mid-tread Quantization M = 5, Δ = (X_max - X_min)/M = (1.5 - (-1.5))/5 = 0.6 For x = 1.2: q = sgn(x) ⌊|x|/Δ + 1/2⌋ = ⌊1.2/0.6 + 0.5⌋ = ⌊2.5⌋ = 2 y_q = q Δ = 2 · 0.6 = 1.2 Binary code: q = 2 → BC = 010 Quantization error: e_q = x - Q(x) = 1.2 - 1.2 = 0
Index q | Binary code | Reconstruction level | Boundary levels
-2 | 110 | -1.2 | [-1.5, -0.9)
-1 | 111 | -0.6 | [-0.9, -0.3)
 0 | 000 |  0   | [-0.3, 0.3)
 1 | 001 |  0.6 | [0.3, 0.9)
 2 | 010 |  1.2 | [0.9, 1.5)
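The mid-tread formulas above can be sketched as follows; the helper name `mid_tread` and the clamping of overload samples are my assumptions, the formulas are the slides'.

```python
import math

# Mid-tread quantizer sketch (M odd), following the slides' formulas.
def mid_tread(x, x_max=1.5, M=5, bits=3):
    delta = 2 * x_max / M                          # (X_max - X_min) / M
    q = int(math.copysign(math.floor(abs(x) / delta + 0.5), x))
    q = max(-(M // 2), min(M // 2, q))             # clamp to valid indices
    y_q = q * delta                                # reconstruction level
    code = format(q & (2**bits - 1), f"0{bits}b")  # two's-complement codeword
    return q, y_q, code

for x in [1.2, -0.2, -0.5, 0.4, 0.89, 1.3]:
    q, y, code = mid_tread(x)
    print(f"x={x:+.2f}  q={q:+d}  y_q={y:+.2f}  code={code}  e_q={x - y:+.2f}")
```

Note that `q & (2**bits - 1)` is a compact way of printing a negative index in 3-bit two's complement (e.g. -1 → 111).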

35 Mid-Riser quantizer To be used when M is even No reconstruction level equal to 0 in output

36 Mid-Riser quantizer Δ = (X_max - X_min) / M Q(x) = (⌊x/Δ⌋ + 1/2) Δ Index q: q = ⌊x/Δ⌋ Reconstruction level y_q: y_q = (q + 1/2) Δ Binary code: binary code of q (in 2's complement) (e.g. q = 2 → 010, q = -2 → 110)

37 Exercise: Mid-riser Quantization Given a signal g whose amplitude varies in the range [X_min, X_max] = [-1.5, 1.5] and the following signal values: {1.2, -0.2, -0.5, 0.4, 0.89, 1.3} quantize it with 6 levels (how many bits?) and write: The index values The reconstruction levels The binary codes of the index values The quantization errors

38 Exercise: Mid-riser Quantization M = 6, Δ = (X_max - X_min)/M = (1.5 - (-1.5))/6 = 0.5
Index q | Binary code | Reconstruction level | Boundary levels
-3 | 101 | -1.25 | [-1.5, -1)
-2 | 110 | -0.75 | [-1, -0.5)
-1 | 111 | -0.25 | [-0.5, 0)
 0 | 000 |  0.25 | [0, 0.5)
 1 | 001 |  0.75 | [0.5, 1)
 2 | 010 |  1.25 | [1, 1.5)
For x = 1.2: q = ⌊x/Δ⌋ = ⌊1.2/0.5⌋ = ⌊2.4⌋ = 2 y_q = (q + 1/2) Δ = 2.5 · 0.5 = 1.25 Binary code: q = 2 → BC = 010 Quantization error: e_q = x - Q(x) = 1.2 - 1.25 = -0.05
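The mid-riser variant differs only in the index and reconstruction formulas; a sketch under the same assumptions as before (helper name and overload clamping are mine):

```python
import math

# Mid-riser quantizer sketch (M even), following the slides' formulas.
def mid_riser(x, x_max=1.5, M=6, bits=3):
    delta = 2 * x_max / M                          # step size
    q = math.floor(x / delta)                      # classification
    q = max(-M // 2, min(M // 2 - 1, q))           # clamp to valid indices
    y_q = (q + 0.5) * delta                        # mid-interval level
    code = format(q & (2**bits - 1), f"0{bits}b")  # two's-complement codeword
    return q, y_q, code

for x in [1.2, -0.2, -0.5, 0.4, 0.89, 1.3]:
    q, y, code = mid_riser(x)
    print(f"x={x:+.2f}  q={q:+d}  y_q={y:+.3f}  code={code}  e_q={x - y:+.3f}")
```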

39 Special case for positive signals Δ = X_max / M Q(x) = (⌊x/Δ⌋ + 1/2) Δ Index q: q = ⌊x/Δ⌋ Reconstruction level y_q: y_q = (q + 1/2) Δ Binary code: binary code of q (unsigned) (e.g. q = 2 → 10)

40 How to choose the correct value of M?

41 Performance criteria Our goal is to minimize the overall difference between the quantized samples and the original (continuous) values. We must find a measure which can predict a correct value of M on the basis of the signal properties Common performance criteria: Mean-square error (MSE) Signal-to-noise ratio (SNR)

42 Mean-square error Quantifies the difference between the values produced by the M-level quantizer Q(x) and the real values x: MSE_q = Σ_{i=0..M-1} ∫_{x_i}^{x_i+1} (x - Q(x))² f(x) dx where f(x) is the pdf (probability density function) of the input signal. Sometimes, MSE is referred to as the average distortion D, with (x - Q(x))² being called the distortion d(x)

43 Signal-to-noise ratio Measures the strength of the signal with respect to the background noise: SNR = P_signal / P_noise = (A_signal / A_noise)² Where: P_signal and P_noise are the RMS (Root Mean Square) power of the signal and the noise, respectively A_signal and A_noise are the RMS amplitude of the signal and the noise, respectively Usually expressed in decibels

44 Decibel The decibel (dB) is a logarithmic unit that indicates the ratio of a physical quantity P (usually power or intensity) relative to a given reference level P_0: ratio in dB = 10 log_10(P / P_0) Formally, a decibel is one tenth of a bel (B): 1 B = 10 dB

45 Signal-to-noise ratio Signal-to-Noise Ratio (SNR or S/N) in decibels: SNR_dB = 10 log_10(P_signal / P_noise) = 20 log_10(A_signal / A_noise)

46 Signal-to-Quantization-Noise Ratio (SQNR): measures the strength of the sampled signal with respect to the quantization error: SQNR = σ_G² / MSE_q Where: σ_G² is the variance of the input signal probability distribution, with pdf f_G and mean μ_G MSE_q is the mean-square quantization error

47 Signal-to-Quantization-Noise Ratio Signal-to-Quantization-Noise Ratio (SQNR) in decibels: SQNR_dB = 10 log_10(σ_G² / MSE_q)

48 Choice of a good quantizer Given the properties introduced, a quantizer can be considered «good» if: its MSE_q approaches zero, or its SQNR is very high

49 Measures for uniform quantizers Uniformly distributed input We are going to analyze the MSE_q and SQNR of the noise generated by uniformly distributed input in uniform quantizers. This noise has a «sawtooth» shape, with amplitude in the interval e_q ∈ [-Δ/2, +Δ/2] [Figure: sawtooth noise shape proper of uniform quantizers]

50 MSE_q for uniformly distributed input MSE_q = Σ_{i=0..M-1} ∫_{x_i}^{x_i+1} (x - Q(x))² f(x) dx In case of uniformly distributed input, the pdf is constant: f(x) = 1/(MΔ) over the whole range of width MΔ. Substituting, each of the M intervals contributes the same integral of the squared error e = x - y_i over [-Δ/2, +Δ/2]: MSE_q = M · (1/(MΔ)) ∫_{-Δ/2}^{+Δ/2} e² de = (1/Δ) [e³/3]_{-Δ/2}^{+Δ/2}

51 MSE_q for uniformly distributed input MSE_q = Δ² / 12

52 SQNR for uniformly distributed input In case of a uniformly distributed signal over [-X_max, X_max], the pdf f_g(x) has variance: σ_G² = (2 X_max)² / 12 = (MΔ)² / 12 Substituting: SQNR = σ_G² / MSE_q = ((MΔ)²/12) / (Δ²/12) = M² SQNR_dB = 10 log_10 M² = 20 log_10 M With R = log_2 M, i.e. M = 2^R: SQNR_dB = 20 log_10 2^R = 20 R log_10 2 ≈ 6.02 R dB
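The 6.02 R dB rule can be verified numerically. This simulation sketch (the helper `sqnr_db` is my construction, not the slides') quantizes uniformly distributed samples with a uniform quantizer and measures the empirical SQNR:

```python
import random, math

# Numerical check of SQNR_dB ≈ 6.02·R for uniformly distributed input.
def sqnr_db(R, n=200_000, x_max=1.0, seed=1):
    rng = random.Random(seed)
    M = 2 ** R
    delta = 2 * x_max / M
    sig = err = 0.0
    for _ in range(n):
        x = rng.uniform(-x_max, x_max)
        q = min(M - 1, int((x + x_max) // delta))   # classification
        y = -x_max + (q + 0.5) * delta              # reconstruction level
        sig += x * x
        err += (x - y) ** 2
    return 10 * math.log10(sig / err)

for R in (4, 8, 12):
    print(f"R={R:2d}: SQNR = {sqnr_db(R):6.2f} dB (theory {6.02 * R:6.2f} dB)")
```

Each extra bit of quantization rate buys roughly 6 dB of SQNR, which is why the rule is often quoted as "6 dB per bit".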

53 Choice of a good quantizer

54 Sampling parameters for CD quality For what concerns the sampling frequency: recall that the spectrum of signals a human can hear is approximately the range [20 Hz, 20 kHz]. What frequency would you choose to sample the music, given this information?

55 Sampling parameters for CD quality For what concerns quantization bit depth, here is an excerpt from an audiophile book: Signal To Noise Ratio (SN or SNR) - measured in dB, a measure of the ratio between the wanted signal - the music being recorded and played back - against unwanted noise introduced by the reproduction system - tape hiss, vinyl noise, turntable rumble, etc. For comfortable listening, say a good amplifier, this should be at least 100dB. Audio Cassette (AC) was typically around 40-50dB, perhaps 60dB using Dolby. By contrast, CD systems should be as good as an amplifier, around 100dB, although scratches or other damage to CDs and wear in the player can both cause jumps in the sound that are similar in effect to scratches on vinyl. What bit depth R would you choose to quantize music for a CD, given this information? And if you were quantizing for audio cassette quality?

56 Non-uniform quantization

57 Uniform/Non-uniform Quantization

58 Non-uniform quantization Problems with uniform quantization: It is only optimal for uniformly distributed signals Real audio signals (speech and music) are more concentrated near zero The human ear is more sensitive to quantization errors at small values Solution: use non-uniform quantization, where the quantization interval is smaller near zero

59 Non-uniform quantizers Quantizing intervals are not uniform: in general, Δ_i ≠ Δ_j for i ≠ j Suitable when the input signal is not uniformly distributed, e.g. Gaussian distribution, ...

60 Non-uniform quantizers A non-uniform quantizer can be designed as: A pre-defined function Q(x), written to map input and output using different quantization intervals, or: A companding quantizer

61 Companding Quantization A compander consists of three stages: Compression Uniform quantization Expansion

62 Companding quantization Compression: the input signal is compressed with a nonlinear characteristic C (e.g., C(x) = log(x)) Uniform quantization: the compressed signal is quantized with the uniform characteristic Q Expansion: the uniformly quantized signal is expanded inversely with the expansion characteristic E = C⁻¹

63 Companding quantization [Figure: compression, uniform quantization and expansion characteristics, and the resulting quantizer]

64 Companding quantization The result of this process is a non-uniform quantizer: Q_n(x) = E(Q(C(x)))

65 Companding vs non-linear Q(x) Non-linear Q(x): Faster than companding: the function Q(x) is immediately applicable to the input x Design is not straightforward: we need algorithms for adaptive quantization Companding quantizer: Not faster than non-linear Q(x): logarithmic functions must be applied to transform x But look-up tables can be built to speed up calculations Design and implementation are simpler: modular architecture, we just need to configure the function C(x); the rest is a classic uniform quantizer

66 Companding quantizer: the µ-law µ-law is a configurable compression function used in North America and Japan for telecommunications It generates different non-uniform quantizers by changing the value of µ (N.A. and Japan use µ = 255)

67 µ-law quantizer Compression: compress the sample value x with the formula: y = C(x) = sgn(x) X_max log(1 + µ|x|/X_max) / log(1 + µ) Uniform quantization: quantize y with a uniform R-bit quantizer: y' = Q(y) Expansion: transform back the value of y' using the inverse formula: x' = C⁻¹(y') = sgn(y') (X_max/µ) (10^{(|y'|/X_max) log_10(1+µ)} - 1) = sgn(y') (X_max/µ) ((1+µ)^{|y'|/X_max} - 1)

68 Exercise For the following sequence {1.2, -0.2, -0.5, 0.4, 0.89, 1.3}, quantize it using a µ-law quantizer (with µ = 9), in the range [-1.5, 1.5] with 4 levels, and write: the quantized sequence the sequence of binary codes the quantization error for each sample Solution (indirect method): apply the inverse formula to the partition and reconstruction levels found for the uniform quantizer at 4 levels. Because the µ-law mapping is symmetric, we only need to find the inverse values for y = 0.375, 0.75, 1.125. With µ = 9 and X_max = 1.5: 0.375 → 0.1297, 0.75 → 0.3604, 1.125 → 0.7706. Then quantize each sample using the above partition and reconstruction levels.

69 Exercise x' = C⁻¹(y') = sgn(y') (X_max/µ) (10^{(|y'|/X_max) log_10(1+µ)} - 1) For example: C⁻¹(1.125) = (1.5/9) (10^{(1.125/1.5) log_10(10)} - 1) = (1/6) (10^0.75 - 1) = (1/6) · 4.6234 ≈ 0.7706
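The whole compress, uniformly quantize, expand pipeline from the exercise can be sketched as follows; the function names are mine, the formulas are the µ-law pair from the slides.

```python
import math

# µ-law companding quantizer sketch: compress -> uniform quantize -> expand.
def mu_compress(x, mu=9.0, x_max=1.5):
    return math.copysign(
        x_max * math.log(1 + mu * abs(x) / x_max) / math.log(1 + mu), x)

def mu_expand(y, mu=9.0, x_max=1.5):
    return math.copysign(
        (x_max / mu) * ((1 + mu) ** (abs(y) / x_max) - 1), y)

def mu_law_quantize(x, mu=9.0, x_max=1.5, M=4):
    delta = 2 * x_max / M                      # uniform step in compressed domain
    y = mu_compress(x, mu, x_max)
    q = max(0, min(M - 1, int((y + x_max) // delta)))
    y_q = -x_max + (q + 0.5) * delta           # level in compressed domain
    return mu_expand(y_q, mu, x_max)           # expand back

print(round(mu_expand(1.125), 4))    # inverse of the uniform level 1.125
print(round(mu_law_quantize(1.2), 4))
```

`mu_expand` applied to the uniform reconstruction levels 0.375, 0.75 and 1.125 reproduces the values 0.1297, 0.3604 and 0.7706 used in the indirect method.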

70 µ-law quantizer [Figure: comparison of a uniform quantizer (M = 6) and a µ-law quantizer (M = 6, µ = 16)]

71 A-Law companding quantizer A-law is the compression function used in Europe for telecommunications When connecting international systems, A-law becomes the standard over µ-law It generates different non-uniform quantizers by changing the value of the compression parameter A (in Europe, A = 87.7, or A = 87.6) Same principles as µ-law, using the following compression function (and its inverse for decompression): C(x) = sgn(x) A|x| / (1 + ln A) for 0 ≤ |x|/X_max < 1/A, C(x) = sgn(x) X_max (1 + ln(A|x|/X_max)) / (1 + ln A) for 1/A ≤ |x|/X_max ≤ 1

72 Exercise (for home!) For the following sequence {1.2, -0.2, -0.5, 0.4, 0.89, 1.3}, quantize it using an A-law quantizer (with A = 87.7), in the range [-1.5, 1.5] with 4 levels, and write: the quantized sequence the sequence of binary codes the quantization error for each sample Solution (indirect method): apply the inverse formula to the partition and reconstruction levels found for the uniform quantizer at 4 levels. Because the A-law mapping is symmetric, we only need to find the inverse values for y = 0.375, 0.75, 1.125. Then quantize each sample using the above partition and reconstruction levels. A = 87.7, X_max = 1.5: 0.375 → ?, 0.75 → ?, 1.125 → ?

73 Adaptive Quantization

74 Adaptive quantization Adaptive quantization allows non-uniform quantizers to decrease the average distortion by assigning more levels to more probable regions. For a given M and input pdf, we need to choose {x_i} and {y_i} to minimize the distortion

75 Lloyd-Max Scalar Quantizer Also known as the pdf-optimized quantizer For a given M, to reduce the MSE (σ_q²), we want narrow regions where f(x) is high and wider regions where f(x) is low Given M, the optimal {x_i} and {y_i} that minimize the MSE satisfy the Lagrangian condition: ∂σ_q²/∂x_i = 0 and ∂σ_q²/∂y_i = 0 [Figure: pdf with decision levels x_0 ... x_5 delimiting narrow and wide regions]

76 Lloyd-Max Scalar Quantizer Solving the differential equations, we obtain the Lloyd-Max conditions: y_i = ∫_{x_i-1}^{x_i} x f(x) dx / ∫_{x_i-1}^{x_i} f(x) dx (the centroid of each region) x_i = (y_i + y_i+1) / 2 (the midpoint between neighboring levels) Given {x_i}, it is easy to calculate {y_i}; given {y_i}, it is easy to calculate {x_i} Problem: how can we calculate {x_i} and {y_i} simultaneously? {x_i} depends on {y_i}; at the same time {y_i} depends on {x_i} Solution: iterative method

77 The Lloyd Algorithm An iterative algorithm known as the Lloyd algorithm solves the problem by iteratively optimizing the encoder and decoder until both conditions are met with sufficient accuracy.

78 Lloyd-Max Scalar Quantizer Input: threshold ε Output: values of {y_i} and {x_i} 1. Initialize all y_i; let j = 1; d_0 = +∞ (distortion). 2. Update all decision levels: x_i = (y_i + y_i+1) / 2 3. Update all y_i: y_i = ∫_{x_i-1}^{x_i} x f(x) dx / ∫_{x_i-1}^{x_i} f(x) dx 4. Compute the MSE (d_j) 5. If (d_j-1 - d_j) / d_j-1 < ε, stop; otherwise, set j = j + 1 and go to step 2
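The iteration above can be sketched on training data rather than a known pdf: replacing the integrals with sample averages gives the empirical Lloyd algorithm (this sample-based formulation, and the helper name `lloyd`, are my assumptions; the alternating centroid/midpoint updates are the slides').

```python
import random

# Empirical Lloyd algorithm sketch: pdf integrals become sample averages.
def lloyd(samples, M=4, eps=1e-6, max_iter=100):
    y = sorted(random.Random(0).sample(samples, M))     # 1. initialize levels
    d_prev = float("inf")
    for _ in range(max_iter):
        x = [(a + b) / 2 for a, b in zip(y, y[1:])]     # 2. decision levels
        cells = [[] for _ in range(M)]
        for s in samples:                               # classify each sample
            cells[sum(s >= b for b in x)].append(s)
        y = [sum(c) / len(c) if c else y[i]             # 3. centroids
             for i, c in enumerate(cells)]
        d = sum(min((s - l) ** 2 for l in y) for s in samples) / len(samples)
        if (d_prev - d) / d_prev < eps:                 # 5. stopping rule
            return sorted(y), d
        d_prev = d
    return sorted(y), d

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(5000)]
levels, mse = lloyd(data, M=4)
print([round(l, 3) for l in levels], round(mse, 4))
```

For a unit Gaussian the 4-level Lloyd-Max quantizer is known to reach an MSE of about 0.12, noticeably better than a uniform quantizer with the same M.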

79 Adaptive quantization Adaptive quantization can be used to reduce the number of colors of an image to a finite number (e.g. 256 for GIF). [Figure: image reduced to 7 colors]

80 Vector quantization

81 Vector quantization Quantization can be extended to more than one dimension: A scalar quantizer quantizes one sample at a time A vector quantizer quantizes a vector of N samples into a vector of N quantized values Useful when: Input signals show a strong correlation to the patterns stored in the quantizer vectors The sampled signal has a codomain with dimension > 1, e.g. R³ Mostly used for: Color quantization: quantize all colors appearing in an image to L (few) colors Image quantization: quantize every N×N block into one of the L typical patterns (obtained through training)

82 Vector quantization The encoder is loaded with a codebook: a list of predetermined quantizer vectors, each one with an associated index The same codebook is loaded in the decoder

83 Vector quantization Encoding: given an input vector of N consecutive signal samples, the encoder searches for the most similar vector in the codebook The corresponding index is stored as the output

84 Vector quantization Decoding: the decoder receives the encoded index, and retrieves the corresponding vector in the codebook as reconstruction levels

85 Vector quantization The performance of VQ is asymmetric: Encoding is a heavy process: it is not straightforward to find the closest vector in the codebook Decoding is very easy: just the retrieval of an indexed vector

86 Encoding performance Encoding: comparing an input vector I_v to all the vectors in the codebook (V_1, ..., V_M), to find the closest one (1-NN) We must calculate the distance between I_v and all the M codebook vectors Distance measure: usually the Euclidean distance. Given two vectors V1 = <x_1, x_2, ..., x_n> and V2 = <y_1, y_2, ..., y_n>: d(V_1, V_2) = sqrt(Σ_{i=1..N} (x_i - y_i)²) This is also the quantization error!
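The brute-force 1-NN search described above can be sketched as follows. The codebook values here are invented for illustration (the slides' codebook table did not survive extraction), and the helper names are mine:

```python
import math

# Vector-quantization encoding sketch: exhaustive nearest-neighbor search
# over a small illustrative codebook (values are made up, not the slides').
CODEBOOK = [
    (1.0, 2.0, 3.0),   # index 0
    (2.0, 2.0, 2.0),   # index 1
    (4.0, 4.0, 2.0),   # index 2
    (1.0, 4.0, 4.0),   # index 3
]

def vq_encode(v):
    """Return (best index, Euclidean distance to it) by brute-force 1-NN."""
    best_q, best_d = None, float("inf")
    for q, c in enumerate(CODEBOOK):
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(v, c)))
        if d < best_d:
            best_q, best_d = q, d
    return best_q, best_d

def vq_decode(q):
    return CODEBOOK[q]        # Θ(1): a simple table lookup

idx, err = vq_encode((1.0, 2.0, 2.5))
print(idx, round(err, 3))
```

The returned distance is exactly the quantization error of the encoded vector, and the Θ(N·M) cost of `vq_encode` versus the Θ(1) lookup in `vq_decode` is the encoder/decoder asymmetry discussed in the following slides.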

87 Encoding performance Complexity - if the vectors contain N elements each, and the codebook contains M vectors: The complexity of calculating the Euclidean distance on 2 vectors is linear in N: Θ(N) Calculating the Euclidean distance between I_v and all the M codebook vectors is Θ(N·M)

88 Encoding performance But: quantizing at R bits means that we have M = 2^R Therefore, the encoder must perform the search of the closest vector in time Θ(N · 2^R) The encoder performance is exponential in the number of bits R!

89 Decoder performance Decoder complexity: the decoder receives the Codebook index, therefore it can immediately retrieve the quantized vector: Θ(1) Many encoders/decoders for video, images and audio have asymmetric performance!!!

90 Quantization Codebook The simplest Codebook can be generated dividing the N-space in M equally shaped regions

91 Quantization Codebook Adaptive techniques can be applied to find the best subdivision of the N-dimensional space, to minimize the quantization error: Adaptive clustering of image samples/colors Generalized Lloyd algorithm

92 Exercise Given the following codebook, with M = 4 (2 bits): INDEX X1 X2 X3 And given the following input vectors: <1, 4, 3> <2, 2, 2> <1, 2, 3> <5, 4, 2> Write: The sequence of quantization indexes The quantization error for each vector

93 Reconstruction: main techniques

94 Reconstruction Signal decoder: sometimes, it is necessary to convert a digital signal into an analog one: A PC sound card must do it in order to let sounds be heard through speakers Signal decoders apply different techniques to reconstruct the original signal, using only the sampled and quantized data. The main techniques include: Zero-order hold Linear interpolation Ideal interpolation

95 Zero-order hold The value of each sample y(n) is held constant for a duration T: x(t) = y(n) for the time interval [nT, (n+1)T)

96 Linear interpolation Intuitively, this converter connects the samples with straight lines in the time interval [nT, (n+1)T]: x(t) = the segment connecting y(n) to y(n+1)
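The zero-order hold and linear interpolation rules can be sketched side by side; the helper names and the sample values are mine, the reconstruction formulas are the slides'.

```python
# Zero-order hold vs. linear interpolation for reconstructing x(t)
# from samples y(n) taken every T seconds (helper names are illustrative).
def zero_order_hold(y, t, T=1.0):
    """x(t) = y(n) for t in [nT, (n+1)T)."""
    n = min(int(t // T), len(y) - 1)
    return y[n]

def linear_interp(y, t, T=1.0):
    """x(t) = segment from y(n) to y(n+1) on [nT, (n+1)T]."""
    n = min(int(t // T), len(y) - 2)
    frac = t / T - n
    return y[n] + frac * (y[n + 1] - y[n])

samples = [0.0, 1.0, 0.5, -0.5]
print(zero_order_hold(samples, 1.6))   # holds y(1) = 1.0
print(linear_interp(samples, 1.6))     # 1.0 + 0.6*(0.5 - 1.0) = 0.7
```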

97 Ideal interpolation This converter calculates a smooth curve that passes through the samples. The curve can be calculated with methods from numerical analysis, for example Fourier polynomials or piecewise polynomials (splines)


More information

CHAPTER 3. Transformed Vector Quantization with Orthogonal Polynomials Introduction Vector quantization

CHAPTER 3. Transformed Vector Quantization with Orthogonal Polynomials Introduction Vector quantization 3.1. Introduction CHAPTER 3 Transformed Vector Quantization with Orthogonal Polynomials In the previous chapter, a new integer image coding technique based on orthogonal polynomials for monochrome images

More information

1. Probability density function for speech samples. Gamma. Laplacian. 2. Coding paradigms. =(2X max /2 B ) for a B-bit quantizer Δ Δ Δ Δ Δ

1. Probability density function for speech samples. Gamma. Laplacian. 2. Coding paradigms. =(2X max /2 B ) for a B-bit quantizer Δ Δ Δ Δ Δ Digital Speech Processing Lecture 16 Speech Coding Methods Based on Speech Waveform Representations and Speech Models Adaptive and Differential Coding 1 Speech Waveform Coding-Summary of Part 1 1. Probability

More information

Signals, Instruments, and Systems W5. Introduction to Signal Processing Sampling, Reconstruction, and Filters

Signals, Instruments, and Systems W5. Introduction to Signal Processing Sampling, Reconstruction, and Filters Signals, Instruments, and Systems W5 Introduction to Signal Processing Sampling, Reconstruction, and Filters Acknowledgments Recapitulation of Key Concepts from the Last Lecture Dirac delta function (

More information

Lab 4: Quantization, Oversampling, and Noise Shaping

Lab 4: Quantization, Oversampling, and Noise Shaping Lab 4: Quantization, Oversampling, and Noise Shaping Due Friday 04/21/17 Overview: This assignment should be completed with your assigned lab partner(s). Each group must turn in a report composed using

More information

Module 3. Quantization and Coding. Version 2, ECE IIT, Kharagpur

Module 3. Quantization and Coding. Version 2, ECE IIT, Kharagpur Module Quantization and Coding ersion, ECE IIT, Kharagpur Lesson Logarithmic Pulse Code Modulation (Log PCM) and Companding ersion, ECE IIT, Kharagpur After reading this lesson, you will learn about: Reason

More information

E303: Communication Systems

E303: Communication Systems E303: Communication Systems Professor A. Manikas Chair of Communications and Array Processing Imperial College London Principles of PCM Prof. A. Manikas (Imperial College) E303: Principles of PCM v.17

More information

Signal types. Signal characteristics: RMS, power, db Probability Density Function (PDF). Analogue-to-Digital Conversion (ADC).

Signal types. Signal characteristics: RMS, power, db Probability Density Function (PDF). Analogue-to-Digital Conversion (ADC). Signal types. Signal characteristics:, power, db Probability Density Function (PDF). Analogue-to-Digital Conversion (ADC). Signal types Stationary (average properties don t vary with time) Deterministic

More information

Chapter 10 Applications in Communications

Chapter 10 Applications in Communications Chapter 10 Applications in Communications School of Information Science and Engineering, SDU. 1/ 47 Introduction Some methods for digitizing analog waveforms: Pulse-code modulation (PCM) Differential PCM

More information

Optical Storage Technology. Error Correction

Optical Storage Technology. Error Correction Optical Storage Technology Error Correction Introduction With analog audio, there is no opportunity for error correction. With digital audio, the nature of binary data lends itself to recovery in the event

More information

EE 230 Lecture 40. Data Converters. Amplitude Quantization. Quantization Noise

EE 230 Lecture 40. Data Converters. Amplitude Quantization. Quantization Noise EE 230 Lecture 40 Data Converters Amplitude Quantization Quantization Noise Review from Last Time: Time Quantization Typical ADC Environment Review from Last Time: Time Quantization Analog Signal Reconstruction

More information

Audio Coding. Fundamentals Quantization Waveform Coding Subband Coding P NCTU/CSIE DSPLAB C.M..LIU

Audio Coding. Fundamentals Quantization Waveform Coding Subband Coding P NCTU/CSIE DSPLAB C.M..LIU Audio Coding P.1 Fundamentals Quantization Waveform Coding Subband Coding 1. Fundamentals P.2 Introduction Data Redundancy Coding Redundancy Spatial/Temporal Redundancy Perceptual Redundancy Compression

More information

ELEN E4810: Digital Signal Processing Topic 11: Continuous Signals. 1. Sampling and Reconstruction 2. Quantization

ELEN E4810: Digital Signal Processing Topic 11: Continuous Signals. 1. Sampling and Reconstruction 2. Quantization ELEN E4810: Digital Signal Processing Topic 11: Continuous Signals 1. Sampling and Reconstruction 2. Quantization 1 1. Sampling & Reconstruction DSP must interact with an analog world: A to D D to A x(t)

More information

Quantisation. Uniform Quantisation. Tcom 370: Principles of Data Communications University of Pennsylvania. Handout 5 c Santosh S.

Quantisation. Uniform Quantisation. Tcom 370: Principles of Data Communications University of Pennsylvania. Handout 5 c Santosh S. Tcom 370: Principles of Data Communications Quantisation Handout 5 Quantisation involves a map of the real line into a discrete set of quantisation levels. Given a set of M quantisation levels {S 0, S

More information

Analysis of Finite Wordlength Effects

Analysis of Finite Wordlength Effects Analysis of Finite Wordlength Effects Ideally, the system parameters along with the signal variables have infinite precision taing any value between and In practice, they can tae only discrete values within

More information

Lecture 7 Predictive Coding & Quantization

Lecture 7 Predictive Coding & Quantization Shujun LI (李树钧): INF-10845-20091 Multimedia Coding Lecture 7 Predictive Coding & Quantization June 3, 2009 Outline Predictive Coding Motion Estimation and Compensation Context-Based Coding Quantization

More information

Quantization. Introduction. Roadmap. Optimal Quantizer Uniform Quantizer Non Uniform Quantizer Rate Distorsion Theory. Source coding.

Quantization. Introduction. Roadmap. Optimal Quantizer Uniform Quantizer Non Uniform Quantizer Rate Distorsion Theory. Source coding. Roadmap Quantization Optimal Quantizer Uniform Quantizer Non Uniform Quantizer Rate Distorsion Theory Source coding 2 Introduction 4 1 Lossy coding Original source is discrete Lossless coding: bit rate

More information

Introduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p.

Introduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p. Preface p. xvii Introduction p. 1 Compression Techniques p. 3 Lossless Compression p. 4 Lossy Compression p. 5 Measures of Performance p. 5 Modeling and Coding p. 6 Summary p. 10 Projects and Problems

More information

Chapter 2: Problem Solutions

Chapter 2: Problem Solutions Chapter 2: Problem Solutions Discrete Time Processing of Continuous Time Signals Sampling à Problem 2.1. Problem: Consider a sinusoidal signal and let us sample it at a frequency F s 2kHz. xt 3cos1000t

More information

ECE521 week 3: 23/26 January 2017

ECE521 week 3: 23/26 January 2017 ECE521 week 3: 23/26 January 2017 Outline Probabilistic interpretation of linear regression - Maximum likelihood estimation (MLE) - Maximum a posteriori (MAP) estimation Bias-variance trade-off Linear

More information

Proc. of NCC 2010, Chennai, India

Proc. of NCC 2010, Chennai, India Proc. of NCC 2010, Chennai, India Trajectory and surface modeling of LSF for low rate speech coding M. Deepak and Preeti Rao Department of Electrical Engineering Indian Institute of Technology, Bombay

More information

EE-597 Notes Quantization

EE-597 Notes Quantization EE-597 Notes Quantization Phil Schniter June, 4 Quantization Given a continuous-time and continuous-amplitude signal (t, processing and storage by modern digital hardware requires discretization in both

More information

CODING SAMPLE DIFFERENCES ATTEMPT 1: NAIVE DIFFERENTIAL CODING

CODING SAMPLE DIFFERENCES ATTEMPT 1: NAIVE DIFFERENTIAL CODING 5 0 DPCM (Differential Pulse Code Modulation) Making scalar quantization work for a correlated source -- a sequential approach. Consider quantizing a slowly varying source (AR, Gauss, ρ =.95, σ 2 = 3.2).

More information

Lecture 5 Channel Coding over Continuous Channels

Lecture 5 Channel Coding over Continuous Channels Lecture 5 Channel Coding over Continuous Channels I-Hsiang Wang Department of Electrical Engineering National Taiwan University ihwang@ntu.edu.tw November 14, 2014 1 / 34 I-Hsiang Wang NIT Lecture 5 From

More information

EE67I Multimedia Communication Systems

EE67I Multimedia Communication Systems EE67I Multimedia Communication Systems Lecture 5: LOSSY COMPRESSION In these schemes, we tradeoff error for bitrate leading to distortion. Lossy compression represents a close approximation of an original

More information

Digital Image Processing Lectures 25 & 26

Digital Image Processing Lectures 25 & 26 Lectures 25 & 26, Professor Department of Electrical and Computer Engineering Colorado State University Spring 2015 Area 4: Image Encoding and Compression Goal: To exploit the redundancies in the image

More information

Number Representation and Waveform Quantization

Number Representation and Waveform Quantization 1 Number Representation and Waveform Quantization 1 Introduction This lab presents two important concepts for working with digital signals. The first section discusses how numbers are stored in memory.

More information

Time-domain representations

Time-domain representations Time-domain representations Speech Processing Tom Bäckström Aalto University Fall 2016 Basics of Signal Processing in the Time-domain Time-domain signals Before we can describe speech signals or modelling

More information

VID3: Sampling and Quantization

VID3: Sampling and Quantization Video Transmission VID3: Sampling and Quantization By Prof. Gregory D. Durgin copyright 2009 all rights reserved Claude E. Shannon (1916-2001) Mathematician and Electrical Engineer Worked for Bell Labs

More information

Image Compression. Fundamentals: Coding redundancy. The gray level histogram of an image can reveal a great deal of information about the image

Image Compression. Fundamentals: Coding redundancy. The gray level histogram of an image can reveal a great deal of information about the image Fundamentals: Coding redundancy The gray level histogram of an image can reveal a great deal of information about the image That probability (frequency) of occurrence of gray level r k is p(r k ), p n

More information

L. Yaroslavsky. Fundamentals of Digital Image Processing. Course

L. Yaroslavsky. Fundamentals of Digital Image Processing. Course L. Yaroslavsky. Fundamentals of Digital Image Processing. Course 0555.330 Lec. 6. Principles of image coding The term image coding or image compression refers to processing image digital data aimed at

More information

On Optimal Coding of Hidden Markov Sources

On Optimal Coding of Hidden Markov Sources 2014 Data Compression Conference On Optimal Coding of Hidden Markov Sources Mehdi Salehifar, Emrah Akyol, Kumar Viswanatha, and Kenneth Rose Department of Electrical and Computer Engineering University

More information

encoding without prediction) (Server) Quantization: Initial Data 0, 1, 2, Quantized Data 0, 1, 2, 3, 4, 8, 16, 32, 64, 128, 256

encoding without prediction) (Server) Quantization: Initial Data 0, 1, 2, Quantized Data 0, 1, 2, 3, 4, 8, 16, 32, 64, 128, 256 General Models for Compression / Decompression -they apply to symbols data, text, and to image but not video 1. Simplest model (Lossless ( encoding without prediction) (server) Signal Encode Transmit (client)

More information

Roundoff Noise in Digital Feedback Control Systems

Roundoff Noise in Digital Feedback Control Systems Chapter 7 Roundoff Noise in Digital Feedback Control Systems Digital control systems are generally feedback systems. Within their feedback loops are parts that are analog and parts that are digital. At

More information

ECE 521. Lecture 11 (not on midterm material) 13 February K-means clustering, Dimensionality reduction

ECE 521. Lecture 11 (not on midterm material) 13 February K-means clustering, Dimensionality reduction ECE 521 Lecture 11 (not on midterm material) 13 February 2017 K-means clustering, Dimensionality reduction With thanks to Ruslan Salakhutdinov for an earlier version of the slides Overview K-means clustering

More information

Signal Modeling Techniques in Speech Recognition. Hassan A. Kingravi

Signal Modeling Techniques in Speech Recognition. Hassan A. Kingravi Signal Modeling Techniques in Speech Recognition Hassan A. Kingravi Outline Introduction Spectral Shaping Spectral Analysis Parameter Transforms Statistical Modeling Discussion Conclusions 1: Introduction

More information

Lecture 12. Block Diagram

Lecture 12. Block Diagram Lecture 12 Goals Be able to encode using a linear block code Be able to decode a linear block code received over a binary symmetric channel or an additive white Gaussian channel XII-1 Block Diagram Data

More information

Chapter 3. Quantization. 3.1 Scalar Quantizers

Chapter 3. Quantization. 3.1 Scalar Quantizers Chapter 3 Quantization As mentioned in the introduction, two operations are necessary to transform an analog waveform into a digital signal. The first action, sampling, consists of converting a continuous-time

More information

UNIT I INFORMATION THEORY. I k log 2

UNIT I INFORMATION THEORY. I k log 2 UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Lesson 7 Delta Modulation and DPCM Instructional Objectives At the end of this lesson, the students should be able to: 1. Describe a lossy predictive coding scheme.

More information

L11: Pattern recognition principles

L11: Pattern recognition principles L11: Pattern recognition principles Bayesian decision theory Statistical classifiers Dimensionality reduction Clustering This lecture is partly based on [Huang, Acero and Hon, 2001, ch. 4] Introduction

More information

Multimedia Networking ECE 599

Multimedia Networking ECE 599 Multimedia Networking ECE 599 Prof. Thinh Nguyen School of Electrical Engineering and Computer Science Based on lectures from B. Lee, B. Girod, and A. Mukherjee 1 Outline Digital Signal Representation

More information

HARMONIC VECTOR QUANTIZATION

HARMONIC VECTOR QUANTIZATION HARMONIC VECTOR QUANTIZATION Volodya Grancharov, Sigurdur Sverrisson, Erik Norvell, Tomas Toftgård, Jonas Svedberg, and Harald Pobloth SMN, Ericsson Research, Ericsson AB 64 8, Stockholm, Sweden ABSTRACT

More information

EE 5345 Biomedical Instrumentation Lecture 12: slides

EE 5345 Biomedical Instrumentation Lecture 12: slides EE 5345 Biomedical Instrumentation Lecture 1: slides 4-6 Carlos E. Davila, Electrical Engineering Dept. Southern Methodist University slides can be viewed at: http:// www.seas.smu.edu/~cd/ee5345.html EE

More information

Optimization of Quantizer s Segment Treshold Using Spline Approximations for Optimal Compressor Function

Optimization of Quantizer s Segment Treshold Using Spline Approximations for Optimal Compressor Function Applied Mathematics, 1, 3, 143-1434 doi:1436/am1331 Published Online October 1 (http://wwwscirporg/ournal/am) Optimization of Quantizer s Segment Treshold Using Spline Approimations for Optimal Compressor

More information

FACULTY OF ENGINEERING MULTIMEDIA UNIVERSITY LAB SHEET

FACULTY OF ENGINEERING MULTIMEDIA UNIVERSITY LAB SHEET FACULTY OF ENGINEERING MULTIMEDIA UNIVERSITY LAB SHEET ETM 3136 Digital Communications Trimester 1 (2010/2011) DTL1: Pulse Code Modulation (PCM) Important Notes: Students MUST read this lab sheet before

More information

Various signal sampling and reconstruction methods

Various signal sampling and reconstruction methods Various signal sampling and reconstruction methods Rolands Shavelis, Modris Greitans 14 Dzerbenes str., Riga LV-1006, Latvia Contents Classical uniform sampling and reconstruction Advanced sampling and

More information

What does such a voltage signal look like? Signals are commonly observed and graphed as functions of time:

What does such a voltage signal look like? Signals are commonly observed and graphed as functions of time: Objectives Upon completion of this module, you should be able to: understand uniform quantizers, including dynamic range and sources of error, represent numbers in two s complement binary form, assign

More information

Logarithmic quantisation of wavelet coefficients for improved texture classification performance

Logarithmic quantisation of wavelet coefficients for improved texture classification performance Logarithmic quantisation of wavelet coefficients for improved texture classification performance Author Busch, Andrew, W. Boles, Wageeh, Sridharan, Sridha Published 2004 Conference Title 2004 IEEE International

More information

CMPT 365 Multimedia Systems. Final Review - 1

CMPT 365 Multimedia Systems. Final Review - 1 CMPT 365 Multimedia Systems Final Review - 1 Spring 2017 CMPT365 Multimedia Systems 1 Outline Entropy Lossless Compression Shannon-Fano Coding Huffman Coding LZW Coding Arithmetic Coding Lossy Compression

More information

Analysis of methods for speech signals quantization

Analysis of methods for speech signals quantization INFOTEH-JAHORINA Vol. 14, March 2015. Analysis of methods for speech signals quantization Stefan Stojkov Mihajlo Pupin Institute, University of Belgrade Belgrade, Serbia e-mail: stefan.stojkov@pupin.rs

More information

1. Quantization Signal to Noise Ratio (SNR).

1. Quantization Signal to Noise Ratio (SNR). Digital Signal Processing 2/ Advanced Digital Signal Processing Lecture 2, Quantization, SNR Gerald Schuller, TU Ilmenau 1. Quantization Signal to Noise Ratio (SNR). Assume we have a A/D converter with

More information

Higher-Order Σ Modulators and the Σ Toolbox

Higher-Order Σ Modulators and the Σ Toolbox ECE37 Advanced Analog Circuits Higher-Order Σ Modulators and the Σ Toolbox Richard Schreier richard.schreier@analog.com NLCOTD: Dynamic Flip-Flop Standard CMOS version D CK Q Q Can the circuit be simplified?

More information

PCM Reference Chapter 12.1, Communication Systems, Carlson. PCM.1

PCM Reference Chapter 12.1, Communication Systems, Carlson. PCM.1 PCM Reference Chapter 1.1, Communication Systems, Carlson. PCM.1 Pulse-code modulation (PCM) Pulse modulations use discrete time samples of analog signals the transmission is composed of analog information

More information

Optimal Multiple Description and Multiresolution Scalar Quantizer Design

Optimal Multiple Description and Multiresolution Scalar Quantizer Design Optimal ultiple Description and ultiresolution Scalar Quantizer Design ichelle Effros California Institute of Technology Abstract I present new algorithms for fixed-rate multiple description and multiresolution

More information

Seminar: D. Jeon, Energy-efficient Digital Signal Processing Hardware Design Mon Sept 22, 9:30-11:30am in 3316 EECS

Seminar: D. Jeon, Energy-efficient Digital Signal Processing Hardware Design Mon Sept 22, 9:30-11:30am in 3316 EECS EECS 452 Lecture 6 Today: Announcements: Rounding and quantization Analog to digital conversion Lab 3 starts next week Hw3 due on tuesday Project teaming meeting: today 7-9PM, Dow 3150 My new office hours:

More information

Being edited by Prof. Sumana Gupta 1. only symmetric quantizers ie the input and output levels in the 3rd quadrant are negative

Being edited by Prof. Sumana Gupta 1. only symmetric quantizers ie the input and output levels in the 3rd quadrant are negative Being edited by Prof. Sumana Gupta 1 Quantization This involves representation the sampled data by a finite number of levels based on some criteria such as minimizing of the quantifier distortion. Quantizer

More information

Maximum Likelihood and Maximum A Posteriori Adaptation for Distributed Speaker Recognition Systems

Maximum Likelihood and Maximum A Posteriori Adaptation for Distributed Speaker Recognition Systems Maximum Likelihood and Maximum A Posteriori Adaptation for Distributed Speaker Recognition Systems Chin-Hung Sit 1, Man-Wai Mak 1, and Sun-Yuan Kung 2 1 Center for Multimedia Signal Processing Dept. of

More information

EE123 Digital Signal Processing

EE123 Digital Signal Processing EE123 Digital Signal Processing Lecture 19 Practical ADC/DAC Ideal Anti-Aliasing ADC A/D x c (t) Analog Anti-Aliasing Filter HLP(jΩ) sampler t = nt x[n] =x c (nt ) Quantizer 1 X c (j ) and s < 2 1 T X

More information

Analog Digital Sampling & Discrete Time Discrete Values & Noise Digital-to-Analog Conversion Analog-to-Digital Conversion

Analog Digital Sampling & Discrete Time Discrete Values & Noise Digital-to-Analog Conversion Analog-to-Digital Conversion Analog Digital Sampling & Discrete Time Discrete Values & Noise Digital-to-Analog Conversion Analog-to-Digital Conversion 6.082 Fall 2006 Analog Digital, Slide Plan: Mixed Signal Architecture volts bits

More information

On Compression Encrypted Data part 2. Prof. Ja-Ling Wu The Graduate Institute of Networking and Multimedia National Taiwan University

On Compression Encrypted Data part 2. Prof. Ja-Ling Wu The Graduate Institute of Networking and Multimedia National Taiwan University On Compression Encrypted Data part 2 Prof. Ja-Ling Wu The Graduate Institute of Networking and Multimedia National Taiwan University 1 Brief Summary of Information-theoretic Prescription At a functional

More information

EEO 401 Digital Signal Processing Prof. Mark Fowler

EEO 401 Digital Signal Processing Prof. Mark Fowler EEO 401 Digital Signal Processing Pro. Mark Fowler Note Set #14 Practical A-to-D Converters and D-to-A Converters Reading Assignment: Sect. 6.3 o Proakis & Manolakis 1/19 The irst step was to see that

More information

Classification: The rest of the story

Classification: The rest of the story U NIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN CS598 Machine Learning for Signal Processing Classification: The rest of the story 3 October 2017 Today s lecture Important things we haven t covered yet Fisher

More information

6.003: Signals and Systems. Sampling and Quantization

6.003: Signals and Systems. Sampling and Quantization 6.003: Signals and Systems Sampling and Quantization December 1, 2009 Last Time: Sampling and Reconstruction Uniform sampling (sampling interval T ): x[n] = x(nt ) t n Impulse reconstruction: x p (t) =

More information

This examination consists of 11 pages. Please check that you have a complete copy. Time: 2.5 hrs INSTRUCTIONS

This examination consists of 11 pages. Please check that you have a complete copy. Time: 2.5 hrs INSTRUCTIONS THE UNIVERSITY OF BRITISH COLUMBIA Department of Electrical and Computer Engineering EECE 564 Detection and Estimation of Signals in Noise Final Examination 6 December 2006 This examination consists of

More information

ESE 250: Digital Audio Basics. Week 4 February 5, The Frequency Domain. ESE Spring'13 DeHon, Kod, Kadric, Wilson-Shah

ESE 250: Digital Audio Basics. Week 4 February 5, The Frequency Domain. ESE Spring'13 DeHon, Kod, Kadric, Wilson-Shah ESE 250: Digital Audio Basics Week 4 February 5, 2013 The Frequency Domain 1 Course Map 2 Musical Representation With this compact notation Could communicate a sound to pianist Much more compact than 44KHz

More information

An Effective Method for Initialization of Lloyd Max s Algorithm of Optimal Scalar Quantization for Laplacian Source

An Effective Method for Initialization of Lloyd Max s Algorithm of Optimal Scalar Quantization for Laplacian Source INFORMATICA, 007, Vol. 18, No., 79 88 79 007 Institute of Mathematics and Informatics, Vilnius An Effective Method for Initialization of Lloyd Max s Algorithm of Optimal Scalar Quantization for Laplacian

More information

ON SCALABLE CODING OF HIDDEN MARKOV SOURCES. Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose

ON SCALABLE CODING OF HIDDEN MARKOV SOURCES. Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose ON SCALABLE CODING OF HIDDEN MARKOV SOURCES Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose Department of Electrical and Computer Engineering University of California, Santa Barbara, CA, 93106

More information

SPEECH ANALYSIS AND SYNTHESIS

SPEECH ANALYSIS AND SYNTHESIS 16 Chapter 2 SPEECH ANALYSIS AND SYNTHESIS 2.1 INTRODUCTION: Speech signal analysis is used to characterize the spectral information of an input speech signal. Speech signal analysis [52-53] techniques

More information

MARKOV CHAINS A finite state Markov chain is a sequence of discrete cv s from a finite alphabet where is a pmf on and for

MARKOV CHAINS A finite state Markov chain is a sequence of discrete cv s from a finite alphabet where is a pmf on and for MARKOV CHAINS A finite state Markov chain is a sequence S 0,S 1,... of discrete cv s from a finite alphabet S where q 0 (s) is a pmf on S 0 and for n 1, Q(s s ) = Pr(S n =s S n 1 =s ) = Pr(S n =s S n 1

More information
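The scalar quantizer defined above, y_q = Q(x) with indices q = 0, 1, ..., M-1, can be sketched in code. The following is a minimal illustration of a *uniform mid-tread* quantizer (one of the types listed in the outline), with an assumed step size `delta` and an assumed index-shifting convention so that indices run from 0 to M-1; function and variable names are illustrative, not from the lecture.

```python
def uniform_mid_tread_quantize(x, delta, M):
    """Map a scalar input x to a quantization index q in {0, ..., M-1}
    and the corresponding reconstruction level y_q = Q(x).

    Mid-tread: zero is itself a reconstruction level, so inputs near 0
    quantize exactly to 0."""
    # Round x to the nearest multiple of delta (the mid-tread rule),
    # then shift so the index range starts at 0 instead of -M/2.
    q = round(x / delta) + M // 2
    # Clip to the M valid quantization indices (inputs outside the
    # representable range saturate at the extreme levels).
    q = max(0, min(M - 1, q))
    # Undo the shift to recover the signed reconstruction level.
    y_q = (q - M // 2) * delta
    return q, y_q
```

For example, with `delta = 0.25` and `M = 8`, an input of 0.34 falls in the decision interval around 0.25, so it maps to that reconstruction level; an input far outside the range (e.g. 5.0) saturates at the top level, which is one source of quantization distortion (overload error) discussed later in the course.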