Quantization — notes edited by Prof. Sumana Gupta
Quantization

Quantization involves representing the sampled data by a finite number of levels, based on some criterion such as minimizing the quantizer distortion. Quantizer design includes the input (decision) levels and output (reconstruction) levels as well as the number of levels. The design can be enhanced by psychovisual or psychoacoustic perception criteria.

Quantizers can be classified as memoryless (each sample is quantized independently) or with memory (previous samples are taken into account). We limit our discussion to memoryless quantizers. Another classification is uniform vs. non-uniform.

Uniform quantizers: completely defined by the number of levels, the step size, and whether the quantizer is midriser or midtread. We will consider only symmetric quantizers, i.e., the input and output levels in the 3rd quadrant are the negatives of those in the 1st.

Non-uniform quantizers: the step sizes are not constant. Hence non-uniform quantization is specified by the input and output levels (in the 1st or 3rd quadrant).

Quantizer design: given the number of input/output levels, quantizer design involves minimizing a meaningful measure of quantizer distortion, such as:

1. Mean square quantization error (MSQE):
   E[(f - f̂)^2] = ∫_{a_L}^{a_U} (f - f̂)^2 p(f) df,
   where the input f ranges from a_L to a_U and p(f) is the probability density function of f.

2. Mean absolute quantization error (MAQE):
   E[|f - f̂|] = ∫_{a_L}^{a_U} |f - f̂| p(f) df

3. Mean L_N-norm quantization error:
   E[|f - f̂|^N] = ∫_{a_L}^{a_U} |f - f̂|^N p(f) df, N any positive integer

4. Weighted quantization error:
   ∫_{a_L}^{a_U} w(f) (f - f̂)^2 p(f) df
Uniform quantizers

Symmetric type. [Figures: staircase input-output characteristics with decision (input) levels d_1, d_2, d_3, ... and reconstruction (output) levels r_1, r_2, r_3, ... Midtread: dead zone, output is zero for input in [-d_1, d_1]; for f in [d_2, d_3] the quantization level is r_2. Midriser: no zero output level.] Quantization is lossy. The step size is constant.
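The midtread/midriser distinction above can be sketched in code. A minimal NumPy illustration (the function name and step-size values are our choices, not from the notes); it also checks empirically that for an input uniform over the quantizer range, the mean square error of a uniform quantizer of step q is close to q^2/12:

```python
import numpy as np

def uniform_quantize(f, step, mode="midtread"):
    """Uniform quantizer with constant step size.
    midtread: reconstruction levels at k*step (zero is an output level,
              giving a dead zone around 0);
    midrise:  reconstruction levels at (k + 1/2)*step (no zero output)."""
    f = np.asarray(f, dtype=float)
    if mode == "midtread":
        return step * np.round(f / step)
    if mode == "midrise":
        return step * (np.floor(f / step) + 0.5)
    raise ValueError("mode must be 'midtread' or 'midrise'")

x = np.array([-0.26, -0.05, 0.05, 0.24, 0.26])
print(uniform_quantize(x, 0.5, "midtread"))  # small inputs collapse to 0
print(uniform_quantize(x, 0.5, "midrise"))   # outputs at +/-0.25, +/-0.75, ...

# Error is roughly uniform on (-q/2, q/2), so its mean square is near q^2/12
rng = np.random.default_rng(0)
q = 0.5
u = rng.uniform(-2.0, 2.0, 500_000)
err = u - uniform_quantize(u, q, "midrise")
print(np.mean(err**2), q**2 / 12)
```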
Non-Uniform Quantization

[Figures: non-uniform midtread quantizer (has a zero output level) and non-uniform midriser quantizer (has no zero output level).]

Max-Lloyd Quantizer

Max and Lloyd designed quantizers minimizing the MSQE and developed tables for inputs governed by standard distribution functions such as the gamma, Laplacian, Gaussian, Rayleigh and uniform densities. Quantizers can also be designed tailored to histograms; for a well-designed predictor, the histogram of prediction errors tends to follow a Laplacian distribution.

Let the range of the input be [a_L, a_U], partitioned by decision levels

a_L = d_0 < d_1 < d_2 < ... < d_j < d_{j+1} < ... < d_J = a_U

with reconstruction levels r_0, r_1, ..., r_j, ..., r_{J-1}, where d_i = decision level (input level) and r_i = reconstruction level (output level).
If d_j <= f < d_{j+1}, then f̂ = Q(f) = r_j, and the MSQE is

ε = E[(f - f̂)^2] = ∫_{a_L}^{a_U} (f - f̂)^2 p(f) df

where p(f) is the probability density function of the random variable f. The error ε can also be written as

ε = Σ_{k=0}^{J-1} ∫_{d_k}^{d_{k+1}} (f - r_k)^2 p(f) df

Both the r_k and the d_k are variables; to minimize ε, set ∂ε/∂d_k = ∂ε/∂r_k = 0.

∂ε/∂d_k = (d_k - r_{k-1})^2 p(d_k) - (d_k - r_k)^2 p(d_k) = 0
⟹ (d_k - r_{k-1}) = ±(d_k - r_k)

Since (d_k - r_{k-1}) > 0 and (d_k - r_k) < 0, the valid solution is

(d_k - r_{k-1}) = -(d_k - r_k), i.e. d_k = (r_{k-1} + r_k)/2,

i.e., each input (decision) level is the average of the two adjacent output levels. Also,

∂ε/∂r_k = -2 ∫_{d_k}^{d_{k+1}} (f - r_k) p(f) df = 0

hence

r_k = ∫_{d_k}^{d_{k+1}} f p(f) df / ∫_{d_k}^{d_{k+1}} p(f) df,

i.e., each output level is the centroid of the density between the adjacent input levels. This solution is not in closed form: to find the d_k one has to know the r_k, and vice versa. However, by iterative techniques both the r_k and the d_k can be found.

When the number of output levels J is large, the design can be approximated as follows. Assume p(f) is constant over each quantization interval, p(f) ≈ p(d_j) for d_j <= f < d_{j+1}.
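The iterative technique mentioned above alternates the two necessary conditions: decision levels midway between adjacent reconstruction levels, and reconstruction levels at interval centroids. A minimal sketch (our simplification, not from the notes: grid-based numerical integration, a unit Gaussian input truncated to [-4, 4]):

```python
import numpy as np

def lloyd_max(pdf, a_l, a_u, J, iters=200, grid_pts=20001):
    """Iterate the two necessary conditions:
      d_k = (r_{k-1} + r_k)/2            (decision levels midway between outputs)
      r_k = centroid of p on [d_k, d_{k+1}]  (outputs at interval centroids)
    Integrals are approximated by sums on a dense grid."""
    f = np.linspace(a_l, a_u, grid_pts)
    p = pdf(f)
    r = np.linspace(a_l, a_u, 2 * J + 1)[1::2]   # crude initial output levels
    for _ in range(iters):
        d = np.concatenate(([a_l], (r[:-1] + r[1:]) / 2, [a_u]))
        for k in range(J):
            m = (f >= d[k]) & (f < d[k + 1])
            if p[m].sum() > 0:
                r[k] = (f[m] * p[m]).sum() / p[m].sum()
    return d, r

# Unit Gaussian truncated to [-4, 4], J = 4 levels; the converged outputs
# approach Max's tabulated values for the 4-level Gaussian quantizer
# (roughly +/-0.4528 and +/-1.510).
gauss = lambda f: np.exp(-f**2 / 2) / np.sqrt(2 * np.pi)
d, r = lloyd_max(gauss, -4.0, 4.0, 4)
print(d)
print(r)
```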
Then

ε = Σ_{j=0}^{J-1} p(d_j) ∫_{d_j}^{d_{j+1}} (f - r_j)^2 df = (1/3) Σ_j p(d_j) [(d_{j+1} - r_j)^3 - (d_j - r_j)^3]    (1)

∂ε/∂r_k = 0 = p(d_k) [(d_{k+1} - r_k)^2 - (d_k - r_k)^2]
⟹ (d_{k+1} - r_k)^2 = (d_k - r_k)^2
⟹ (d_{k+1} - r_k) = -(d_k - r_k), since d_{k+1} > r_k > d_k
⟹ r_k = (d_k + d_{k+1})/2    (2)

i.e., each reconstruction level is midway between the two adjacent decision levels. Substituting (2) in (1) we have

ε = (1/3) Σ_j p(d_j) [ ((d_{j+1} - d_j)/2)^3 - (-(d_{j+1} - d_j)/2)^3 ] = (1/12) Σ_j p(d_j) (d_{j+1} - d_j)^3

To minimize ε further w.r.t. the d_j, set Δd_j = d_{j+1} - d_j; then

ε = (1/12) Σ_j p(d_j) (Δd_j)^3 = Σ_j ε_j, where ε_j = (1/12) p(d_j) (Δd_j)^3.

Let μ_j = [p(d_j)]^{1/3} Δd_j, so that ε = (1/12) Σ_j μ_j^3, subject to the constraint

Σ_{j=0}^{J-1} μ_j = Σ_j [p(d_j)]^{1/3} Δd_j ≈ ∫_{a_L}^{a_U} p^{1/3}(f) df = k (a constant).    (A)

Using a Lagrange multiplier, set

∂/∂μ_l [ (1/12) Σ_{j=0}^{J-1} μ_j^3 + λ (Σ_{j=0}^{J-1} μ_j - k) ] = 0
⟹ (1/4) μ_l^2 + λ = 0, or λ = -μ_l^2 / 4.
Since λ does not depend on l, all the μ_l must be equal:

μ_0 = μ_1 = ... = μ_{J-1} = [p(d_j)]^{1/3} Δd_j, and Σ_l μ_l = k ⟹ μ_l = k/J,

so that

Δd_j = k / (J [p(d_j)]^{1/3})    (3)

and, substituting for k,

ε = (1/12) Σ_{j=0}^{J-1} μ_j^3 = (1/12) J (k/J)^3 = k^3 / (12 J^2) = (1/(12 J^2)) [ ∫_{a_L}^{a_U} p^{1/3}(f) df ]^3.

The decision levels are obtained by accumulating the steps: d_0 = a_L, d_1 = a_L + (d_1 - d_0), d_2 = d_1 + (d_2 - d_1), and in general

d_j = a_L + Σ_{m=0}^{j-1} Δd_m.

Since each step satisfies [p(d_m)]^{1/3} Δd_m ≈ ∫_{d_m}^{d_{m+1}} p^{1/3}(f) df = k/J, summing over m gives

∫_{a_L}^{d_j} p^{1/3}(f) df = j k / J.
Therefore the decision levels satisfy

∫_{a_L}^{d_j} p^{1/3}(f) df = (j/J) ∫_{a_L}^{a_U} p^{1/3}(f) df, for j = 0, 1, ..., J.

Quantizers can be designed from this equation for any pdf. When p(f) is uniform,

p(f) = 1/(a_U - a_L) = 1/A,

then

r_k = ∫_{d_k}^{d_{k+1}} f p(f) df / ∫_{d_k}^{d_{k+1}} p(f) df = (d_k + d_{k+1})/2

and since d_k = (r_k + r_{k-1})/2, it follows that

d_{k+1} - d_k = constant step size = (a_U - a_L)/J = q, i.e. d_{k+1} = d_k + q and r_k = d_k + q/2,

i.e., all decision and reconstruction levels are equally spaced. Since the quantization error is distributed uniformly over each step of size q with zero mean, the MSQE is

ε = (1/q) ∫_{-q/2}^{q/2} e^2 de = q^2 / 12.

Let the range of f be A and σ_f^2 its variance; for a uniform density,

σ_f^2 = (1/A) ∫_{-A/2}^{A/2} f^2 df = A^2 / 12.

A B-bit quantizer can represent J = 2^B output levels, so q = A·2^{-B} and

MSQE = (1/12) A^2 2^{-2B} = σ_f^2 2^{-2B}.

Properties of the optimum mean square quantizer:

1. The quantizer output is an unbiased estimate of the input, i.e. E[f̂] = E[f].
2. The quantization error is orthogonal to the quantizer output, i.e. E[(f - f̂) f̂] = 0: the quantizer noise is uncorrelated with the quantizer output.

3. The variance of the quantizer output is reduced by the factor 1 - f(B), where f(B) denotes the mean square distortion of the B-bit quantizer for unit-variance inputs:

σ_f̂^2 = (1 - f(B)) σ_f^2.

4. It is sufficient to design mean square quantizers for zero-mean, unit-variance distributions.

Example: suppose the output of an image sensor takes values from 0 to 10. If samples are quantized uniformly to 256 levels, the decision and reconstruction levels are

d_{k+1} - d_k = q = 10/256, d_k = (10/256)(k - 1) for k = 1, 2, ...,

and

r_k = d_k + q/2 = (10/256)(k - 1) + 5/256.

Proofs of the properties:

1. If p_k is the probability of r_k, i.e. p_k = ∫_{d_k}^{d_{k+1}} p_f(f) df, then

E[f̂] = Σ_{k=1}^{L} p_k E[f | f ∈ t_k] = Σ_{k=1}^{L} ∫_{d_k}^{d_{k+1}} f p_f(f) df = ∫_{d_1}^{d_{L+1}} f p_f(f) df = E[f].

2. E[(f - f̂) f̂] = 0 implies E[f f̂] = E[f̂^2]. An interesting model for the quantizer follows from this: the optimum MS quantizer acts as an additive noise source, f = f̂ + η, where the quantizer noise η is uncorrelated with f̂ and

σ_η^2 = E[(f - f̂)^2] = E[f^2] - E[f̂^2].

Since σ_η^2 >= 0, the average power of the quantizer output is reduced by the average power of the quantization noise. Also, the quantizer noise is dependent on the input, since
E[f^2] - E[f̂^2] = E[η^2].

3. Since for any mean square quantizer σ_η^2 = σ_f^2 f(B), we get σ_f̂^2 = (1 - f(B)) σ_f^2.

Although uniform quantization is straightforward and appears to be a natural approach, it may not be optimal. Suppose f (or u) is much more likely to be in one region than in others. It is then reasonable to assign more reconstruction levels to that region. If u falls only rarely between t_1 (or d_1) and t_2 (or d_2), the reconstruction level r_1 is rarely used.

[Figure: four reconstruction levels r_1, r_2, r_3, r_4 at 1/8, 3/8, 5/8, 7/8 on the output axis, with transition levels t_1, ..., t_4 at 1/4, 2/4, 3/4, 1 on the input axis u.]

Rearranging the reconstruction levels r_1, r_2, r_3, r_4 so that they all lie between t_1 and t_4 (d_1 and d_4) makes more sense. A quantizer in which the reconstruction and transition levels do not have even spacing is called a non-uniform quantizer.

The observation that the uniform quantizer is MMSE-optimal when p_u(u) (or p_f(f)) is uniform suggests another approach: transform u with a nonlinearity so that the result is uniformly distributed, apply a uniform quantizer, and then perform the inverse nonlinearity:

u → [nonlinearity C] → g → [uniform quantizer] → ĝ → [C^{-1}] → û

The nonlinearity is called companding. One choice of compressor is

g = C(u) = ∫_{-∞}^{u} p_u(x) dx - 1/2.
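This CDF-based compressor can be sketched numerically. A minimal example assuming (our choice, not from the notes) a unit-scale Laplacian source: the compressor g = F(u) - 1/2 makes g uniform on (-1/2, 1/2), a uniform quantizer is applied to g, and the inverse CDF expands back. Note the MSE-optimal compander is actually based on p^{1/3}, not the CDF; this just illustrates the structure:

```python
import numpy as np

b = 1.0  # scale of the Laplacian source: p(u) = exp(-|u|/b) / (2b)

def F(u):      # CDF of the Laplacian source (the compressor, up to -1/2 shift)
    u = np.asarray(u, dtype=float)
    return np.where(u < 0, 0.5 * np.exp(u / b), 1.0 - 0.5 * np.exp(-u / b))

def F_inv(y):  # inverse CDF (the expander)
    y = np.asarray(y, dtype=float)
    return np.where(y < 0.5, b * np.log(2 * y), -b * np.log(2 * (1 - y)))

rng = np.random.default_rng(1)
u = rng.laplace(0.0, b, 100_000)

g = F(u) - 0.5                    # compressor output, uniform on (-1/2, 1/2)
J = 64                            # levels of the uniform quantizer
q = 1.0 / J
g_hat = (np.floor((g + 0.5) / q) + 0.5) * q - 0.5   # uniform midrise quantizer
u_hat = F_inv(g_hat + 0.5)        # expand back to the input domain

print(np.var(g))                  # close to 1/12, as for a uniform density
print(np.mean((u - u_hat) ** 2))  # end-to-end companded-quantizer distortion
```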
The resulting p_g(g) is uniform on the interval (-1/2, 1/2), and the non-uniform quantization implemented by companding minimizes the distortion D = E[(ĝ - g)^2].

In many coding applications one has to quantize many scalars. One approach is to quantize each scalar independently; this is called scalar quantization of a vector source. Suppose we have N scalars f_i, and each scalar f_i is quantized to L_i reconstruction levels. If L_i can be expressed as a power of 2 and each reconstruction level is coded with an equal number of bits, L_i is related to the required number of bits B_i by

L_i = 2^{B_i}, or B_i = log_2 L_i.

The total number of bits B required to code the N scalars is B = Σ_{i=1}^{N} B_i, and the total number of reconstruction levels L is

L = Π_{i=1}^{N} L_i = 2^B.

If we have a fixed number of bits B to code all N scalars using scalar quantization of a vector source, B has to be divided among the N scalars. The optimal strategy depends on the error criterion used and on the probability density functions of the scalars; it involves assigning more bits to scalars with larger variance and fewer bits to scalars with smaller variance.

As an example, suppose we minimize the mean square error E[(f̂_i - f_i)^2] w.r.t. B_i for 1 <= i <= N, where f̂_i is the result of quantizing f_i. If the probability density functions are the same for all scalars except for their variances, and we use the same quantization method for each of the scalars, an approximate solution to the bit allocation problem is

B_i = B/N + (1/2) log_2 [ σ_i^2 / (Π_{j=1}^{N} σ_j^2)^{1/N} ]

L_i = 2^{B_i} = 2^{B/N} σ_i / (Π_{j=1}^{N} σ_j^2)^{1/(2N)}, 1 <= i <= N,

i.e., the number of reconstruction levels for f_i is proportional to its standard deviation. Although the above equation is an approximate solution obtained under certain specific conditions, it is useful as a reference in other bit allocation problems.
Since B_i in this equation can be negative and is not in general an integer, a constraint has to be imposed in solving the bit allocation problem such that each B_i is a non-negative integer.

To prove:

B_i = B/N + (1/2) log_2 [ σ_{f_i}^2 / (Π_{j=1}^{N} σ_{f_j}^2)^{1/N} ], 1 <= i <= N.
We find an expression for the distortion and then find the bit allocation that minimizes it; we perform the minimization using the method of Lagrange multipliers. Let the average number of bits/sample used by the vector source be R = B/N and the average number of bits/sample used by the k-th scalar be R_k. Then

R = (1/N) Σ_{k=1}^{N} R_k    (1)

where N is the number of scalars in the vector source. The MSQE, i.e. the reconstruction error variance of the k-th scalar, is given by

σ_{r_k}^2 = α_k 2^{-2R_k} σ_{f_k}^2    (2)

where α_k is a factor that depends on the input distribution and σ_{f_k}^2 is the variance of scalar f_k. The total distortion is

D = Σ_{k=1}^{N} α_k 2^{-2R_k} σ_{f_k}^2.    (3)

The objective of the bit allocation procedure is to find the R_k that minimize (3) subject to the constraint (1). Let us assume that α_k is a constant α for all k. We can set up the minimization in terms of Lagrange multipliers as

J = α Σ_{k=1}^{N} 2^{-2R_k} σ_{f_k}^2 + λ ( Σ_{k=1}^{N} R_k - N R ).    (4)

Taking the derivative of J w.r.t. R_k and setting it equal to zero, we obtain

R_k = (1/2) log_2 (2 α ln 2 · σ_{f_k}^2) - (1/2) log_2 λ.    (5)

Substituting (5) in (1), we get a value for λ:

λ = 2^{-2R} [ Π_{k=1}^{N} 2 α ln 2 · σ_{f_k}^2 ]^{1/N}.    (6)

Substituting (6) in (5), we get

R_k = R + (1/2) log_2 [ σ_{f_k}^2 / (Π_{j=1}^{N} σ_{f_j}^2)^{1/N} ].    (7)

Some of the R_k given by (7) can be negative; these are set to zero. This will increase the average bit rate above the constraint, so the non-zero R_k are uniformly reduced until the average rate is equal to R.
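The allocation rule just derived, including the clamping of negative rates and the uniform reduction of the remaining ones, can be sketched as follows (the loop structure is our implementation choice, not from the notes):

```python
import numpy as np

def allocate_bits(variances, R):
    """Bit allocation: R_k = R + 0.5*log2(sigma_k^2 / gm), where gm is the
    geometric mean of the variances. Negative R_k are set to zero and the
    non-zero rates are uniformly reduced until the average rate equals R."""
    var = np.asarray(variances, dtype=float)
    N = len(var)
    gm = np.exp(np.mean(np.log(var)))      # geometric mean of the sigma_k^2
    Rk = R + 0.5 * np.log2(var / gm)       # closed-form allocation
    while True:
        neg = Rk < 0
        if not neg.any():
            break
        Rk[neg] = 0.0                      # clamp negative rates
        excess = Rk.sum() - N * R          # bits now above the budget
        pos = Rk > 0
        Rk[pos] -= excess / pos.sum()      # uniform reduction of non-zero rates
    return Rk

Rk = allocate_bits([16.0, 4.0, 1.0, 0.01], R=2.0)
print(Rk, Rk.mean())   # average rate equals R; lowest-variance scalar gets 0
```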
Rate distortion for a Gaussian source

Assumptions: the distortion measure is d(x, y) = (x - y)^2. Given a distortion D, find the lower bound of I(X;Y) subject to E{(X - Y)^2} <= D.
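For reference, carrying out this minimization for a Gaussian source with variance σ^2 gives the well-known rate-distortion function (a standard result, not derived in these notes):

```latex
R(D) \;=\; \min_{E[(X-Y)^2] \le D} I(X;Y) \;=\;
\begin{cases}
\dfrac{1}{2}\log_2\dfrac{\sigma^2}{D}, & 0 < D \le \sigma^2,\\[6pt]
0, & D > \sigma^2.
\end{cases}
```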
Lloyd-Max Quantization of Correlated Processes: How to Obtain Gains by Receiver-Sided Time-Variant Codebooks Sai Han and Tim Fingscheidt Institute for Communications Technology, Technische Universität
More informationTransform coding - topics. Principle of block-wise transform coding
Transform coding - topics Principle of block-wise transform coding Properties of orthonormal transforms Discrete cosine transform (DCT) Bit allocation for transform Threshold coding Typical coding artifacts
More informationVariational Methods in Signal and Image Processing
Variational Methods in Signal and Image Processing XU WANG Texas A&M University Dept. of Electrical & Computer Eng. College Station, Texas United States xu.wang@tamu.edu ERCHIN SERPEDIN Texas A&M University
More informationBasic Statistical Tools
Structural Health Monitoring Using Statistical Pattern Recognition Basic Statistical Tools Presented by Charles R. Farrar, Ph.D., P.E. Los Alamos Dynamics Structural Dynamics and Mechanical Vibration Consultants
More informationEE67I Multimedia Communication Systems
EE67I Multimedia Communication Systems Lecture 5: LOSSY COMPRESSION In these schemes, we tradeoff error for bitrate leading to distortion. Lossy compression represents a close approximation of an original
More informationCHAPTER 3. P (B j A i ) P (B j ) =log 2. j=1
CHAPTER 3 Problem 3. : Also : Hence : I(B j ; A i ) = log P (B j A i ) P (B j ) 4 P (B j )= P (B j,a i )= i= 3 P (A i )= P (B j,a i )= j= =log P (B j,a i ) P (B j )P (A i ).3, j=.7, j=.4, j=3.3, i=.7,
More information' 3480 University, Montreal, Quebec H3A 2A7. QUANTIZERS FOR SYMMETRIC GAMMA DISTRIBUTIONS. 3. Uniqueness. 1. Introduction. INRS- TtlCeommunicationr
Proc. IEEE Globecom Conf. (San Diego, CA), pp. 214-218, Nov. 1983 QUANTIZERS FOR SYMMETRIC GAMMA DISTRIBUTIONS Peter Kabd Department of Electrical Engineerinat McGall University INRS- TtlCeommunicationr
More informationMultimedia Networking ECE 599
Multimedia Networking ECE 599 Prof. Thinh Nguyen School of Electrical Engineering and Computer Science Based on lectures from B. Lee, B. Girod, and A. Mukherjee 1 Outline Digital Signal Representation
More informationTTIC 31230, Fundamentals of Deep Learning David McAllester, Winter Generalization and Regularization
TTIC 31230, Fundamentals of Deep Learning David McAllester, Winter 2019 Generalization and Regularization 1 Chomsky vs. Kolmogorov and Hinton Noam Chomsky: Natural language grammar cannot be learned by
More informationReceived Signal, Interference and Noise
Optimum Combining Maximum ratio combining (MRC) maximizes the output signal-to-noise ratio (SNR) and is the optimal combining method in a maximum likelihood sense for channels where the additive impairment
More informationData Mining. Dimensionality reduction. Hamid Beigy. Sharif University of Technology. Fall 1395
Data Mining Dimensionality reduction Hamid Beigy Sharif University of Technology Fall 1395 Hamid Beigy (Sharif University of Technology) Data Mining Fall 1395 1 / 42 Outline 1 Introduction 2 Feature selection
More informationModule 3. Quantization and Coding. Version 2, ECE IIT, Kharagpur
Module Quantization and Coding ersion, ECE IIT, Kharagpur Lesson Logarithmic Pulse Code Modulation (Log PCM) and Companding ersion, ECE IIT, Kharagpur After reading this lesson, you will learn about: Reason
More informationEncoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels
Encoder Decoder Design for Event-Triggered Feedback Control over Bandlimited Channels LEI BAO, MIKAEL SKOGLUND AND KARL HENRIK JOHANSSON IR-EE- 26: Stockholm 26 Signal Processing School of Electrical Engineering
More informationEECS490: Digital Image Processing. Lecture #26
Lecture #26 Moments; invariant moments Eigenvector, principal component analysis Boundary coding Image primitives Image representation: trees, graphs Object recognition and classes Minimum distance classifiers
More informationOn Optimal Coding of Hidden Markov Sources
2014 Data Compression Conference On Optimal Coding of Hidden Markov Sources Mehdi Salehifar, Emrah Akyol, Kumar Viswanatha, and Kenneth Rose Department of Electrical and Computer Engineering University
More informationOutline Lecture 2 2(32)
Outline Lecture (3), Lecture Linear Regression and Classification it is our firm belief that an understanding of linear models is essential for understanding nonlinear ones Thomas Schön Division of Automatic
More informationHomework Set 3 Solutions REVISED EECS 455 Oct. 25, Revisions to solutions to problems 2, 6 and marked with ***
Homework Set 3 Solutions REVISED EECS 455 Oct. 25, 2006 Revisions to solutions to problems 2, 6 and marked with ***. Let U be a continuous random variable with pdf p U (u). Consider an N-point quantizer
More informationMachine Learning. Lecture 4: Regularization and Bayesian Statistics. Feng Li. https://funglee.github.io
Machine Learning Lecture 4: Regularization and Bayesian Statistics Feng Li fli@sdu.edu.cn https://funglee.github.io School of Computer Science and Technology Shandong University Fall 207 Overfitting Problem
More informationSolution. (i) Find a minimal sufficient statistic for (θ, β) and give your justification. X i=1. By the factorization theorem, ( n
Solution 1. Let (X 1,..., X n ) be a simple random sample from a distribution with probability density function given by f(x;, β) = 1 ( ) 1 β x β, 0 x, > 0, β < 1. β (i) Find a minimal sufficient statistic
More information