Entropy Encoding Using Karhunen-Loève Transform
1 Entropy Encoding Using Karhunen-Loève Transform Myung-Sin Song Southern Illinois University Edwardsville Sept 17, 2007
2 Joint work with Palle Jorgensen.
3 Introduction In most images, neighboring pixels are correlated and thus contain redundant information. Task: find a less correlated representation of the image, then perform redundancy reduction and irrelevancy reduction. Redundancy reduction removes duplication from the signal source (image). Irrelevancy reduction omits parts of the signal that will not be noticed by the Human Visual System (HVS).
4 Definition Spatial redundancy: correlation between neighboring pixel values. Spectral redundancy: correlation between different color planes or spectral bands. Aim: reduce the number of bits needed to represent an image by removing as much redundancy as possible.
5 Outline Figure: Outline of the wavelet image compression process.
6 Image Decomposition using Forward Wavelet Transform A 1-level wavelet transform of an $N \times M$ image can be represented as
$$\begin{pmatrix} a^1 & h^1 \\ v^1 & d^1 \end{pmatrix}$$
where the subimages $a^1$, $h^1$, $v^1$ and $d^1$ each have dimension $N/2$ by $M/2$.
7 Image Decomposition using Forward Wavelet Transform-cont'd
$a^1 = V^1_m \otimes V^1_n$: $\varphi^A(x, y) = \varphi(x)\varphi(y) = \sum_i \sum_j h_i h_j \,\varphi(2x - i)\varphi(2y - j)$
$h^1 = V^1_m \otimes W^1_n$: $\psi^H(x, y) = \psi(x)\varphi(y) = \sum_i \sum_j g_i h_j \,\varphi(2x - i)\varphi(2y - j)$
$v^1 = W^1_m \otimes V^1_n$: $\psi^V(x, y) = \varphi(x)\psi(y) = \sum_i \sum_j h_i g_j \,\varphi(2x - i)\varphi(2y - j)$
$d^1 = W^1_m \otimes W^1_n$: $\psi^D(x, y) = \psi(x)\psi(y) = \sum_i \sum_j g_i g_j \,\varphi(2x - i)\varphi(2y - j)$
$\varphi$: the father function in the sense of wavelets. $\psi$: the mother function in the sense of wavelets. V space: the averaging space from the multiresolution analysis (MRA). W space: the difference space from the MRA. h: low-pass filter coefficients. g: high-pass filter coefficients.
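To make the decomposition concrete, here is a minimal NumPy sketch of a 1-level 2-D Haar analysis step. This is an assumed illustration, not the talk's code: the function name, the even-dimension assumption, and the mapping of the four output blocks onto the labels $a^1$, $h^1$, $v^1$, $d^1$ follow one common convention.

```python
import numpy as np

def haar_decompose_2d(image):
    """One level of the 2-D Haar wavelet transform.

    Returns four subimages of size (N/2, M/2) each, corresponding to the
    approximation and the three detail blocks. Assumes even dimensions.
    """
    x = np.asarray(image, dtype=float)

    # Filter along rows: low-pass (averages) and high-pass (differences).
    lo_r = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2.0)
    hi_r = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2.0)

    # Filter along columns of each intermediate result.
    a1 = (lo_r[0::2, :] + lo_r[1::2, :]) / np.sqrt(2.0)  # low-pass in both directions
    h1 = (hi_r[0::2, :] + hi_r[1::2, :]) / np.sqrt(2.0)  # high-pass along rows, low-pass along columns
    v1 = (lo_r[0::2, :] - lo_r[1::2, :]) / np.sqrt(2.0)  # low-pass along rows, high-pass along columns
    d1 = (hi_r[0::2, :] - hi_r[1::2, :]) / np.sqrt(2.0)  # high-pass in both directions
    return a1, h1, v1, d1

if __name__ == "__main__":
    img = np.random.rand(8, 8)
    a1, h1, v1, d1 = haar_decompose_2d(img)
    print(a1.shape, h1.shape, v1.shape, d1.shape)  # (4, 4) each
```

Applying the same step again to $a^1$ yields the 2-level decomposition shown in the next figures.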
8 Test Image Figure: Prof. Jorgensen in his office.
9 First-level Decomposition Figure: 1-level Haar Wavelet Decomposition of Prof. Jorgensen
10 Second-level Decomposition Figure: 2-level Haar Wavelet Decomposition of Prof. Jorgensen
11 Quantization Quantization reduces the number of bits needed to store the wavelet-transformed coefficients by reducing the precision of the values. This is a many-to-one mapping, so it is a lossy process resulting in lossy compression.
12 Quantization-cont'd Definition Let X be a set and K a discrete set. Let Q and D be mappings $Q : X \to K$ and $D : K \to X$ such that
$$\|x - D(Q(x))\| \le \|x - D(d)\| \quad \text{for all } d \in K.$$
Applying Q to some $x \in X$ is called quantization, and Q(x) is the quantized value of x. Likewise, applying D to some $k \in K$ is called dequantization, and D(k) is the dequantized value of k.
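As an illustration of one such pair (Q, D), the following is a minimal sketch of a uniform scalar quantizer and dequantizer. The step size `delta`, the use of rounding, and the variable names are my own assumptions, not a prescription from the talk.

```python
import numpy as np

def quantize(x, delta):
    """Q: map each coefficient to the index of its nearest reconstruction level."""
    return np.round(np.asarray(x, dtype=float) / delta).astype(int)

def dequantize(k, delta):
    """D: map an index back to its reconstruction value."""
    return np.asarray(k, dtype=float) * delta

coeffs = np.array([0.12, -3.7, 1.05, 0.49])
k = quantize(coeffs, delta=0.5)
print(k)                    # [ 0 -7  2  1]
print(dequantize(k, 0.5))   # [ 0.  -3.5  1.   0.5]
```

Each reconstructed value is the multiple of `delta` nearest to the input, so the defining inequality above is satisfied by construction.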
13 Thresholding Another method of quantization is thresholding. Thresholding is a method of data reduction that sets pixel values at or below the threshold to 0 (or to some other appropriate value). Soft thresholding is defined as follows:
$$T_{\mathrm{soft}}(x) = \begin{cases} 0 & \text{if } |x| \le \lambda \\ x - \lambda & \text{if } x > \lambda \\ x + \lambda & \text{if } x < -\lambda \end{cases} \qquad (1)$$
14 Thresholding Hard thresholding is as follows:
$$T_{\mathrm{hard}}(x) = \begin{cases} 0 & \text{if } |x| \le \lambda \\ x & \text{if } |x| > \lambda \end{cases} \qquad (2)$$
where $\lambda \in \mathbb{R}^+$ and x is a pixel value.
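A minimal NumPy sketch of both rules (1) and (2); the vectorized form and the variable names are mine.

```python
import numpy as np

def soft_threshold(x, lam):
    """T_soft: shrink values toward zero, zeroing anything with |x| <= lambda."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    """T_hard: keep values with |x| > lambda unchanged, zero out the rest."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) > lam, x, 0.0)

c = np.array([-2.0, -0.3, 0.1, 0.8, 1.5])
print(soft_threshold(c, 0.5))  # approx [-1.5, 0, 0, 0.3, 1.0]
print(hard_threshold(c, 0.5))  # [-2.0, 0, 0, 0.8, 1.5]
```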
15 Entropy Encoding Entropy encoding further compresses the quantized values in a lossless manner, which gives better overall compression. It uses a model to accurately determine the probabilities of the quantized values and produces an appropriate code based on these probabilities, so that the resultant output code stream is smaller than the input stream.
16 Various Entropy Encoding Schemes Karhunen-Loève Transform, Shannon-Fano Entropy, Huffman Coding, Arithmetic Encoding, Kolmogorov Entropy.
17 Shannon-Fano Entropy For each datum in an image, i.e. pixel, a set of probabilities $p_i$ is computed, where $\sum_{i=1}^{n} p_i = 1$. The entropy of this set measures how much choice is involved, on average, in the selection of a pixel value.
18 Shannon-Fano Entropy-cont'd Definition Shannon's entropy $E(p_1, p_2, \ldots, p_n)$ satisfies the following: E is a continuous function of the $p_i$. When all $p_i$ are equal, E is a steadily increasing function of n. If the choice is made in k successive stages, then E is the sum of the entropies of the choices at each stage, weighted by the probabilities of the stages. These conditions lead to
$$E = -k \sum_{i=1}^{n} p_i \log p_i,$$
where k controls the units of the entropy; with logs taken base 2, the unit is bits.
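As a quick illustrative computation (my own snippet, not from the slides), the entropy of the letter distribution used in the example two slides below is about 2.246 bits:

```python
import numpy as np

def shannon_entropy(probs, k=1.0):
    """E = -k * sum(p_i * log2(p_i)); zero-probability terms contribute nothing."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return -k * np.sum(p * np.log2(p))

# Distribution from the letter example below: a, e, f, q, r.
print(shannon_entropy([0.3, 0.2, 0.2, 0.2, 0.1]))  # ~2.246 bits
```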
19 Shannon-Fano Entropy-cont'd Shannon-Fano entropy encoding is done according to the probabilities of the data, and the method is as follows: The data are listed with their probabilities, in decreasing order of probability. The list is divided into two parts of roughly equal total probability. The codes for the data in the first part start with a 0 bit, and those in the second part with a 1. Continue recursively until each subdivision contains just one datum. (A sketch of this recursion follows the example below.)
20 Example An example with letters in a text illustrates how the mechanism works. Suppose we have a text with letters a, e, f, q, r with the following probability distribution:
Letter  Probability
a       0.3
e       0.2
f       0.2
q       0.2
r       0.1
21 Example-cont'd Then applying the Shannon-Fano entropy encoding scheme to the above table gives an assignment of the following form (one split consistent with the procedure is shown):
Letter  Probability  Code
a       0.3          00
e       0.2          01
f       0.2          10
q       0.2          110
r       0.1          111
Note that instead of using 8 bits to represent a letter, only 2 or 3 bits are used to represent each letter in this case.
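Here is a minimal recursive sketch of the splitting procedure described above (an assumed illustration; tie-breaking at the split point can produce slightly different but equally valid codes):

```python
def shannon_fano(symbols):
    """symbols: list of (symbol, probability), sorted by decreasing probability.
    Returns a dict mapping each symbol to its bit string."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}

    # Find the split that makes the two parts' total probabilities as equal as possible.
    total = sum(p for _, p in symbols)
    running, split, best_diff = 0.0, 1, float("inf")
    for i in range(1, len(symbols)):
        running += symbols[i - 1][1]
        diff = abs(2 * running - total)
        if diff < best_diff:
            best_diff, split = diff, i

    # Prefix the first part with '0' and the second part with '1', then recurse.
    codes = {}
    for sym, code in shannon_fano(symbols[:split]).items():
        codes[sym] = "0" + code
    for sym, code in shannon_fano(symbols[split:]).items():
        codes[sym] = "1" + code
    return codes

print(shannon_fano([("a", 0.3), ("e", 0.2), ("f", 0.2), ("q", 0.2), ("r", 0.1)]))
# e.g. {'a': '00', 'e': '01', 'f': '10', 'q': '110', 'r': '111'}
```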
22 Description of the Algorithm for Karhunen-Loève transform entropy encoding
1. Perform the wavelet transform for the whole image (i.e., wavelet decomposition).
2. Apply quantization to all coefficients in the image matrix, except the average detail.
3. Subtract the mean from each of the data dimensions. This produces a data set whose mean is zero.
4. Compute the covariance matrix:
$$\mathrm{cov}(X, Y) = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{n - 1}.$$
23 5. Compute the eigenvectors and eigenvalues of the covariance matrix.
6. Choose components and form a feature vector (a matrix of vectors), $(\mathrm{eig}_1, \ldots, \mathrm{eig}_n)$. Eigenvectors are listed in decreasing order of the magnitude of their eigenvalues. The eigenvalues found in step 5 generally differ in value; the eigenvector with the highest eigenvalue is the principal component of the data set.
7. Derive the new data set: Final Data = Row Feature Matrix × Row Data Adjust.
24 Row Feature Matrix: the matrix that has the eigenvectors in its rows, with the most significant eigenvector (i.e., the one with the greatest eigenvalue) in the top row.
Row Data Adjust: the matrix with the mean-adjusted data transposed; that is, the data items sit in the columns and each row holds a separate dimension.
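The following NumPy sketch condenses steps 3-7 for a generic data matrix. It is my own illustration under the stated layout (one row per dimension, one column per data item); it omits the wavelet and quantization stages and the choice of how many components to keep.

```python
import numpy as np

def klt_encode(data):
    """data: array of shape (n_dims, n_samples), one row per dimension.
    Returns the transformed data and the eigenvector matrix used."""
    # Step 3: subtract the mean of each dimension.
    row_data_adjust = data - data.mean(axis=1, keepdims=True)

    # Step 4: covariance matrix (rows are variables, columns are observations).
    cov = np.cov(row_data_adjust)

    # Step 5: eigenvalues/eigenvectors of the symmetric covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(cov)

    # Step 6: order eigenvectors by decreasing eigenvalue; put them in the rows.
    order = np.argsort(eigvals)[::-1]
    row_feature_matrix = eigvecs[:, order].T

    # Step 7: Final Data = Row Feature Matrix x Row Data Adjust.
    final_data = row_feature_matrix @ row_data_adjust
    return final_data, row_feature_matrix

data = np.random.rand(4, 100)   # 4 dimensions, 100 samples
final_data, basis = klt_encode(data)
print(final_data.shape)         # (4, 100)
```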
25 Karhunen-Loève Expansion Consider an ensemble of a large number N of similar objects, $N w_\alpha$ of which are of type $\alpha$, $\alpha = 1, 2, \ldots, \nu$, where the relative frequencies $w_\alpha$ satisfy the probability axioms:
$$w_\alpha \ge 0, \qquad \sum_{\alpha=1}^{\nu} w_\alpha = 1.$$
Assume that each type specified by a value of the index $\alpha$ is represented by $f_\alpha(\xi)$ on a real domain $[a, b]$, which we normalize by
$$\int_a^b |f_\alpha(\xi)|^2 \, d\xi = 1.$$
26 Karhunen-Loève Expansion-cont'd Let $\{\psi_i(\xi)\}$, $i = 1, 2, \ldots$, be a complete set of orthonormal basis functions defined on $[a, b]$. Then any function $f_\alpha(\xi)$ can be expanded as
$$f_\alpha(\xi) = \sum_{i=1}^{\infty} x_i^{(\alpha)} \psi_i(\xi) \qquad (3)$$
with
$$x_i^{\alpha} = \int_a^b \psi_i^*(\xi) f_\alpha(\xi) \, d\xi. \qquad (4)$$
Here, $x_i^{\alpha}$ is the component of $f_\alpha$ in the $\psi_i$ coordinate system. With the normalization of $f_\alpha$ we have
$$\sum_{i=1}^{\infty} |x_i^{\alpha}|^2 = 1. \qquad (5)$$
27 Karhunen-Loève Expansion-cont'd Then substituting (4) in (3) gives
$$f_\alpha(\xi) = \int_a^b f_\alpha(\xi') \Big[ \sum_{i=1}^{\infty} \psi_i(\xi) \psi_i^*(\xi') \Big] d\xi' = \sum_{i=1}^{\infty} \psi_i(\xi) \langle f_\alpha, \psi_i \rangle, \qquad (6)$$
as in the definition of an ONB.
28 Karhunen-Loève Expansion-cont'd Let $H = L^2(a, b)$, and consider the coefficient map $H \to \ell^2(\mathbb{Z})$ determined by $\{\psi_i\}$, together with a unitary operator $U : \ell^2(\mathbb{Z}) \to \ell^2(\mathbb{Z})$. Note that distance is invariant under a unitary transformation. Thus, using another coordinate system $\{\varphi_j\}$ in place of $\{\psi_i\}$ would not change distances.
29 Karhunen-Loève Expansion-cont'd Let $\{\varphi_j\}$, $j = 1, 2, \ldots$, be another set of ONB functions used instead of $\{\psi_i(\xi)\}$, $i = 1, 2, \ldots$. Let $y_j^{\alpha}$ be the component of $f_\alpha$ in $\{\varphi_j\}$; it can be expressed in terms of the $x_i^{\alpha}$ by the linear relation
$$y_j^{\alpha} = \sum_{i=1}^{\infty} \langle \varphi_j, \psi_i \rangle \, x_i^{\alpha} = \sum_{i=1}^{\infty} U_{i,j} \, x_i^{\alpha},$$
where U is a unitary operator matrix,
$$U : \ell^2(\mathbb{Z}) \to \ell^2(\mathbb{Z}), \qquad U_{i,j} = \langle \varphi_j, \psi_i \rangle = \int_a^b \varphi_j(\xi) \psi_i^*(\xi) \, d\xi.$$
30 Karhunen-Loève Expansion-cont'd Also, $x_i^{\alpha}$ can be written in terms of the $y_j^{\alpha}$ by the relation
$$x_i^{\alpha} = \sum_{j=1}^{\infty} \langle \psi_i, \varphi_j \rangle \, y_j^{\alpha} = \sum_{j=1}^{\infty} U^{-1}_{i,j} \, y_j^{\alpha},$$
where $U^{-1}_{i,j} = (U^*)_{i,j} = \overline{U_{j,i}}$. Consequently
$$f_\alpha(\xi) = \sum_{i=1}^{\infty} x_i^{\alpha} \psi_i(\xi) = \sum_{j=1}^{\infty} y_j^{\alpha} \varphi_j(\xi),$$
so $U(x_i) = (y_i)$ and
$$x_i^{\alpha} = \langle \psi_i, f_\alpha \rangle = \int_a^b \psi_i^*(\xi) f^{(\alpha)}(\xi) \, d\xi.$$
31 Karhunen-Loève Expansion-cont'd The squared magnitude $|x_i^{(\alpha)}|^2$ of the coefficient for $\psi_i$ in the expansion of $f^{(\alpha)}$ can be considered a good measure of its importance; its average over the ensemble,
$$Q_i = \sum_{\alpha=1}^{\nu} w^{(\alpha)} |x_i^{(\alpha)}|^2,$$
can be considered as the measure of importance of $\psi_i$, with
$$Q_i \ge 0, \qquad \sum_i Q_i = 1.$$
32 Karhunen-Loève Expansion-cont'd The entropy function in terms of the $Q_i$ is defined as
$$S(\{\psi_i\}) = -\sum_i Q_i \log Q_i.$$
We are interested in minimizing the entropy; that is, if $\{\Theta_j\}$ is one such optimal coordinate system, we shall have
$$S(\{\Theta_j\}) = \min_{\{\psi_i\}} S(\{\psi_i\}).$$
33 Karhunen-Loève Expansion-cont'd Let
$$G(\xi, \xi') = \sum_\alpha w_\alpha f_\alpha(\xi) f_\alpha^*(\xi').$$
Then G is Hermitian and $Q_i = G(i, i) = \sum_\alpha w_\alpha x_i^{\alpha} \overline{x_i^{\alpha}}$, where the normalization $\sum_i Q_i = 1$ gives us trace G = 1 (the trace being the diagonal sum). Then define a special function system $\{\Theta_k(\xi)\}$ as the set of eigenfunctions of G, i.e.
$$\int_a^b G(\xi, \xi') \Theta_k(\xi') \, d\xi' = \lambda_k \Theta_k(\xi), \qquad (7)$$
so $G\Theta_k(\xi) = \lambda_k \Theta_k(\xi)$.
34 Karhunen-Loève Expansion-cont'd When the data are not functions but vectors $v_\alpha$ whose components are $x_i^{(\alpha)}$ in the $\psi_i$ coordinate system, we have
$$\sum_{i'} G(i, i') \, t_{i'}^{k} = \lambda_k t_i^{k}, \qquad (8)$$
where $t_i^{k}$ is the i-th component of the vector $\Theta_k$ in the coordinate system $\{\psi_i\}$. So we get $\psi : H \to (x_i)$ and also $\Theta : H \to (t_i)$. The two ONBs give
$$x_i^{\alpha} = \sum_k c_k^{\alpha} t_i^{k} \quad \text{for all } i, \qquad c_k^{\alpha} = \sum_i t_i^{k} x_i^{\alpha},$$
which is the Karhunen-Loève expansion of $f_\alpha(\xi)$ or of the vector $v_\alpha$.
35 Karhunen-Loève Expansion-cont'd Then $\{\Theta_k(\xi)\}$ is the K-L coordinate system, which depends on $\{w_\alpha\}$ and $\{f_\alpha(\xi)\}$. We arrange the corresponding functions or vectors in decreasing order of the eigenvalues:
$$\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_{k-1} \ge \lambda_k \ge \ldots$$
36 Karhunen-Loève Expansion-cont'd
$$Q_i = G_{i,i} = \langle \psi_i, G \psi_i \rangle = \sum_k A_{ik} \lambda_k, \qquad A_{ik} = |t_i^{k}|^2,$$
where $A = (A_{ik})$ is a doubly stochastic matrix. Then
$$G = U \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_k \end{pmatrix} U^{-1}.$$
37 Karhunen-Loève Expansion-cont'd Theorem: If two probability distributions $\lambda_k$, $k = 1, 2, \ldots$, and $Q_i$, $i = 1, 2, \ldots$, are related by $Q_i = \sum_k A_{ik} \lambda_k$ with A doubly stochastic, then
$$S(G) = -\sum_{k=1}^{\infty} \lambda_k \log \lambda_k \le -\sum_{i=1}^{\infty} Q_i \log Q_i.$$
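A small numerical check of this minimization property (my own sketch, with a randomly generated trace-one Hermitian G and a random comparison basis; not part of the talk):

```python
import numpy as np

def entropy(p):
    """-sum p log2 p, ignoring (numerically) zero entries."""
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
n = 6

# Random positive semidefinite symmetric G with trace 1.
A = rng.normal(size=(n, n))
G = A @ A.T
G /= np.trace(G)

# Eigenvalues lambda_k: the Q_i in the Karhunen-Loeve coordinate system.
lam = np.linalg.eigvalsh(G)

# Q_i in an arbitrary orthonormal basis: diagonal of V^T G V for a random orthogonal V.
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
Q = np.diag(V.T @ G @ V)

print(entropy(lam), "<=", entropy(Q))  # the eigenbasis gives the smallest entropy
```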
38 References
M.-S. Song, Wavelet Image Compression, in Operator Theory, Operator Algebras, and Applications, Contemp. Math., vol. 414, American Mathematical Society, Providence, RI.
P. E. T. Jorgensen and M.-S. Song, Entropy Encoding using Hilbert Space and Karhunen-Loève Transforms, Journal of Mathematical Physics, to appear.
P. E. T. Jorgensen and M.-S. Song, Comparison of Discrete and Continuous Wavelet Transforms, Springer Encyclopedia of Complexity and Systems Science, to appear.
M.-S. Song, Entropy Encoding in Wavelet Image Compression, preprint, 2007.
P. E. T. Jorgensen, Analysis and Probability: Wavelets, Signals, Fractals, Graduate Texts in Mathematics, vol. 234, Springer, New York, 2006.
39 References-cont'd
L. I. Smith, A Tutorial on Principal Components Analysis. tutorials/principal compo
A. Skodras, C. Christopoulos, and T. Ebrahimi, The JPEG 2000 still image compression standard, IEEE Signal Processing Magazine, 18:36-58, Sept. 2001.
B. E. Usevitch, A tutorial on modern lossy wavelet image compression: Foundations of JPEG 2000, IEEE Signal Processing Magazine, 18:22-35, Sept. 2001.
S. Watanabe, Karhunen-Loève Expansion and Factor Analysis: Theoretical Remarks and Applications, Transactions of the Fourth Prague Conference on Information Theory, Statistical Decision Functions, Random Processes (Academia Press, 1965).
R. B. Ash, Information Theory, corrected reprint of the 1965 original (Dover Publications, Inc., New York, 1990).