
Proceedings of the IIEEJ Image Electronics and Visual Computing Workshop 2012, Kuching, Malaysia, November 2012

AN ADAPTIVE BINARY ARITHMETIC CODER USING A STATE TRANSITION TABLE

Ikuro Ueno and Fumitaka Ono
Graduate School, Tokyo Polytechnic University

ABSTRACT

We propose a fast and simple binary arithmetic coder, the STT-coder, in which the arithmetic operation of dividing the probability interval is carried out with a state transition table. To keep the state transition table simple by omitting the need to retain both the bottom address of the valid interval and the renormalized interval, renormalization is mostly executed together with the flush operation of the register. To prevent the loss of coding efficiency caused by flushing, the MPS/LPS interval to be renormalized is further divided into subintervals that lose no code space in flushing, by assigning to them combinations of the current and succeeding symbols. We then propose an adaptive control method of probability estimation in the STT-coder for unknown or non-stationary information sources; the probability estimation is likewise executed with a state transition table to simplify the operation. In this paper, we describe the concept of the STT-coder for the case of a very short register, evaluate its coding efficiency against the well-known Q-coder, and introduce our study of extending the register size to improve the performance of the STT-coder for high-MPS-probability sources.

1. INTRODUCTION

Arithmetic coding adapts flexibly to various information sources and provides high coding efficiency. Because of this advantage, it is used extensively in image and video coding standards such as JPEG2000 and MPEG-4 AVC. On the other hand, the disadvantage of arithmetic coding is the complexity of its operations, and studies on arithmetic coding have hitherto been tuned mainly toward simplifying the probability interval calculation, as in the Q-Coder [1] and the MQ-Coder [2,3]. In this paper, we propose an adaptive binary arithmetic coder, the STT-coder, which can be realized by simple and fast operations using a state transition table. For that purpose, renormalization is mostly executed together with the flush operation, and to retain high coding efficiency a source-extension idea is newly introduced. We present an implementation of the STT-coder with a 3-bit interval register, propose an adaptive probability estimation scheme for unknown information sources, and then study a longer-register design, extending the register to 6 bits and showing its coding performance for stationary information sources.

2. ARITHMETIC CODER USING A STATE TRANSITION TABLE

2.1. STT-coder block diagram

The STT-coder consists of two main blocks, probability estimation and probability interval division, as shown in Figure 1. In this section we explain these two blocks using a system example with a 3-bit interval register. Note that this arithmetic coder treats binary symbols as combinations of the MPS (More Probable Symbol) and the LPS (Less Probable Symbol).

[Figure 1: STT-coder block diagram]
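Before going into the details, the following toy sketch (our own illustration in Python, not taken from the paper) shows the flavor of table-driven interval division: each coding step is a single lookup that returns the bits to emit and the next interval index, with renormalization and flushing folded into the table entries. The table contents below are invented for illustration and are not the paper's Table 1.

```python
# Toy sketch of table-driven binary arithmetic coding.
# Each entry maps (interval_index, lps_width, symbol) to the bits to
# emit and the next interval index after any renormalization/flush.

# Hypothetical toy table, for illustration only.
DIVISION_TABLE = {
    # (interval, lps_width, symbol): (emitted_bits, next_interval)
    (8, 1, "MPS"): ("",   7),   # MPS: interval shrinks 8 -> 7, no output yet
    (7, 1, "MPS"): ("",   6),
    (6, 1, "MPS"): ("",   5),
    (5, 1, "MPS"): ("0",  8),   # flush + renormalize back to the full interval
    (8, 1, "LPS"): ("1",  8),   # LPS: emit a codeword, renormalize
    (7, 1, "LPS"): ("10", 8),
    (6, 1, "LPS"): ("10", 8),
    (5, 1, "LPS"): ("11", 8),
}

def encode(symbols, interval=8, lps_width=1):
    """Encode a symbol sequence by pure table lookup: no multiplication
    or division is performed, only transitions of the interval index."""
    out = []
    for s in symbols:
        bits, interval = DIVISION_TABLE[(interval, lps_width, s)]
        out.append(bits)
    return "".join(out)

print(encode(["MPS", "MPS", "LPS", "MPS"]))  # -> "10" with this toy table
```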

2.2. Probability interval division

In the probability interval division of the STT-coder, an interval division table, which outputs a codeword or a succeeding probability interval for each binary input symbol according to the size of the current probability interval, is used to execute the division, as shown in Figure 1, instead of calculating the interval width as in usual arithmetic coders. To realize simple and fast operation, renormalization of the probability interval is executed together with the flush operation in many cases, omitting the operations that store the bottom address and the probability interval after renormalization. If a flush operation would cause a loss of coding efficiency, the MPS/LPS interval is further subdivided into two areas combining succeeding symbols, so as to keep the coding efficiency higher. Since this procedure can be considered a source extension, we hereafter call this flush procedure the expanded symbol flush. An example of the interval division table of the STT-coder with a 3-bit interval register is given in Table 1, and all of its probability interval division patterns are shown in Figure 2.

2.3. Probability estimation

2.3.1. Estimation by probability state transition

To execute adaptive control of probability estimation, selecting appropriate coding parameters in the STT-coder for unknown information sources, we adopt a probability estimation system that also uses a state transition table. State-transition-table probability estimation allows fast and simple operation, and is used in many coders such as the Q-coder. In this scheme, each context moves from one state to another according to the symbols that occur, and the symbol probability assigned to the current state is used as the estimated symbol probability. In this paper we first employ 10 states, S0 to S9, as shown in Table 2. For the STT-coder with a 3-bit interval register, the states are defined by the combination of the probability interval index (1 to 8) and the LPS width (1 to 4). At state S0, an LPS width of one is always selected for all probability intervals, while at state S9 the LPS width is set to the maximum width possible for each probability interval. We hereafter call these states probability states. Table 2 also lists the border MPS probabilities P_Bi between neighboring probability states S_i and S_{i+1}, and the occurrence probabilities of the probability intervals, calculated on the assumption that MPS probabilities are distributed uniformly between 0.5 and 1.0 under the multi-context source model.

[Table 1: Interval division table. For each probability interval index and input symbol (MPS/LPS) with its LPS width, the table gives the output bits (codeword and code length) and the succeeding probability interval index, covering the intervals R8, R7, R6, R5 and the subdivided intervals R6a, R5a, R4a; the individual entries are garbled in this transcription and omitted.]

To encode a binary symbol, the probability state is obtained by referring to the probability state table, which stores the current probability state of each context. The LPS width is then determined from the probability state and the probability interval index by referring to the LPS width table, and finally the probability interval is divided into the corresponding MPS/LPS widths.
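A minimal sketch of those two lookups, with invented table contents (the real values belong to Table 2 and the LPS width table of Figure 1), might look as follows:

```python
# Sketch of the per-symbol lookups described above, with hypothetical
# table contents; the paper's Table 2 holds the real values.

# Per-context probability state table: context id -> probability state index.
state_of_context = {"ctx0": 0, "ctx1": 4}

# LPS width table: LPS_WIDTH_TABLE[state][interval] -> LPS width.
# Toy values only; widths grow from state S0 (always 1) toward S9 (maximum).
LPS_WIDTH_TABLE = {
    0: {5: 1, 6: 1, 7: 1, 8: 1},   # S0: LPS width 1 everywhere
    4: {5: 2, 6: 2, 7: 3, 8: 3},   # intermediate state (invented)
    9: {5: 2, 6: 3, 7: 3, 8: 4},   # S9: maximum widths (invented)
}

def divide_interval(context, interval):
    """Return (mps_width, lps_width) for the current symbol of `context`."""
    state = state_of_context[context]
    lps = LPS_WIDTH_TABLE[state][interval]
    return interval - lps, lps

print(divide_interval("ctx1", 8))  # -> (5, 3) with the toy table
```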

[Figure 2: Probability interval division patterns for R8 (prob. interval = 8), R7, R6 and R5 with LPS widths 1 to 4, and for the subdivided intervals R6a (prob. interval = 6), R5a (prob. interval = 5) and R4a (prob. interval = 4). M denotes an MPS, L an LPS; numerals after M/L denote the codeword or the next probability interval index after renormalization.]

2.3.2. Dynamic adaptive control

To estimate the occurrence probability of each binary symbol for each context, the contents of the probability state table are updated by the following dynamic adaptive control algorithm, illustrated in Figure 3.

(1) Define probability states from S0 to S9.
(2) If an LPS occurs, update the state of the current context from S_i to S_{i+1}. If the current state is S_{J-1}, corresponding to the minimum MPS probability, do not change the probability state, but exchange the sense of the MPS with that of the LPS.
(3) If an MPS that forces a state transition (which we call a T-MPS) occurs, update the state of the current context from S_i to S_{i-1}. If a non-transit MPS (NT-MPS) occurs, do not change the probability state. In S0, which corresponds to the maximum MPS probability, any MPS is an NT-MPS. How T-MPSs are defined is described later.

If an MPS probability P_i gives the best coding efficiency for the coding parameters used at a given probability state S_i, the probability of a transition caused by an LPS and that of a transition caused by a T-MPS shall be equal at the MPS probability P_i. This balance gives the most desirable transition behavior, since it guarantees staying at the best probability state longer than at any other state. Although we can change the probability state whenever an LPS occurs, for an MPS we need to judge whether the probability state shall change, since an MPS has a larger probability than an LPS. To balance the transitions, the following equation has to be satisfied:

    1 - P_i = r_i * P_i

[Figure 3: Transition of probability states]

In this equation, r_i is the ratio of T-MPSs to all MPSs (the T-MPS ratio, T-MPSR, 0 < r_i <= 1). The T-MPSR r_i is therefore given by

    r_i = (1 - P_i) / P_i

The T-MPSR values r_i calculated by this equation are shown in Table 2 for each probability state. Note that we use the midpoint between the upper and lower border MPS probabilities as the optimum MPS probability P_i of each probability state.
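The transition rule and the balancing condition above can be summarized in a few lines. The function below is our own sketch of the update walk, assuming J = 10 states as in Table 2; the value P_i = 0.7 in the example is arbitrary, not an entry of the paper's table.

```python
# Sketch of the dynamic adaptive control rule: a balanced LPS / T-MPS
# state walk over probability states 0 .. J-1.

def update_state(state, symbol, is_transit_mps, J=10):
    """Return (next_state, swap_mps_lps) after coding one symbol."""
    if symbol == "LPS":
        if state == J - 1:
            return state, True      # stay, but exchange MPS/LPS senses
        return state + 1, False     # move toward lower MPS probability
    if is_transit_mps and state > 0:
        return state - 1, False     # T-MPS: move toward higher MPS probability
    return state, False             # NT-MPS (always the case in S0)

# Balancing condition: the LPS probability 1 - P_i must equal r_i * P_i,
# so the T-MPS ratio tuned for the state's optimum MPS probability P_i is:
def t_mpsr(p_i):
    return (1.0 - p_i) / p_i

print(round(t_mpsr(0.7), 3))  # e.g. P_i = 0.7 gives r_i ~= 0.429
```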

For the selection of T-MPSs, we control the value of the T-MPSR according to each probability state. As the distribution of the probability intervals encountered is common to all probability states, we can control the value of the T-MPSR by the combination of probability intervals selected for transition: for each probability state, we select a combination of probability intervals whose total occurrence probability equals its T-MPSR, and a probability state transition is executed when an MPS occurs at one of the selected intervals. For instance, for probability state S1 (T-MPSR r_1 = 0.4), we classify MPSs occurring at probability interval 5 and one further selected interval as T-MPSs, and MPSs occurring at the other probability intervals as NT-MPSs, since the total occurrence probability of the selected intervals amounts to 0.45, which is the nearest value to the theoretical T-MPSR shown in Table 2.
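That selection amounts to a small subset search. A brute-force sketch, with invented occurrence probabilities (the actual values of Table 2 are not reproduced here), could look like this:

```python
# Sketch: pick the subset of probability intervals whose summed
# occurrence probability is closest to the theoretical T-MPSR r_i.
# Exhaustive search is fine for the handful of intervals involved.
from itertools import combinations

def pick_t_mps_intervals(occurrence, target_r):
    """occurrence: dict interval -> occurrence probability."""
    best, best_err = (), float("inf")
    items = list(occurrence.items())
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            s = sum(p for _, p in combo)
            if abs(s - target_r) < best_err:
                best, best_err = tuple(iv for iv, _ in combo), abs(s - target_r)
    return best

# Hypothetical occurrence probabilities of the probability intervals:
occ = {8: 0.30, 7: 0.20, 6: 0.18, 5: 0.17, 4: 0.15}
print(pick_t_mps_intervals(occ, 0.4))  # subset summing nearest to r = 0.4
```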
[Table 2: For each probability state S0 to S9, the table gives the LPS width for each probability interval (the LPS width table of Figure 1), the occurrence probability of each probability interval, the border MPS probability P_Bi, the theoretical T-MPSR r_i, and the probability intervals selected for T-MPS together with their summed occurrence probability; the numeric entries are garbled in this transcription and omitted.]

3. EVALUATION OF CODING EFFICIENCY

Assuming a target source model composed of multi-context sources whose MPS probabilities are uniformly distributed between 0.5 and 1.0, we evaluate the coding efficiency of the STT-coder. Figure 4 shows the coding efficiency calculated by computer simulation. To evaluate the effect of the expanded symbol flush in the STT-coder, we show as a reference the efficiency of a pure arithmetic coder with a 3-bit register (denoted (1) in Figure 4), in which renormalization is performed theoretically whenever the probability interval becomes less than 5. The proposed STT-coder with ideal probability estimation (denoted (2) in Figure 4) shows slightly lower coding efficiency than the pure arithmetic coder at high MPS probabilities, because of the restriction on interval subdivision, even though the expanded symbol flush is introduced. For lower MPS probabilities, however, the STT-coder obtains coding efficiency quite close to that of the pure arithmetic coder: the redundancy induced by flushing the register is efficiently removed at low MPS probabilities by the expanded symbol flush. We also show the coding efficiency of the STT-coder with adaptive probability estimation using the 10 probability states, denoted (3) in Figure 4. Its degradation from the STT-coder with ideal probability estimation (2) is found to be 1 or 2%, but it obtains the same or better coding efficiency than the Q-coder except for high-MPS-probability sources.

The efficiency of STT-coder (3) tends to decrease as the MPS probability approaches 0.5. To improve it, we tried adding one probability state after S9; this modification is expected to improve the coding efficiency, since it works to keep contexts longer at the probability states representing the lowest MPS probabilities. We also added one more state before S0 to improve the efficiency at high MPS probabilities. With these added states, we designed an STT-coder with 12 states and calculated its coding efficiency, shown as (4) in Figure 4. This modification improves the coding efficiency at high and low MPS probabilities, while the efficiency around the middle MPS probabilities degrades only very slightly. The transition system should thus be designed according to the statistics of the targeted information sources.

[Figure 4: Coding efficiency versus MPS probability. (1) pure coder (no flush); (2) STT-coder, ideal probability estimation (static); (3) STT-coder, probability estimation (10 states); (4) STT-coder, probability estimation (12 states); (5) Q-coder.]

4. EXTENSION OF THE INTERVAL REGISTER

As shown in Figure 4, the coding efficiency of the STT-coder with a 3-bit register declines as the MPS probability becomes higher, owing to the short register. In this section we explain a method of designing the interval division table of a coder with a more precise register, in order to improve the coding efficiency for high MPS probabilities. To design the interval division table, we must consider three parameters that vary over the probability interval divisions of arithmetic coding: the probability interval A, the LPS width A_L, and the offset D, where the offset is the distance between the bottom of the whole probability interval and the currently transmitted address of the valid interval, as depicted in Figure 5.

[Figure 5: Probability interval A, LPS width A_L and offset D within the whole probability interval.]

As the interval register becomes longer, the number of possible combinations of A, A_L and D grows very large, so we restricted the combinations in the following manner; we also assumed that the LPS interval can be assigned to either the upper or the lower side of the interval. In this section we deal with a 6-bit interval register. To improve the coding efficiency at high MPS probabilities, the LPS width A_L = 1 has to be available for every probability interval, and A may then decrease one by one as MPSs occur consecutively in high-MPS-probability sources; therefore every possible value of A, from 33 to 64, needs to be retained. We restricted the offset D to 8 values, 0, 8, 16, 20, 24, 26, 28 and 30, although it may take any value between 0 and 31, since these 8 offset values occur more frequently than the other possible values, especially if small LPS widths are allowed for each probability interval. As for the LPS width A_L, we restricted the values by employing probability states in a manner similar to Section 2.3. Here too we assign an MPS probability range to each probability state, and let P_i denote the MPS probability giving the best efficiency for probability state S_i. The range of sources to be covered by a probability state is determined so that the theoretical coding efficiency of the coding method tuned for the MPS probability P_i is larger than a predetermined minimum coding efficiency.
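As one way to picture how an entry of such an LPS width table might be chosen, the sketch below scores each candidate A_L for a given interval A by the ideal per-symbol code length of the implied split and keeps the best. This is our own first-order approximation, ignoring renormalization and the offset restriction; it is not the paper's exact design procedure.

```python
# Sketch: choose the LPS width A_L for interval A and a state's optimum
# MPS probability P_i by minimizing the cross-entropy of the source
# against the implied split A_L / A (renormalization ignored).
from math import log2

def code_length(p, a, a_l):
    """Expected bits/symbol when the MPS gets (a - a_l)/a of the interval."""
    return -(p * log2((a - a_l) / a) + (1 - p) * log2(a_l / a))

def best_lps_width(p, a):
    return min(range(1, a), key=lambda a_l: code_length(p, a, a_l))

# Example: optimum widths across intervals for a state tuned to P_i = 0.9.
for a in (64, 48, 40, 33):
    print(a, best_lps_width(0.9, a))
```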

Figure 6 shows the coding efficiency of the coding method tuned for each probability state, together with the predetermined minimum coding efficiency.

[Figure 6: Coding efficiency versus MPS probability of the coding method tuned for each probability state (one curve per state, S7 down to the lowest).]

After the set of probability states, that is, the set of P_i, is determined, we chose the LPS width A_L for every probability interval A and every probability state. Table 3 shows the LPS width of each probability state for each A, in the case of offset D = 0.

[Table 3: LPS width of each probability state for each probability interval A (6-bit register, offset D = 0), together with the best MPS probability P_i of each state. An underlined LPS width denotes that the LPS interval is assigned to the lower side of the probability interval. The numeric entries are garbled in this transcription and omitted.]

Note that the ideal LPS width for a given probability state should not increase as the probability interval decreases. However, inversions of this rule can be seen at a few entries of Table 3; this phenomenon is caused by the restriction of the offset values. If more offset values are allowed, such inversions may disappear and some improvement of the coding efficiency can be expected, but at the same time a larger ROM is needed for the interval division table.

To evaluate the performance of the coder with a 6-bit interval register, we calculated its static coding efficiency (namely, under ideal probability estimation). Compared with the coder with a 3-bit register (denoted (2) in Figure 7, the same as (2) in Figure 4), the coding efficiency of the 6-bit coder (denoted (6) in Figure 7) improves for high MPS probabilities, while it maintains efficiency comparable to the 3-bit coder for low and medium MPS probabilities. The design of a dynamic STT-coder with a 6-bit register will be carried out on a computer-simulation basis and published later.

[Figure 7: Coding efficiency versus MPS probability. (2) STT-coder (3-bit, static); (6) STT-coder (6-bit, static).]

5. CONCLUSION

Aiming at fast and simple arithmetic coding, we have proposed an arithmetic coder using a state transition table, the STT-coder, in which renormalization and flushing of the coder are executed at the same time in most cases. To prevent the loss of coding efficiency in the flush procedure, we introduced the idea of source extension to avoid wasting available code space, and we presented a practical design with a very short register to show that the STT-coder can be realized using a simple state transition table. We also proposed a dynamic adaptive control of probability estimation, likewise based on a state transition table, built on balancing the transitions caused by an LPS against those caused by selected MPSs. The STT-coder with this adaptive probability estimation was found to provide performance comparable to or better than that of the Q-coder except for high-MPS-probability sources. To improve the performance for such sources, we also studied the design of an STT-coder with a 6-bit interval register and confirmed the improvement of the coding efficiency for stationary information sources.

6. REFERENCES

[1] W. B. Pennebaker, J. L. Mitchell, G. G. Langdon, Jr., and R. B. Arps, "An overview of the basic principles of the Q-Coder adaptive binary arithmetic coder," IBM J. Res. Develop., vol. 32, no. 6, Nov. 1988.
[2] ISO/IEC 14492, Lossy/lossless coding of bi-level images, 2001.
[3] ISO/IEC, JPEG 2000 image coding system, 2000.
