Fault Tolerance Technique in Huffman Coding applies to Baseline JPEG


Cung Nguyen and Robert G. Redinbo
Department of Electrical and Computer Engineering, University of California, Davis, CA
email: cunguyen, redinbo@ece.ucdavis.edu

Abstract: Faults due to incorrect functioning of the computation system, or transmission errors in the internal data, can corrupt the output code stream of the Huffman encoder. In this paper, a fault detection method is proposed for the Huffman encoding system as implemented in the JPEG image compression standard [1]. The detection method, based on the information input and the code stream output, is described. Practical test results show an improvement in reconstructed image quality.

Index Terms: Huffman coding, zigzag sequence, runlength coding.

I. INTRODUCTION

Huffman coding [2] is a very effective technique for compressing data; savings of 20% to 90% are typical, depending on the characteristics of the data being compressed. Huffman coding assigns shorter codewords to the more probable symbols and longer codewords to the less probable symbols. This coding technique is adopted by the JPEG image compression standard, where two statistical models are used in the Huffman DCT-based sequential mode. First, the DCT basis functions are almost completely decorrelated [3] (pp. 150-57), so the coefficients can be compressed independently without concern about correlation between them. Second, the 63 AC coefficients are arranged in zigzag order of decreasing probability of occurrence. Based on these statistical models, Huffman code tables for the DC and AC coefficients are generated and stored in memory before the bit stream is produced. Codes are retrieved from these tables while compression is in progress. It has long been recognized that errors can seriously degrade compressed data, whether lossless or lossy methods are used [4], [5].
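The principle named above, shorter codewords for more probable symbols, can be illustrated with a minimal sketch (an illustration only, not the JPEG table-generation procedure; the symbol frequencies are made up):

```python
import heapq

def huffman_code(freqs):
    """Build a prefix code assigning shorter codewords to more probable symbols.

    freqs: dict mapping symbol -> probability (or count).
    Returns: dict mapping symbol -> bit string.
    """
    # Each heap entry: (total frequency, tie-breaker, partial code dict).
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        # Merge the two least probable subtrees, prefixing their codes.
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, nxt, merged))
        nxt += 1
    return heap[0][2]
```

For example, with probabilities 0.5, 0.25, 0.15, 0.1 the most probable symbol gets a 1-bit code and the least probable symbols get 3-bit codes.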
Combined source-channel coding techniques have been developed that integrate the compression steps with transmission error-correcting coding [6]. Such techniques are useful for modern video compression methods [7]. However, there are other sources of errors, resulting from failures in the computing resources that compress and transmit the data. Huffman coding is performed using computing resources that are potentially affected by failures: unexpected changes of data bits due to physical faults in the system, and transient faults due to noise, such as absorption of alpha particles in space [8], electromagnetic interference, X-rays from nearby equipment, and overheating. These sources of errors can corrupt the output code stream. For these reasons, fault tolerance becomes the main issue to be solved. (This research was supported in part by the NSF through grant CCR-0104851.)

Fault tolerance for the Huffman coder is based on extra hardware components and extra time combined with the original tasks. In this work, redundancy check bits are generated and compared with parity bits generated from the output code stream. When an error is detected, the system requests a repeat encoding of the defective data block.

The rest of this paper is divided into three sections. The first section overviews the Huffman encoding algorithm and the data structures that support the coding processes. In the second section, a redundancy subsystem is proposed to detect errors occurring in the Huffman code stream output. The third section provides the experimental results and the improvements in terms of image quality and image reconstruction error.

II. OVERVIEW OF JPEG HUFFMAN CODING

To simplify the problem, only the fault-tolerant JPEG Huffman entropy coder implementation for the baseline mode of operation is discussed in this paper. However, the design principle can be applied to the extended systems [1]. Baseline sequential coding is for images with 8-bit samples and uses Huffman coding only; its decoder can store only one AC and one DC Huffman table. Prior to entropy coding, there usually are few nonzero and many zero-valued coefficients. The task of entropy coding is to encode these few coefficients efficiently. The description of baseline sequential entropy coding is given in two steps: conversion of the quantized DCT coefficients into an intermediate sequence of symbols, and assignment of variable-length codes to the symbols. In the intermediate symbol sequence, each nonzero AC coefficient is represented in combination with the runlength of zero-valued AC coefficients which precede it in the zigzag sequence.
Each such runlength/nonzero-coefficient combination is represented by a pair of symbols:

symbol-1: (RUNLENGTH, SIZE)    (1)
symbol-2: (AMPLITUDE)          (2)

Symbol-1 represents two pieces of information, RUNLENGTH and SIZE; symbol-2 represents the single piece of information designated AMPLITUDE, which is simply the amplitude of the nonzero AC coefficient. RUNLENGTH is the number of consecutive zero-valued AC coefficients in the zigzag sequence preceding the nonzero AC coefficient being represented. SIZE is the number of bits used to encode AMPLITUDE. RUNLENGTH represents zero-runs of length 0 to 15. Actual zero-runs in the zigzag sequence can be longer than 15, so the symbol-1 value (15, 0) is interpreted as an extension symbol with runlength 16. There can be up to three consecutive (15, 0) extensions before the terminating symbol-1, whose RUNLENGTH value completes the actual runlength. The terminating symbol-1 is always followed by a single symbol-2, except in the case where the last run of zeros includes the last (63rd) AC coefficient. In this frequent case, the special symbol-1 value (0, 0), meaning EOB (end-of-block), is used to terminate the 8x8 sample block.

A DC difference Huffman code is the concatenation of two codes. The first, d_DC, the so-called difference category, defines the SIZE of the difference value. The second, v_DC, denotes the additional bits of the DC difference.

Let DC denote the DC difference, k denote the SIZE of the DC difference, and B denote the decimal value equivalent of the v_DC code. If B >= 2^(k-1), then DC = B; otherwise, DC = B - 2^k + 1.

The possible range of quantized AC coefficients determines the range of values of both the AMPLITUDE and the SIZE information. If the input data are N-bit integers, then the non-fractional part of the DCT coefficients can grow by at most 3 bits. Baseline sequential coding has 8-bit integer source samples in the range [-2^7, 2^7 - 1], so quantized AC coefficient amplitudes are covered by integers in the range [-2^10, 2^10 - 1]. The signed-integer encoding uses symbol-2 AMPLITUDE codes of 1 to 10 bits in length; hence SIZE represents values from 1 to 10, and RUNLENGTH represents values from 0 to 15, as discussed previously.

Fig. 1. General structure of a baseline JPEG Huffman coder.

Figure 1 shows the general structure of the baseline Huffman encoder for the DC differences and AC coefficients. The AC encoder accepts the intermediate symbols and performs all computations, including generating the addresses for accessing codes from the tables. As a result, the encoder produces the compressed code stream representing the AC coefficients. The input of the encoder is a pair of intermediate symbols generated by the zigzag coder, one pair at a time. If the zero-run is longer than 15, or the EOB (end of block) is reached, the symbol-2 input is omitted: when the zero-run is longer than 15, the symbol-1 value (15, 0) is interpreted as the extension symbol with runlength 16; in the case of EOB, the symbol-1 value is (0, 0).
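The two conversion steps described above can be sketched in Python (a simplified illustration of the stated conventions, not the standard's reference code; `zigzag_ac` is assumed to hold the 63 AC coefficients in zigzag order):

```python
def size_of(v):
    """SIZE: number of bits needed to encode nonzero AMPLITUDE v."""
    return abs(v).bit_length()

def amplitude_code(v):
    """Signed-integer (one's-complement style) amplitude bits:
    positive v is written directly; negative v is offset by 2^SIZE - 1."""
    k = size_of(v)
    B = v if v > 0 else v + (1 << k) - 1
    return format(B, '0{}b'.format(k))

def decode_amplitude(k, bits):
    """Invert amplitude_code: if B >= 2^(k-1) the value is B,
    otherwise B - 2^k + 1."""
    B = int(bits, 2)
    return B if B >= (1 << (k - 1)) else B - (1 << k) + 1

def to_symbols(zigzag_ac):
    """Convert zigzag-ordered AC coefficients into the intermediate
    (RUNLENGTH, SIZE) / AMPLITUDE symbol sequence."""
    # Index of the last nonzero coefficient (-1 if the block is all zeros).
    last = max((i for i, v in enumerate(zigzag_ac) if v != 0), default=-1)
    symbols, run = [], 0
    for v in zigzag_ac[:last + 1]:
        if v == 0:
            run += 1
            continue
        while run > 15:                    # (15, 0): extension, run of 16 zeros
            symbols.append(((15, 0), None))
            run -= 16
        symbols.append(((run, size_of(v)), v))
        run = 0
    if last < len(zigzag_ac) - 1:          # trailing zeros: end-of-block
        symbols.append(((0, 0), None))
    return symbols
```

On an AC sequence beginning 0, -2, -1, -1, -1, 0, 0, -1 followed by zeros, this yields (1,2)(-2), (0,1)(-1), (0,1)(-1), (0,1)(-1), (2,1)(-1), (0,0), the same symbol sequence as in the Fig. 2 example.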
The AC encoder shares the DC difference value table with the DC coder, because the AC encoder uses this table to get the code for the AMPLITUDE symbol. The encoding for the DC coefficients is similar to that of the AC coefficients, except that the input for the DC part contains the DC difference category and the DC difference value. There are 11 DC difference categories (excluding the zero category), and therefore 2^11 DC difference values.

Figure 2 illustrates the procedure for encoding an 8x8 block of DCT coefficients. The top array shows the 8x8 block of DCT coefficients arranged in zigzag order. The second array is the result of converting the block into a sequence of symbols. The last line in Figure 2 shows the output bit stream for the 8x8 block. In the presence of hardware or instruction faults, the code stream output may contain bit errors. The errors may prevent an 8x8 block from being decoded, or degrade the reconstructed image for that block. These errors are due to the encoder itself, and channel

control coding cannot correct them. To increase the encoder's reliability and improve the image quality, a fault-tolerant Huffman encoding system is proposed in the next section.

Fig. 2. Encoding process for a sample 8x8 DCT block.
Previous block DC value: 12. Zigzag sequence: 15, 0, -2, -1, -1, -1, 0, 0, -1, 0, ..., 0 (55 trailing zeros); DC difference: 15 - 12 = 3.
Symbols: (2)(3) (1,2)(-2) (0,1)(-1) (0,1)(-1) (0,1)(-1) (2,1)(-1) (0,0)
Code stream: 110 11 | 11011 01 | 00 0 | 00 0 | 00 0 | 11100 0 | 1010

III. FAULT TOLERANCE MODEL FOR THE BASELINE JPEG ENCODER

Fig. 3. Fault tolerance model for the AC Huffman encoder: a parity table for (RUNLENGTH, SIZE) (I), a table of combined (RUNLENGTH, SIZE) + AMPLITUDE code lengths (II), and a parity table for AMPLITUDE (III), addressed through address decoders, feed a parity generator and comparator that raise an error flag on mismatch with the encoder's code stream output.

The method of inserting error-detecting features into the AC Huffman encoder is shown in Figure 3. Failures are modeled as corrupting one or more variable-length code vectors s1s2 of the output stream; the corrupted binary vectors can be expressed as the correct bits flipped at random positions. The method provides fault detection throughout the data path, so that a momentary failure of any subsystem will not allow contaminated data to go undetected. Error detection for the code stream output is performed by generating parity symbols via a spare subsystem connected in parallel with the main encoder. These parity symbols are ready for checking as soon as the output stream emerges. The goal of this fault tolerance design is to protect against at most one subsystem producing erroneous data. In the context of this paper, error detection of such a situation is the only requirement; no correction methods will be addressed. Let s1 = b1b2...bn1 denote the code of the (RUNLENGTH, SIZE) symbol and s2 = c1c2...cn2 the code of the AMPLITUDE symbol.
To reduce the delay in computing the parity symbols, the parity table for the s1 codes is created and maintained in memory storage throughout the encoding process. Assume the code table for s1 was created using the statistical model of [1] (pp. 510-16). The corresponding parities for s1 are shown in Table I. A parity symbol can be retrieved quickly when a (RUNLENGTH, SIZE) symbol is valid at the address decoder for memory storage (I). The details of how to implement the address decoder are not discussed in this paper; however, based on the (RUNLENGTH, SIZE) table, the address is known to be unique.
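Since each stored parity is just the XOR of a codeword's bits, the parity tables can be pre-computed in one pass over the code tables. A sketch, using a small hypothetical fragment of a (RUNLENGTH, SIZE) Huffman table in place of the full table from [1]:

```python
def parity(bits):
    """Parity of a codeword: the XOR of all its bits."""
    return bits.count('1') & 1

# Hypothetical fragment of the (RUNLENGTH, SIZE) -> codeword table.
s1_codes = {
    (0, 0): '1010',          # EOB
    (0, 1): '00',
    (1, 2): '11011',
    (2, 1): '11100',
    (15, 0): '11111111001',  # ZRL: run-of-16 extension
}

# Table (I): one parity bit per (RUNLENGTH, SIZE) entry.
parity_table_1 = {sym: parity(code) for sym, code in s1_codes.items()}

# Table (III): one parity bit per nonzero AMPLITUDE in [-1023, 1023],
# computed from the one's-complement-style amplitude bits.
def amplitude_parity(v):
    k = abs(v).bit_length()
    B = v if v > 0 else v + (1 << k) - 1
    return bin(B).count('1') & 1

parity_table_3 = {v: amplitude_parity(v) for v in range(-1023, 1024) if v != 0}
```

The amplitude parities computed this way agree with the rows of Fig. 4 (for example, -4 gives parity 0, -2 gives 1, and 7 gives 1).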

TABLE I. Parity symbols of the (RUNLENGTH, SIZE) codes: one pre-computed parity bit for each (RUNLENGTH, SIZE) entry, RUNLENGTH 0 to 15 by SIZE 1 to 10. (Each bit is the XOR of the bits of the corresponding Huffman codeword; the bit values are omitted here.)

The memory storage (II) shares the same address decoder with storage (I), because memory storage (II) contains the lengths of the concatenated codes s1s2. Both have the same number of items, in a one-to-one relationship. Once the (RUNLENGTH, SIZE) symbol is known, the length of s1s2 is determined by

length(s1s2) = length(s1) + length(s2) = length(s1) + SIZE    (3)

In this context, SIZE represents the code length of the nonzero AC coefficient. The length parameter is used to determine the parity check points for the code stream output. As the code stream for an 8x8 block comes out continuously, it is virtually partitioned into segments following a certain rule; the length memory storage is used to determine the endpoints of these variable-length codes.

Storage (III) holds the parities of the s2 codes. There are 2047 different codes for s2, representing the AC values in the range [-1023, 1023] [1] (pg. 445). The s2 code is computed similarly to the v_DC code [1] (pg. 191). The computed parities for the s2 codes are stored in storage (III). Figure 4 shows the construction of the s2 parities. The zero AC value is unused in this construction because of the zigzag encoding; the memory cell at address 1023 is therefore empty, but its existence saves some operations in the address decoder.

AC value:    -1023 ... -7 -6 -5 -4 -3 -2 -1  x  1  2  3  4  5  6  7 ... 1023
Code length:   10  ...  3  3  3  3  2  2  1  x  1  2  2  3  3  3  3 ...  10
Code value:     0  ...  0  1  2  3  0  1  0  x  1  2  3  4  5  6  7 ... 1023
Parity bit:     0  ...  0  1  1  0  0  1  0  x  1  1  0  1  0  0  1 ...   0

Fig. 4. The parity bit structure of the nonzero AC coefficients. These pre-computed parity bits are stored in memory and retrieved uniquely by the AMPLITUDE symbol.

The parity of the code stream representing a (RUNLENGTH, SIZE) + AMPLITUDE symbol pair is determined by combining the parities retrieved from memory storages (I) and (III). This parity is compared with the parity of the code stream generated by the Huffman encoder; the check point is determined by the length parameter retrieved from memory storage (II). The AC Huffman encoder computes the code stream for the (RUNLENGTH, SIZE) + AMPLITUDE pair using the Huffman table and the built-in arithmetic

processor. The operation principle of the AC Huffman encoder is known from the JPEG compression standard. Let s1 = b1b2...bn1 be the code of a (RUNLENGTH, SIZE) symbol and s2 = c1c2...cn2 be the code of an AMPLITUDE symbol. The parity symbols of these codes are pre-computed and stored in memories (I) and (III):

p1 = b1 XOR b2 XOR ... XOR bn1    (4a)
p2 = c1 XOR c2 XOR ... XOR cn2    (5a)

where 2 <= n1 <= 16 and 1 <= n2 <= 10; thus five bits are required to store the total length n (n = n1 + n2). The overall parity of the sequence b1b2...bn1c1c2...cn2 is p = p1 XOR p2.

Consider a specific pair of symbols (RUNLENGTH, SIZE) and AMPLITUDE resulting from the zero-runlength coding of a zigzag sequence. As soon as this symbol pair is available at the input of the Huffman encoder, the parity bit p and the length n are retrieved from the memory storages, ready for parity checking. If the encoding is error free, the parity p' of the code stream b'1b'2...b'n1c'1c'2...c'n2 generated by the encoder must be identical to the stored parity p. As this code stream is sequentially shifted out, its parity bit is computed dynamically. The control signal uses the length n to set the comparison point for the code stream output, where the comparison takes place; the error flag is active only at these check points. The structure of the error checking for the output bit stream is shown in Figure 5.

Fig. 5. The logic structure of the parity checking for the code stream output: parities p1 and p2 from tables (I) and (III) form p = p1 XOR p2, while the length of s1s2 from table (II) drives the timing signal generator that gates the error flag.

The stored parity p is referenced only at the n-th bit of the sequence b'1b'2...b'n1c'1c'2...c'n2. The running parity p' of this sequence is generated as

p'_1 = b'1                        (6a)
p'_2 = p'_1 XOR b'2               (6b)
...                               (6c)
p'_n1 = p'_(n1-1) XOR b'n1        (6d)
p'_(n1+1) = p'_n1 XOR c'1         (6e)
...                               (6f)
p' = p'_(n-1) XOR c'n2            (6g)
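The recursion (6a)-(6g) simply XOR-accumulates the output bits and samples the accumulator at bit n. A minimal sketch of the checker, where `stream`, `n`, and `p_stored` stand for the encoder output, the length from table (II), and the pre-computed parity:

```python
def parity_ok(stream, n, p_stored):
    """Accumulate the running parity p'_i = p'_(i-1) XOR bit_i over the
    first n output bits and compare it with the stored parity at bit n."""
    p = 0
    for bit in stream[:n]:
        p ^= int(bit)
    return p == p_stored   # False corresponds to raising the error flag
```

For the pair (1,2)(-2) of the Fig. 2 example, s1s2 = 1101101, n = 7, and the stored parity is 1; flipping any single bit of the stream flips the running parity, so the error is detected at the check point.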

When the length of the code stream matches the pre-stored length, the comparison is conducted. Assuming the synchronization is correct, the two parity bits will match if no error has occurred in the encoding process. Otherwise, the error flag is turned on and a re-encode repeat request is sent to the control center.

IV. EXPERIMENTAL RESULTS

The protection designed for the Huffman encoder was implemented in software. The computer program performed the image reading, color component transformation, DCT, and Huffman coding processes for both encoding and decoding. Error detection was implemented only on the encoder side. The error source corrupted the system by changing data bits of the code stream at random locations; the error rate varies over a typical range from 10^-4 to 10^-1. Since error correction is not implemented in this paper, the alternative used to improve image quality in the presence of errors is repeat encoding of the corrupted blocks. The error correction performance is evaluated by the Mean Square Error (MSE) between the reconstructed and the original images.

Figure 6 shows the quality performance curves of the Huffman encoder with and without error correction. The dashed and solid curves present the reconstruction performance (1/MSE) with and without the repeat encoding request, respectively, versus the injected error rate. As a matter of fact, the fidelity is much improved when the error correction system is implemented.

Fig. 6. Quality performance curves (1/MSE versus error rate, 10^-4 to 10^-1) with and without repeat encoding request.

Figure 7 shows reconstructed color images exhibiting the effects of corrupted blocks.
Four particular images with coding errors at error rates 10^-4 (top-left), 10^-3 (top-right), 5x10^-3 (bottom-left), and 3x10^-2 (bottom-right) are reconstructed. Each color square is the result of discarding a bad block. In contrast with the above results, Figure 8 shows the reconstructed images at the same error rates, but with the repeat encoding request: each corrupted block is discarded and replaced by a re-encoded block. This method reduces the number of error spots at the cost of delay.
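The experiment can be mimicked with a toy simulation (a sketch only, not the authors' program; `encode` and `check` are hypothetical stand-ins for the encoder and the parity checker): inject bit flips at a given rate, re-encode flagged blocks, and score the result by MSE.

```python
import random

def inject_errors(bits, rate, rng):
    """Flip each stream bit independently with probability `rate`."""
    return ''.join('10'[int(b)] if rng.random() < rate else b for b in bits)

def encode_with_retry(block, encode, check, max_tries=3):
    """Encode a block; if the parity check flags an error, request a
    repeat encoding. Returns None when the block must be discarded."""
    for _ in range(max_tries):
        stream = encode(block)
        if check(stream):
            return stream
    return None

def mse(original, reconstructed):
    """Mean squared error between two equal-length sample sequences."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
```

A block whose first encoding is corrupted but whose re-encoding passes the check is kept, which is exactly the trade of fewer error spots for extra encoding delay described above.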

Fig. 7. Reconstructed images without repeat encoding request (error rates 0.0001, 0.001, 0.005, and 0.03).

Fig. 8. Reconstructed images with repeat encoding request (error rates 0.0001, 0.001, 0.005, and 0.03).

V. CONCLUSIONS

In this paper, error detection and correction for JPEG Huffman entropy coding was implemented successfully in software. The computer program takes color input images and performs the color transform, DCT, scalar quantization, and Huffman coding. Although the implementation only detects a single error in a codeword, the design principle can be expanded to more complicated problems. In future research, error control coding techniques will be employed to generate more sophisticated methods for detecting and correcting different types of errors. The simulation results show that when the repeat encoding process is performed on the corrupted parts of the stream, a significant visual quality improvement is perceived in the decoded image.

REFERENCES

[1] W. Pennebaker and J. Mitchell, JPEG Still Image Data Compression Standard. New York, NY: International Thomson Publishing, 1993.
[2] D. Huffman, "A method for the construction of minimum-redundancy codes," Proc. IRE, vol. 40, pp. 1098-1101, 1952.
[3] A. K. Jain, Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice Hall Information and System Sciences Series, 1989.
[4] D. Salomon, Data Compression: The Complete Reference. New York: Springer-Verlag, 1998.
[5] N. Demir and K. Sayood, "Joint source/channel coding for variable length codes," in Proceedings Data Compression Conference (DCC '98), Snowbird, UT, pp. 139-148, 1998.
[6] G. Elmasry, "Arithmetic coding algorithm with embedded channel coding," Electronics Letters, vol. 33, pp. 1687-1688, Sep. 1997.
[7] Y. Wang and Q. F. Zhu, "Error control and concealment for video communication: A review," Proc. IEEE, vol. 86, pp. 974-997, May 1998.
[8] M. Lovellette, K. Wood, D. Wood, J. Beall, P. Shirvani, N. Oh, and E. McCluskey, "Strategies for fault-tolerant, space-based computing: Lessons learned from the ARGOS testbed," in Aerospace Conference Proceedings, 2002, vol. 5, pp. 2109-2119.