Fundamentals of Image Compression
- Lisa Bishop
Image Compression
- Image compression reduces the size of an image data file while retaining the necessary information.
- Pipeline: original uncompressed image -> compression (encoding) -> compressed file -> decompression (decoding) -> decompressed image.

Fundamentals of Image Compression
- Compression ratio: the ratio of the size of the original, uncompressed image file to the size of the compressed file:

  Compression Ratio = SIZE_U / SIZE_C = (uncompressed file size) / (compressed file size)
Image Compression (cont.)
- Bits per pixel (bpp) denotes the number of bits required to represent an image pixel:

  bpp = (number of bits) / (number of pixels) = 8 * (number of bytes) / (N * N)

- Data: the pixel gray-level values that correspond to the brightness of a pixel at a point in space.
- Information: an interpretation of the data in a meaningful way.

Image Compression Techniques
- Information preserving (lossless compression): no data are lost, so the original image can be recreated exactly from the compressed data. Compression ratios are typically 2:1 to 3:1. Useful for image archiving (such as legal or medical images).
- Information lossy (lossy compression): allows a loss in the actual image data, so the original image cannot be recovered exactly from the compressed data. Compression ratios are typically 10:1 to 20:1 or more. Useful for broadcast television, video conferencing, facsimile transmission, etc.
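The compression ratio and bpp definitions above are simple enough to sketch directly; the function names here are illustrative, not from the slides:

```python
def compression_ratio(uncompressed_bytes, compressed_bytes):
    """Compression Ratio = SIZE_U / SIZE_C."""
    return uncompressed_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes, n_rows, n_cols):
    """bpp = 8 * (number of bytes) / (number of pixels)."""
    return 8 * compressed_bytes / (n_rows * n_cols)

# A 256x256 8-bit image (65536 bytes) compressed to a 16384-byte file:
ratio = compression_ratio(65536, 16384)   # 4.0, i.e. a 4:1 ratio
bpp = bits_per_pixel(16384, 256, 256)     # 2.0 bits per pixel
```

A 4:1 ratio at 2 bpp sits between the typical lossless (2:1 to 3:1) and lossy (10:1 or more) regimes quoted above.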
Image/Video Compression Techniques
- Predictive coding: delta modulation, DPCM, ME/MC.
- Transform coding: DCT, KL transform, subband, wavelet.
- Importance-oriented coding: bit allocation, sub-sampling, quantization (scalar and vector).
- Hybrid coding: JPEG, H.26x, MPEG-x.

How To Do Image Compression
- Basis: redundancy exists in image data and can be removed.
- Types of image data redundancy:
  - Coding redundancy (statistical redundancy): removed by variable-length codes (RLC, Huffman coding, arithmetic coding, LZW).
  - Interpixel redundancy (spatial redundancy): removed by a mapper (DPCM, DCT, subband, wavelet).
  - Psychovisual redundancy (perceptual redundancy): removed by quantization (SQ, VQ, fractal).
  - Temporal redundancy (interframe redundancy, for video): removed by DPCM, MC/ME.
Interpixel Redundancy
- The gray levels of adjacent pixels in normal images are highly correlated, resulting in interpixel redundancy.
- The gray level of a given pixel can be predicted from its neighbors, and the difference is used to represent the image; this type of transformation is called mapping.
- Run-length coding can also be employed to exploit interpixel redundancy in image compression.
- Removing interpixel redundancy is lossless.

Interpixel Redundancy (cont.)
- In run-length coding, each run R_i is represented by a pair (g_i, r_i), where g_i is the gray level of R_i and r_i is the length of R_i.
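A minimal sketch of run-length coding with the (g_i, r_i) pairs described above (pure Python; rows are assumed to be lists of gray-level integers):

```python
def run_length_encode(row):
    """Encode a 1-D sequence of gray levels as (g_i, r_i) pairs."""
    runs = []
    for g in row:
        if runs and runs[-1][0] == g:
            runs[-1] = (g, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((g, 1))               # start a new run
    return runs

def run_length_decode(runs):
    """Expand the (gray level, run length) pairs back into pixels."""
    return [g for g, r in runs for _ in range(r)]

row = [7, 7, 7, 7, 0, 0, 255, 255, 255]
encoded = run_length_encode(row)            # [(7, 4), (0, 2), (255, 3)]
assert run_length_decode(encoded) == row    # lossless, as the slide states
```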
Psychovisual Redundancy
- Certain information is more important to the human visual system than other information.
- Humans perceive spatial frequencies below about 50 cycles per degree, so higher-frequency information is less important.
- Requantization using fewer bits can eliminate psychovisual redundancy, but false contours will appear.
- For color images, color subsampling can be used to reduce the size of the color components, since human eyes are less sensitive to variations in color than to variations in light intensity.
- Removing psychovisual redundancy is information lossy.

Requantization
- Requantization: the process of limiting the value of a function at any sample to one of a predetermined number of permissible values.
- A common use, gray-scale reduction, is to reduce the number of bits per pixel (bpp).
- False contouring: as the number of gray levels decreases, false edges or lines appear in the image.
- The false contouring effect can be visually improved by IGS (Improved Gray-Scale) quantization: adding a small random number to each pixel before quantization.
Requantization (cont.)
- (Figure: the same image requantized at 8, 7, 6, 5, 4, 3, 2, and 1 bits per pixel.)

Requantization (cont.)
- Types of quantizer:
  - Scalar quantizers: uniform scalar quantizer, non-uniform scalar quantizer, the Lloyd-Max quantizer, entropy-constrained quantizer.
  - Vector quantizer.
Uniform Scalar Quantizer
- The uniform scalar quantizer is the simplest quantizer: mask off the lower bits via an AND operation.
- Example: reduce an 8-bit pixel (256 levels) to a 3-bit pixel (8 levels) by ANDing with the bit string 11100000.

Uniform Scalar Quantizer

  Input range   Output code   Reconstruction (truncation)   Reconstruction (midpoint)
  0-31          000           0                             16
  32-63         001           32                            48
  64-95         010           64                            80
  96-127        011           96                            112
  128-159       100           128                           144
  160-191       101           160                           176
  192-223       110           192                           208
  224-255       111           224                           240

  (The slide compared the MSE of the two reconstruction choices; the midpoint gives the lower error.)
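The masking example above, plus the table's midpoint reconstruction, can be sketched as follows (helper names are illustrative):

```python
def uniform_quantize(pixel, bits_kept=3):
    """Keep the top `bits_kept` bits of an 8-bit pixel by AND-masking,
    e.g. bits_kept=3 uses the mask 0b11100000 from the slide."""
    mask = (0xFF >> (8 - bits_kept)) << (8 - bits_kept)
    return pixel & mask

def reconstruct(quantized, bits_kept=3):
    """Map the quantized value to the midpoint of its bin
    (e.g. the 0-31 bin reconstructs to 16)."""
    half_bin = 1 << (8 - bits_kept - 1)
    return quantized + half_bin

q = uniform_quantize(200)      # 200 = 0b11001000 -> 0b11000000 = 192
assert q == 192
assert reconstruct(q) == 208   # midpoint of the 192-223 bin, as in the table
```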
IGS Quantization
- IGS quantization procedure:
  1. Set the initial SUM = 0000 0000.
  2. If the most significant 4 bits of the current pixel A are 1111, new_SUM = A; else new_SUM = A + (least significant 4 bits of the old SUM). The IGS code is the most significant 4 bits of new_SUM.
- (Table: example pixels i-1 through i+3 with their gray levels, running SUM, and resulting IGS codes; the numeric entries were lost in transcription.)

IGS Quantization (cont.)
- (Figure: requantized images with and without IGS.)
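A sketch of the IGS procedure above, assuming 8-bit pixels reduced to 4-bit codes; the function name and list-based interface are illustrative:

```python
def igs_quantize(pixels, bits_kept=4):
    """Improved Gray-Scale quantization: carry the discarded low bits of
    the previous SUM into the next pixel before truncation, except when
    the pixel's high bits are already all ones (to avoid overflow)."""
    low_mask = (1 << (8 - bits_kept)) - 1      # e.g. 0b00001111
    high_mask = 0xFF ^ low_mask                # e.g. 0b11110000
    total = 0                                  # initial SUM = 0000 0000
    codes = []
    for a in pixels:
        if (a & high_mask) == high_mask:       # high bits all 1s: no carry
            total = a
        else:
            total = a + (total & low_mask)     # add low bits of old SUM
        codes.append(total >> (8 - bits_kept)) # IGS code = high bits of SUM
    return codes

codes = igs_quantize([108, 139, 156, 226, 255])
```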
Non-uniform Scalar Quantizer
- Idea: the quantization levels are finely spaced in the range where gray levels occur frequently, and coarsely spaced outside of it.

The Lloyd-Max Quantizer
- Optimal quantization: a quantization function can be regarded as a staircase, in which the s_i are called decision levels and the t_i reconstruction levels.
- s is mapped to t_i if it lies in the half-open interval (s_{i-1}, s_i].
The Lloyd-Max Quantizer (cont.)
- (Figure: the staircase quantization function with decision levels s_i and reconstruction levels t_i.)

The Lloyd-Max Quantizer (cont.)
- How to select s_i and t_i? The Lloyd-Max quantizer: assuming the criterion is MSE, E{(s - t_i)^2}, and p(s) is even, the conditions for minimal error are

  integral from s_{i-1} to s_i of (s - t_i) p(s) ds = 0,   i = 1, 2, ..., L/2

  s_i = 0                    for i = 0
  s_i = (t_i + t_{i+1}) / 2  for i = 1, 2, ..., L/2 - 1
  s_i = infinity             for i = L/2

  s_{-i} = -s_i,  t_{-i} = -t_i

- Each t_i is the centroid of the area under p(s) between the two decision levels s_{i-1} and s_i.
- Each s_i is halfway between the two reconstruction levels t_i and t_{i+1}.
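The two conditions above (centroid reconstruction levels, midpoint decision levels) can also be run as a fixed-point iteration on empirical samples, the Lloyd algorithm; this sketch assumes scalar data and a fixed iteration count rather than a convergence test:

```python
def lloyd_max(samples, levels, iters=50):
    """1-D Lloyd iteration: decision levels are midpoints between
    adjacent reconstruction levels; reconstruction levels are the
    centroids (here, sample means) of their decision regions."""
    lo, hi = min(samples), max(samples)
    # Start from evenly spaced reconstruction levels.
    t = [lo + (hi - lo) * (i + 0.5) / levels for i in range(levels)]
    for _ in range(iters):
        # Decision levels: halfway between adjacent t_i (s_i condition).
        s = [(t[i] + t[i + 1]) / 2 for i in range(levels - 1)]
        # Assign each sample to its region, then recompute centroids.
        buckets = [[] for _ in range(levels)]
        for x in samples:
            buckets[sum(x > b for b in s)].append(x)
        t = [sum(b) / len(b) if b else t[i] for i, b in enumerate(buckets)]
    return s, t

# Two well-separated clusters: the quantizer finds their means.
s, t = lloyd_max([0, 1, 2, 3, 10, 11, 12, 13], levels=2)
assert t == [1.5, 11.5] and s == [6.5]
```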
Entropy-Constrained Quantizers
- Aim at transmitting the image with as few bits as possible, rather than looking for the quantizer that provides the lowest distortion for a given number of output levels.
- An entropy-constrained quantizer (ECQ) minimizes distortion for a given entropy measured at the quantizer output; the quantizer output is variable-length encoded.
- In image compression, a uniform quantizer followed by variable-length coding produces results close to the optimum ECQ.

Color Quantization
- (Figure: an image with 16,777,216 colors reduced to 256 colors.)
Color Subsampling
- (Figure: color subsampling and interpolation of the color components.)

Fidelity Criteria
- Fidelity criteria are used to measure information loss. Two classes:
  - Objective fidelity criteria: the amount of error in the reconstructed data is measured mathematically.
  - Subjective fidelity criteria: measured by human observation.
Fidelity Criteria (cont.)
- Subjective fidelity criteria:
  - Impairment test (how bad the images are): 5 imperceptible, 4 perceptible but not annoying, 3 somewhat annoying, 2 severely annoying, 1 unusable.
  - Quality test (how good the images are): A excellent, B good, C fair, D poor, E bad.
  - Comparison test: +2 much better, +1 better, 0 the same, -1 worse, -2 much worse.

Fidelity Criteria (cont.)
- Objective criteria: root-mean-square (RMS) error:

  e_RMS = sqrt( (1/MN) * sum over r=0..M-1, c=0..N-1 of [I_hat(r,c) - I(r,c)]^2 )
Fidelity Criteria (cont.)
- Objective criteria (cont.): root-mean-square signal-to-noise ratio:

  SNR_RMS = sqrt( sum over r,c of I_hat(r,c)^2 / sum over r,c of [I_hat(r,c) - I(r,c)]^2 )

Fidelity Criteria (cont.)
- Objective criteria (cont.): peak signal-to-noise ratio:

  SNR_PEAK = 10 log10( (L-1)^2 / ( (1/MN) * sum over r,c of [I_hat(r,c) - I(r,c)]^2 ) )

  where L is the number of gray levels, so L - 1 is the maximum sample value.
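The e_RMS and SNR_PEAK formulas above translate directly; images are assumed to be lists of rows of numbers, and the helper names are illustrative:

```python
import math

def rms_error(orig, recon):
    """e_RMS: square root of the mean squared pixel difference."""
    n = len(orig) * len(orig[0])
    sq = sum((r - o) ** 2
             for row_o, row_r in zip(orig, recon)
             for o, r in zip(row_o, row_r))
    return math.sqrt(sq / n)

def psnr(orig, recon, max_value=255):
    """SNR_PEAK = 10 log10( max_value^2 / mean squared error ), in dB,
    with max_value = L - 1 (255 for 8-bit images)."""
    mse = rms_error(orig, recon) ** 2
    return float("inf") if mse == 0 else 10 * math.log10(max_value ** 2 / mse)

a = [[10, 20], [30, 40]]
b = [[10, 22], [30, 40]]           # one pixel off by 2
assert rms_error(a, b) == 1.0      # sqrt(4 / 4)
```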
Image Compression Model
- Compression (encoding): I(r,c) -> preprocessing -> encoder -> compressed file.
- Decompression (decoding): compressed file -> decoder -> postprocessing -> I_hat(r,c).
- The encoder removes input redundancies (interpixel, psychovisual, and coding redundancy).

Encoder and Decoder
- Encoder: I(r,c) -> mapper -> quantizer -> symbol encoder -> compressed file.
- Decoder: compressed file -> symbol decoder -> inverse mapper -> I_hat(r,c).
- Mapper: reduces interpixel redundancy; reversible (predictive coding, run-length coding, transform coding).
- Quantizer: reduces psychovisual redundancy; irreversible (uniform quantization, non-uniform quantization).
- Symbol encoder: reduces coding redundancy; reversible (Huffman coding, arithmetic coding, LZW).
Predictive Coding

Predictive Coding
- (Block diagram: the input and the predictor output are differenced, then passed through quantization and bit allocation; the predictor sits in a feedback loop.)
Predictive Coding (cont.)
- Encoder: the input I_n is predicted as I_tilde_n; the prediction error e_n = I_n - I_tilde_n is quantized to e_hat_n and passed to the symbol encoder, producing the compressed data. The reconstruction I_hat_n = I_tilde_n + e_hat_n feeds the predictor.
- Decoder: the symbol decoder recovers e_hat_n, and the decompressed value is I_hat_n = I_tilde_n + e_hat_n, using the same predictor as the encoder.

Predictive Coding (cont.)
- In general, the prediction is formed by a linear combination of m previous samples:

  I_hat_n = round( sum for i = 1..m of alpha_i * I_{n-i} )

  where m is the order of the linear predictor, round denotes the rounding or nearest-integer operation, and the alpha_i are prediction coefficients.
Predictive Coding (cont.)
- For 2-D images, the prediction is a function of the previous pixels in a left-to-right, top-to-bottom scan of the image: I(r-1,c-1), I(r-1,c), I(r-1,c+1), and I(r,c-1) for the current pixel I(r,c).

Predictor (cont.)
- Examples:
  - First-order predictor: I_hat(r,c) = I(r,c-1)
  - Second-order predictor: I_hat(r,c) = 1/2 [I(r,c-1) + I(r-1,c)]
  - Third-order predictors:
    I_hat(r,c) = I(r,c-1) - I(r-1,c-1) + I(r-1,c)
    I_hat(r,c) = 0.75 I(r,c-1) - 0.5 I(r-1,c-1) + 0.75 I(r-1,c)
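The example predictors above, gathered into one helper (`img` is assumed to be a list of rows; the round comes from the general linear-predictor formula):

```python
def predict(img, r, c, order=3):
    """Causal predictors from the examples above.
    order 1: I(r, c-1)
    order 2: (I(r, c-1) + I(r-1, c)) / 2, rounded
    order 3: I(r, c-1) - I(r-1, c-1) + I(r-1, c)"""
    if order == 1:
        return img[r][c - 1]
    if order == 2:
        return round((img[r][c - 1] + img[r - 1][c]) / 2)
    return img[r][c - 1] - img[r - 1][c - 1] + img[r - 1][c]

img = [[10, 12, 14],
       [11, 13, 15]]
assert predict(img, 1, 1, order=1) == 11
assert predict(img, 1, 1, order=2) == 12   # round((11 + 12) / 2)
assert predict(img, 1, 1, order=3) == 13   # 11 - 10 + 12, the true pixel
```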
Predictive Coding (cont.)
- In the 3-D case, the prediction is based on the above pixels and on the previous pixels of preceding frames.

Differential Pulse Code Modulation (DPCM)
- Differential pulse code modulation (DPCM): a form of predictive coding suitable for use with Markov sources.
- Uses the fact that the intensity value of a pixel is likely to be similar to that of the surrounding pixels, eliminating interpixel redundancies.
- Instead of coding an intensity value, we predict the value from the values of nearby pixels and code the difference between the actual and predicted values; there is a high probability that the difference will be small.
DPCM (cont.)
- The predictive coding system consists of an encoder and a decoder, each containing an identical predictor.
- The predictor transforms a set of high-entropy but correlated values into a set of low-entropy, less correlated values; the amount of information to code is reduced.
- Optimal predictor: we minimize the variance of the prediction error e = x - x_hat, i.e.

  sigma_e^2 = E{(x - x_hat)^2},  where x_hat = sum for i = 0..m-1 of alpha_i * x_i

DPCM (cont.)
- To find the minimum, we take the partial derivative with respect to each alpha_i and set it to zero:

  d/d(alpha_i) E[(x - x_hat)^2]
      = d/d(alpha_i) E{[x - (alpha_0 x_0 + alpha_1 x_1 + ... + alpha_{m-1} x_{m-1})]^2}
      = -2 E[(x - x_hat) x_i] = 0,   i = 0, 1, ..., m-1      (1)
DPCM (cont.)
- Define the autocorrelations R_ij = E(x_i x_j) and R_i = E(x x_i). From (1):

  E{x x_i} = E{x_hat x_i} = E{(alpha_0 x_0 + alpha_1 x_1 + ... + alpha_{m-1} x_{m-1}) x_i}

  R_i = alpha_0 R_0i + alpha_1 R_1i + ... + alpha_{m-1} R_(m-1)i,   i = 0, 1, ..., m-1     (2)

- The optimal coefficients alpha_i, i = 0, 1, ..., m-1, can be obtained by solving the m equations in (2).

DPCM (cont.)
- If x_hat is obtained by the predictor that uses the alpha_i thus obtained, then

  sigma_e^2 = E{(x - x_hat)^2} = E{(x - x_hat) x} - E{(x - x_hat) x_hat}

  But E{(x - x_hat) x_hat} = 0 from Eq. (1), so

  sigma_e^2 = E{(x - x_hat) x} = E[x^2] - E[x_hat x]
            = R - (alpha_0 R_0 + alpha_1 R_1 + ... + alpha_{m-1} R_{m-1})     (3)

  where R = E[x^2] can be considered the variance of the original signal, and sigma_e^2 the variance of the error signal.
DPCM (cont.)
- Eq. (3) verifies that the variance of the error signal is smaller than that of the original signal. Thus DPCM is worthwhile!

DPCM (cont.)
- (Figure: DPCM example results.)
Adaptive Predictor
- Adaptive predictor: several predictors are predefined; select the best one according to the characteristics of the data.
- One approach: try out a number of predictors, determine which is best, and tell the decoder which predictor was used. Considering blocks of m pixels and choosing among n predictors, the overhead is log2(n) bits per block, or (log2 n)/m bpp.
- Another approach: determine the choice of predictor from the data values themselves:

  I_hat(r,c) = min[I(r,c-1), I(r-1,c)]              if I(r-1,c-1) >= max[I(r,c-1), I(r-1,c)]
             = max[I(r,c-1), I(r-1,c)]              if I(r-1,c-1) <= min[I(r,c-1), I(r-1,c)]
             = I(r-1,c) + I(r,c-1) - I(r-1,c-1)     otherwise

Results (cont.)
- (Figure: DPCM at 1.0 bpp.)
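The three-case rule above is a median edge detector (MED) style predictor; a direct sketch, with the three causal neighbours passed in explicitly:

```python
def med_predict(left, above, upper_left):
    """Adaptive prediction: pick min/max of the two neighbours when the
    upper-left pixel suggests an edge, otherwise the planar prediction
    left + above - upper_left."""
    if upper_left >= max(left, above):
        return min(left, above)
    if upper_left <= min(left, above):
        return max(left, above)
    return left + above - upper_left

assert med_predict(10, 20, 25) == 10   # upper-left above both neighbours
assert med_predict(10, 20, 5) == 20    # upper-left below both neighbours
assert med_predict(10, 20, 15) == 15   # smooth region: 10 + 20 - 15
```

No side information is needed here: the decoder sees the same causal neighbours and makes the same choice, so the per-block overhead of the first approach disappears.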
Results (cont.)
- (Figures: DPCM at 2.0 bpp and at 3.0 bpp.)
Results (cont.)
- (Figure: DPCM at 2.0 bpp.)

Conclusions
- Lossless JPEG prediction. Relative to the current pixel X, A is the left neighbor, and the previous row holds C (upper-left), B (above), and D (upper-right). Selection values:

  0: no prediction
  1: A
  2: B
  3: C
  4: A + B - C
  5: A + (B - C)/2
  6: B + (A - C)/2
  7: (A + B)/2
Conclusions (cont.)
- Post-processor: JPEG applies predictive coding to the transformed DC components; MPEG-4 applies it to the transformed DC and AC coefficients.

Entropy Coding
Entropy Coding
- Information theory
- Huffman coding
- Arithmetic coding

Information Theory
- Information theory provides the mathematical framework to determine the minimum amount of data sufficient to describe an image completely without loss of information.
- An event E that occurs with probability P(E) contains

  I(E) = log(1/P(E)) = -log P(E)

  units of information; I(E) is called the self-information of E.
Information Theory (cont.)
- I(E) is a measure of uncertainty/unpredictability and is inversely related to P(E).
- If P(E) = 1, then I(E) = 0: no information (no surprise) is conveyed.
- If the probabilities are all very different, then when a symbol with a low probability arrives you feel more surprised and get more information than when a symbol with a higher probability arrives.
- If the base-r logarithm is used, the measurement is said to be in r-ary units; if r = 2, the resulting unit is called a bit.

Information Theory (cont.)
- If we get I(s_i) units of information when we receive the symbol s_i, how much do we get on average?
- For a source alphabet S of n symbols {s_1, s_2, ..., s_n} with symbol probabilities {P(s_1), P(s_2), ..., P(s_n)}, define

  H(S) = sum for i = 1..n of p(s_i) I(s_i) = -sum for i = 1..n of p(s_i) log2 p(s_i)

- H(S) is called the uncertainty or entropy of the source.
- The entropy defines the average amount of information obtained by observing a single source output.
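The entropy definition above, in base 2 so the result is in bits per symbol:

```python
import math

def entropy(probs):
    """H(S) = -sum p_i log2 p_i, in bits per symbol; zero-probability
    symbols contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

assert entropy([0.5, 0.5]) == 1.0     # fair coin: 1 bit per symbol
assert entropy([0.25] * 4) == 2.0     # 4 equiprobable symbols: 2 bits
assert entropy([1.0]) == 0.0          # certain outcome: no information
```

The equiprobable case gives the maximum: any skew in the probabilities lowers H(S) below log2(n).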
Information Theory (cont.)
- If the source symbols are equally probable, the entropy is maximized and the source provides the greatest possible average information per source symbol.
- (Figure: the binary entropy function H(P) versus P, peaking at P = 0.5.)

Information Theory (cont.)
- (Table: a worked example computing I(s_i) and H(S) for several symbol distributions over s_1..s_4; the numeric entries were lost in transcription.)
Information Theory (cont.)
- Instantaneous codes: a code is said to be instantaneous if, when a complete symbol is received, the receiver immediately knows it and does not have to look further before deciding which message symbol was received.
- No encoded symbol of such a code is a prefix of any other symbol.

Information Theory (cont.)
- Example:

  instantaneous:      s1 = 0,  s2 = 10,  s3 = 110,  s4 = 111
  non-instantaneous:  s1 = 0,  s2 = 01,  s3 = 011,  s4 = 111

- In the instantaneous code, no codeword s_i is a prefix of another codeword s_j.
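The prefix condition above is easy to check mechanically; this sketch applies it to the slide's two example codes:

```python
def is_instantaneous(codewords):
    """A code is instantaneous (prefix-free) iff no codeword is a
    prefix of another codeword."""
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

assert is_instantaneous(["0", "10", "110", "111"])        # slide example
assert not is_instantaneous(["0", "01", "011", "111"])    # "0" prefixes "01"
```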
Information Theory (cont.)
- Shannon's noiseless source coding theorem: for a uniquely decodable code, the average number of bits per symbol used by the code must be at least equal to the entropy of the source:

  H(S) <= L_avg = sum for i = 1..q of p_i * l_i

- The entropy H(S) provides the minimum amount of data sufficient to describe the data without loss of information.

Information Theory (cont.)
- (Table: two codes for the same source s_1..s_4, with codeword lengths and the resulting average bits per symbol; the numeric entries were lost in transcription.)
Variable-Length Coding
- Idea: assign the shortest possible codewords to the most probable symbols.
- The source symbols may be the gray levels of an image or the output of a gray-level mapping operation (pixel differences, run lengths, etc.).
- Two widely used variable-length coding methods:
  - Huffman coding: generates an optimal code in terms of the number of code symbols, subject to the constraint that the source symbols are coded one at a time. Adopted in JPEG and MPEG.
  - Arithmetic coding: an entire sequence of symbols is assigned a single arithmetic codeword.

Huffman Coding
- Encoding steps:
  - Initialization: given the probability distribution of the source symbols.
  - Step 1 (source reduction): order the probabilities of the source symbols; combine the two lowest-probability symbols into a single symbol that replaces them in the next source reduction.
  - Step 2 (codeword construction): code each reduced source, starting with the smallest source and working back to the original source.
Huffman Coding (cont.)
- Source reduction process for a source with probabilities {0.4, 0.3, 0.1, 0.1, 0.06, 0.04}:

  H(z) = -sum for i = 1..n of P(a_i) log2 P(a_i)
       = -[0.4 log2(0.4) + 0.3 log2(0.3) + 2 * 0.1 log2(0.1) + 0.06 log2(0.06) + 0.04 log2(0.04)]
       = 2.14 bits/symbol

Huffman Coding (cont.)
- Codeword construction process:

  L_avg = sum over k of l(a_k) p(a_k)
        = 1(0.4) + 2(0.3) + 3(0.1) + 4(0.1) + 5(0.06) + 5(0.04)
        = 2.2 bits/symbol
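A sketch of the two-step procedure above using a min-heap for the source reductions; tie-breaking among equal probabilities is arbitrary (here, by insertion order), which can change individual codeword lengths but not the optimal average length:

```python
import heapq

def huffman_code(probs):
    """Huffman coding: repeatedly merge the two least probable symbols
    (source reduction), then read codewords off the resulting tree."""
    # Heap entries carry a unique counter so ties never compare symbols.
    heap = [(p, i, sym) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, count, (left, right)))
        count += 1
    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):        # internal node: branch 0 / 1
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                              # leaf: record the codeword
            code[node] = prefix or "0"
    walk(heap[0][2], "")
    return code

probs = {"a": 0.4, "b": 0.3, "c": 0.1, "d": 0.1, "e": 0.06, "f": 0.04}
code = huffman_code(probs)
avg = sum(probs[s] * len(code[s]) for s in probs)   # 2.2, as on the slide
```

The average length of 2.2 bits/symbol is slightly above the entropy of 2.14 bits/symbol, as Shannon's theorem requires.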
Huffman Decoding
- Concept: the decoder reconstructs the binary coding tree from the symbol-to-codeword table.
- Decoding steps:
  - Step 1: read the input compressed stream bit by bit and traverse the tree until a leaf is reached.
  - Step 2: discard each input bit as it is consumed.
  - Step 3: when a leaf node is reached, output the symbol at that leaf; this completes the decoding of this symbol. Go to Step 1 to decode the next symbol.

Modified Huffman Codes
- Example: assume the source has a range of -510 to +511. Frequent values get their own Huffman codewords; all other values are combined into a single symbol whose codeword is followed by the actual value (10 bits).
- (Table: the per-symbol probabilities, codes, and bit counts were lost in transcription.)
Arithmetic Coding
- Huffman coding encodes individual symbols and relies heavily on accurate knowledge of the symbol statistics.
- Disadvantage of Huffman coding: the lengths of Huffman codes must be integers, while the entropy of a symbol may be a fractional number, so the theoretically possible compressed message size cannot be achieved.
- For example, if p(s) = 0.9, then I(s) = -log2(0.9) = 0.15 bits, yet Huffman coding must still spend at least one whole bit on the symbol.

Arithmetic Coding (cont.)
- Arithmetic coding: an entire sequence of symbols is assigned a single arithmetic codeword.
- The codeword defines an interval of real numbers between 0 and 1.
- Arithmetic coding can theoretically achieve the maximum compression.
Arithmetic Coding (cont.)
- Example alphabet (k, l, u, w, e, r, ?), where ? is the end-of-message symbol:

  S_i   P_i    subinterval
  k     0.05   [0.00, 0.05)
  l     0.20   [0.05, 0.25)
  u     0.10   [0.25, 0.35)
  w     0.05   [0.35, 0.40)
  e     0.30   [0.40, 0.70)
  r     0.20   [0.70, 0.90)
  ?     0.10   [0.90, 1.00)

Arithmetic Coding (cont.)
- (Figure: the interval [0, 1) is repeatedly subdivided as the symbols l, l, u, u, r, e, ? are encoded; each subdivision reproduces the k..? layout inside the current interval.)
Arithmetic Coding (cont.)
- Encoding process:

  low = 0.0; high = 1.0;
  while not EOF do
      range = high - low;
      read(c);
      high = low + range * high_range(c);
      low  = low + range * low_range(c);
  end while
  output(low);

Arithmetic Coding (cont.)
- (Table: the low and high values after each new character l, l, u, u, r, e, ? is encoded; the numeric entries were lost in transcription.)
Arithmetic Coding (cont.)
- Decoding is the inverse process. If the code falls between 0.05 and 0.25, the first character must be l.
- Remove the effect of l by first subtracting the low value of l (0.05) and then dividing by the range of l (0.2); then find which subinterval the result lands in to obtain the next character.
- The process repeats until the end-of-message symbol or the known length of the message is reached.

Arithmetic Coding (cont.)
- Decoding algorithm:

  r = input_code;
  repeat
      find c such that r falls in its range;
      output(c);
      r = r - low_range(c);
      r = r / (high_range(c) - low_range(c));
  until EOF or the length of the message is reached
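The encoding and decoding loops above, made concrete with the subinterval table from the earlier slide; one deviation from the pseudocode is hedged in the comment (the encoder returns the interval midpoint rather than `low`):

```python
# Subintervals from the slide's table: symbol -> (low, high)
RANGES = {"k": (0.00, 0.05), "l": (0.05, 0.25), "u": (0.25, 0.35),
          "w": (0.35, 0.40), "e": (0.40, 0.70), "r": (0.70, 0.90),
          "?": (0.90, 1.00)}

def arithmetic_encode(message):
    """Narrow [low, high) by each symbol's subinterval.  The slide's
    pseudocode outputs `low`; returning the midpoint instead keeps the
    code safely inside the final interval despite floating-point
    rounding."""
    low, high = 0.0, 1.0
    for c in message:
        span = high - low
        low, high = low + span * RANGES[c][0], low + span * RANGES[c][1]
    return (low + high) / 2

def arithmetic_decode(code, length):
    """Find the symbol whose subinterval contains the code, emit it,
    then rescale the code into that subinterval and repeat."""
    out = []
    for _ in range(length):
        for sym, (lo, hi) in RANGES.items():
            if lo <= code < hi:
                out.append(sym)
                code = (code - lo) / (hi - lo)
                break
    return "".join(out)

message = "lluure?"               # the sequence traced on the slides
code = arithmetic_encode(message)
assert arithmetic_decode(code, len(message)) == message
```

Note how the seven-symbol message is carried by a single real number; in a practical coder the interval arithmetic is done with scaled integers to avoid precision loss on long messages.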
Arithmetic Coding (cont.)
- (Table: the decoding trace of r, the decoded character c, and the corresponding low, high, and range values at each step for the message l, l, u, u, r, e, ?; the numeric entries were lost in transcription.)

Arithmetic Coding (cont.)
- In summary, the encoding process is simply one of narrowing the range of possible numbers with every new symbol; the new range is proportional to the predefined probability attached to that symbol.
- Decoding is the inverse procedure, in which the range is expanded in proportion to the probability of each symbol as it is extracted.
Arithmetic Coding (cont.)
- The coding rate theoretically approaches the high-order entropy.
- Arithmetic coding is not as popular as Huffman coding because multiplications are needed.
- (Table: average bits/byte over 14 files (program, object, text, etc.) comparing Huffman and arithmetic coding; the numeric entries were lost in transcription.)

Conclusions
- JPEG and MPEG-1/2 use Huffman and arithmetic coding, preprocessed by DPCM.
- JPEG2000 and MPEG-4 use arithmetic coding only.
More information2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all
Lecture 6 Introduction to kinetic theory of plasa waves Introduction to kinetic theory So far we have been odeling plasa dynaics using fluid equations. The assuption has been that the pressure can be either
More informationGeneral Properties of Radiation Detectors Supplements
Phys. 649: Nuclear Techniques Physics Departent Yarouk University Chapter 4: General Properties of Radiation Detectors Suppleents Dr. Nidal M. Ershaidat Overview Phys. 649: Nuclear Techniques Physics Departent
More information4. Quantization and Data Compression. ECE 302 Spring 2012 Purdue University, School of ECE Prof. Ilya Pollak
4. Quantization and Data Compression ECE 32 Spring 22 Purdue University, School of ECE Prof. What is data compression? Reducing the file size without compromising the quality of the data stored in the
More informationPattern Recognition and Machine Learning. Artificial Neural networks
Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lessons 7 20 Dec 2017 Outline Artificial Neural networks Notation...2 Introduction...3 Key Equations... 3 Artificial
More informationLecture 2: Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments
Lecture 2: Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments Dr. Jian Zhang Conjoint Associate Professor NICTA & CSE UNSW COMP9519 Multimedia Systems S2 2006 jzhang@cse.unsw.edu.au
More informationLecture 2: Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments. Tutorial 1. Acknowledgement and References for lectures 1 to 5
Lecture : Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments Dr. Jian Zhang Conjoint Associate Professor NICTA & CSE UNSW COMP959 Multimedia Systems S 006 jzhang@cse.unsw.edu.au Acknowledgement
More informationError Exponents in Asynchronous Communication
IEEE International Syposiu on Inforation Theory Proceedings Error Exponents in Asynchronous Counication Da Wang EECS Dept., MIT Cabridge, MA, USA Eail: dawang@it.edu Venkat Chandar Lincoln Laboratory,
More informationIntroduction to Discrete Optimization
Prof. Friedrich Eisenbrand Martin Nieeier Due Date: March 9 9 Discussions: March 9 Introduction to Discrete Optiization Spring 9 s Exercise Consider a school district with I neighborhoods J schools and
More informationtime time δ jobs jobs
Approxiating Total Flow Tie on Parallel Machines Stefano Leonardi Danny Raz y Abstract We consider the proble of optiizing the total ow tie of a strea of jobs that are released over tie in a ultiprocessor
More informationProc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES
Proc. of the IEEE/OES Seventh Working Conference on Current Measureent Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Belinda Lipa Codar Ocean Sensors 15 La Sandra Way, Portola Valley, CA 98 blipa@pogo.co
More informationAn Algorithm for Quantization of Discrete Probability Distributions
An Algorith for Quantization of Discrete Probability Distributions Yuriy A. Reznik Qualco Inc., San Diego, CA Eail: yreznik@ieee.org Abstract We study the proble of quantization of discrete probability
More informationPrincipal Components Analysis
Principal Coponents Analysis Cheng Li, Bingyu Wang Noveber 3, 204 What s PCA Principal coponent analysis (PCA) is a statistical procedure that uses an orthogonal transforation to convert a set of observations
More informationBasic Principles of Video Coding
Basic Principles of Video Coding Introduction Categories of Video Coding Schemes Information Theory Overview of Video Coding Techniques Predictive coding Transform coding Quantization Entropy coding Motion
More informationApproximation in Stochastic Scheduling: The Power of LP-Based Priority Policies
Approxiation in Stochastic Scheduling: The Power of -Based Priority Policies Rolf Möhring, Andreas Schulz, Marc Uetz Setting (A P p stoch, r E( w and (B P p stoch E( w We will assue that the processing
More informationA Better Algorithm For an Ancient Scheduling Problem. David R. Karger Steven J. Phillips Eric Torng. Department of Computer Science
A Better Algorith For an Ancient Scheduling Proble David R. Karger Steven J. Phillips Eric Torng Departent of Coputer Science Stanford University Stanford, CA 9435-4 Abstract One of the oldest and siplest
More informationAnalyzing Simulation Results
Analyzing Siulation Results Dr. John Mellor-Cruey Departent of Coputer Science Rice University johnc@cs.rice.edu COMP 528 Lecture 20 31 March 2005 Topics for Today Model verification Model validation Transient
More informationESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics
ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents
More informationCOS 424: Interacting with Data. Written Exercises
COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well
More informationSharp Time Data Tradeoffs for Linear Inverse Problems
Sharp Tie Data Tradeoffs for Linear Inverse Probles Saet Oyak Benjain Recht Mahdi Soltanolkotabi January 016 Abstract In this paper we characterize sharp tie-data tradeoffs for optiization probles used
More informationLower Bounds for Quantized Matrix Completion
Lower Bounds for Quantized Matrix Copletion Mary Wootters and Yaniv Plan Departent of Matheatics University of Michigan Ann Arbor, MI Eail: wootters, yplan}@uich.edu Mark A. Davenport School of Elec. &
More informationCompression and Coding
Compression and Coding Theory and Applications Part 1: Fundamentals Gloria Menegaz 1 Transmitter (Encoder) What is the problem? Receiver (Decoder) Transformation information unit Channel Ordering (significance)
More informationFourier Series Summary (From Salivahanan et al, 2002)
Fourier Series Suary (Fro Salivahanan et al, ) A periodic continuous signal f(t), - < t
More informationReview of Quantization. Quantization. Bring in Probability Distribution. L-level Quantization. Uniform partition
Review of Quantization UMCP ENEE631 Slides (created by M.Wu 004) Quantization UMCP ENEE631 Slides (created by M.Wu 001/004) L-level Quantization Minimize errors for this lossy process What L values to
More informationAlgorithms for parallel processor scheduling with distinct due windows and unit-time jobs
BULLETIN OF THE POLISH ACADEMY OF SCIENCES TECHNICAL SCIENCES Vol. 57, No. 3, 2009 Algoriths for parallel processor scheduling with distinct due windows and unit-tie obs A. JANIAK 1, W.A. JANIAK 2, and
More informationProbability Distributions
Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples
More informationESE 523 Information Theory
ESE 53 Inforation Theory Joseph A. O Sullivan Sauel C. Sachs Professor Electrical and Systes Engineering Washington University 11 Urbauer Hall 10E Green Hall 314-935-4173 (Lynda Marha Answers) jao@wustl.edu
More informationUNIT I INFORMATION THEORY. I k log 2
UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper
More informationOptimal Resource Allocation in Multicast Device-to-Device Communications Underlaying LTE Networks
1 Optial Resource Allocation in Multicast Device-to-Device Counications Underlaying LTE Networks Hadi Meshgi 1, Dongei Zhao 1 and Rong Zheng 2 1 Departent of Electrical and Coputer Engineering, McMaster
More informationPh 20.3 Numerical Solution of Ordinary Differential Equations
Ph 20.3 Nuerical Solution of Ordinary Differential Equations Due: Week 5 -v20170314- This Assignent So far, your assignents have tried to failiarize you with the hardware and software in the Physics Coputing
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 11 10/15/2008 ABSTRACT INTEGRATION I Contents 1. Preliinaries 2. The ain result 3. The Rieann integral 4. The integral of a nonnegative
More informationInteractive Markov Models of Evolutionary Algorithms
Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary
More informationIntelligent Systems: Reasoning and Recognition. Artificial Neural Networks
Intelligent Systes: Reasoning and Recognition Jaes L. Crowley MOSIG M1 Winter Seester 2018 Lesson 7 1 March 2018 Outline Artificial Neural Networks Notation...2 Introduction...3 Key Equations... 3 Artificial
More informationCMPT 365 Multimedia Systems. Final Review - 1
CMPT 365 Multimedia Systems Final Review - 1 Spring 2017 CMPT365 Multimedia Systems 1 Outline Entropy Lossless Compression Shannon-Fano Coding Huffman Coding LZW Coding Arithmetic Coding Lossy Compression
More informationIn this chapter, we consider several graph-theoretic and probabilistic models
THREE ONE GRAPH-THEORETIC AND STATISTICAL MODELS 3.1 INTRODUCTION In this chapter, we consider several graph-theoretic and probabilistic odels for a social network, which we do under different assuptions
More informationSource Coding. Master Universitario en Ingeniería de Telecomunicación. I. Santamaría Universidad de Cantabria
Source Coding Master Universitario en Ingeniería de Telecomunicación I. Santamaría Universidad de Cantabria Contents Introduction Asymptotic Equipartition Property Optimal Codes (Huffman Coding) Universal
More informationWork, Energy and Momentum
Work, Energy and Moentu Work: When a body oves a distance d along straight line, while acted on by a constant force of agnitude F in the sae direction as the otion, the work done by the force is tered
More informationDetermining OWA Operator Weights by Mean Absolute Deviation Minimization
Deterining OWA Operator Weights by Mean Absolute Deviation Miniization Micha l Majdan 1,2 and W lodziierz Ogryczak 1 1 Institute of Control and Coputation Engineering, Warsaw University of Technology,
More informationMathematical Model and Algorithm for the Task Allocation Problem of Robots in the Smart Warehouse
Aerican Journal of Operations Research, 205, 5, 493-502 Published Online Noveber 205 in SciRes. http://www.scirp.org/journal/ajor http://dx.doi.org/0.4236/ajor.205.56038 Matheatical Model and Algorith
More informationForce and dynamics with a spring, analytic approach
Force and dynaics with a spring, analytic approach It ay strie you as strange that the first force we will discuss will be that of a spring. It is not one of the four Universal forces and we don t use
More informationRandomized Recovery for Boolean Compressed Sensing
Randoized Recovery for Boolean Copressed Sensing Mitra Fatei and Martin Vetterli Laboratory of Audiovisual Counication École Polytechnique Fédéral de Lausanne (EPFL) Eail: {itra.fatei, artin.vetterli}@epfl.ch
More informationPolygonal Designs: Existence and Construction
Polygonal Designs: Existence and Construction John Hegean Departent of Matheatics, Stanford University, Stanford, CA 9405 Jeff Langford Departent of Matheatics, Drake University, Des Moines, IA 5011 G
More informationEMPIRICAL COMPLEXITY ANALYSIS OF A MILP-APPROACH FOR OPTIMIZATION OF HYBRID SYSTEMS
EMPIRICAL COMPLEXITY ANALYSIS OF A MILP-APPROACH FOR OPTIMIZATION OF HYBRID SYSTEMS Jochen Till, Sebastian Engell, Sebastian Panek, and Olaf Stursberg Process Control Lab (CT-AST), University of Dortund,
More informationWarning System of Dangerous Chemical Gas in Factory Based on Wireless Sensor Network
565 A publication of CHEMICAL ENGINEERING TRANSACTIONS VOL. 59, 07 Guest Editors: Zhuo Yang, Junie Ba, Jing Pan Copyright 07, AIDIC Servizi S.r.l. ISBN 978-88-95608-49-5; ISSN 83-96 The Italian Association
More informationIN modern society that various systems have become more
Developent of Reliability Function in -Coponent Standby Redundant Syste with Priority Based on Maxiu Entropy Principle Ryosuke Hirata, Ikuo Arizono, Ryosuke Toohiro, Satoshi Oigawa, and Yasuhiko Takeoto
More informationA Probabilistic and RIPless Theory of Compressed Sensing
A Probabilistic and RIPless Theory of Copressed Sensing Eanuel J Candès and Yaniv Plan 2 Departents of Matheatics and of Statistics, Stanford University, Stanford, CA 94305 2 Applied and Coputational Matheatics,
More informationCompression methods: the 1 st generation
Compression methods: the 1 st generation 1998-2017 Josef Pelikán CGG MFF UK Praha pepca@cgg.mff.cuni.cz http://cgg.mff.cuni.cz/~pepca/ Still1g 2017 Josef Pelikán, http://cgg.mff.cuni.cz/~pepca 1 / 32 Basic
More informationConstant-Space String-Matching. in Sublinear Average Time. (Extended Abstract) Wojciech Rytter z. Warsaw University. and. University of Liverpool
Constant-Space String-Matching in Sublinear Average Tie (Extended Abstract) Maxie Crocheore Universite de Marne-la-Vallee Leszek Gasieniec y Max-Planck Institut fur Inforatik Wojciech Rytter z Warsaw University
More informationReed-Muller codes for random erasures and errors
Reed-Muller codes for rando erasures and errors Eanuel Abbe Air Shpilka Avi Wigderson Abstract This paper studies the paraeters for which Reed-Muller (RM) codes over GF (2) can correct rando erasures and
More informationA PROBABILISTIC AND RIPLESS THEORY OF COMPRESSED SENSING. Emmanuel J. Candès Yaniv Plan. Technical Report No November 2010
A PROBABILISTIC AND RIPLESS THEORY OF COMPRESSED SENSING By Eanuel J Candès Yaniv Plan Technical Report No 200-0 Noveber 200 Departent of Statistics STANFORD UNIVERSITY Stanford, California 94305-4065
More informationW Arithmetic Coding (BAC). BAC is a variable in to
1546 EEE TRANSACTONS ON NFORMATON THEORY, VOL. 39, NO. 5, SEPTEMBER 1993 Block Arithetic Coding for Source Copression Charles G. Boncelet Jr., Meber EEE Abstruct- We introduce Block Arithetic Coding (BAC),
More informationAudio Coding. Fundamentals Quantization Waveform Coding Subband Coding P NCTU/CSIE DSPLAB C.M..LIU
Audio Coding P.1 Fundamentals Quantization Waveform Coding Subband Coding 1. Fundamentals P.2 Introduction Data Redundancy Coding Redundancy Spatial/Temporal Redundancy Perceptual Redundancy Compression
More informationImage Compression Basis Sebastiano Battiato, Ph.D.
Image Compression Basis Sebastiano Battiato, Ph.D. battiato@dmi.unict.it Compression and Image Processing Fundamentals; Overview of Main related techniques; JPEG tutorial; Jpeg vs Jpeg2000; SVG Bits and
More informationMaximum Entropy Interval Aggregations
Maxiu Entropy Interval Aggregations Ferdinando Cicalese Università di Verona, Verona, Italy Eail: cclfdn@univr.it Ugo Vaccaro Università di Salerno, Salerno, Italy Eail: uvaccaro@unisa.it arxiv:1805.05375v1
More informationAutomated Frequency Domain Decomposition for Operational Modal Analysis
Autoated Frequency Doain Decoposition for Operational Modal Analysis Rune Brincker Departent of Civil Engineering, University of Aalborg, Sohngaardsholsvej 57, DK-9000 Aalborg, Denark Palle Andersen Structural
More informationAnalysis of Hu's Moment Invariants on Image Scaling and Rotation
Edith Cowan University Research Online ECU Publications Pre. 11 1 Analysis of Hu's Moent Invariants on Iage Scaling and Rotation Zhihu Huang Edith Cowan University Jinsong Leng Edith Cowan University 1.119/ICCET.1.548554
More informationCombining Classifiers
Cobining Classifiers Generic ethods of generating and cobining ultiple classifiers Bagging Boosting References: Duda, Hart & Stork, pg 475-480. Hastie, Tibsharini, Friedan, pg 246-256 and Chapter 10. http://www.boosting.org/
More informationMultimedia Networking ECE 599
Multimedia Networking ECE 599 Prof. Thinh Nguyen School of Electrical Engineering and Computer Science Based on lectures from B. Lee, B. Girod, and A. Mukherjee 1 Outline Digital Signal Representation
More informationShort Papers. Test Data Compression and Decompression Based on Internal Scan Chains and Golomb Coding
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 1, NO. 6, JUNE 00 715 Short Papers Test Data Copression and Decopression Based on Internal Scan Chains and Golob Coding
More information