Chapter 7 Image Compression
1 Chapter 7 Image Compression
Preview
7.1 Introduction
7.2 Fundamental concepts and theories
7.3 Basic coding methods
7.4 Predictive coding
7.5 Transform coding
7.6 Introduction to international standards
2013 Digital Image Processing
2 Preview: general communication model
f(x, y) → source encoder (n1 → n2) → channel encoder (n2 → n3) → channel → channel decoder → source decoder → f^(x, y)
where n1 > n2 (compression) and n3 > n2 (channel coding adds protection bits).
3 Preview: lossless compression example
Original 8.7 Kbyte; WinZip 3.46 Kbyte; G4 2.62 Kbyte.
4 Preview: lossy compression example
Original 92 Kbyte; compressed version with PSNR = 36.97 dB; 2.5 Kbyte version with PSNR ≈ 25 dB.
5 7.1 Introduction
7.1.1 Why code data?
- To reduce storage volume
- To reduce transmission time
One colour image, 760 by 580 pixels, 3 channels of 8 bits each: about 1.3 Mbyte.
Video at the same resolution, 25 frames per second: about 33 Mbyte/second.
7.1.2 Classification
- Lossless compression: reversible
- Lossy compression: irreversible
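The storage figures on the slide are simple arithmetic; a quick sketch, assuming the 760 x 580, 3-channel, 8-bit image and 25 fps video rate quoted above:

```python
# Raw (uncompressed) data volumes for the slide's example.
width, height, channels, bits = 760, 580, 3, 8

bytes_per_frame = width * height * channels * bits // 8
mbytes_per_frame = bytes_per_frame / 1e6
mbytes_per_second = mbytes_per_frame * 25  # 25 frames per second

print(f"one frame : {mbytes_per_frame:.2f} Mbyte")    # ~1.32 Mbyte
print(f"video rate: {mbytes_per_second:.1f} Mbyte/s")  # ~33.1 Mbyte/s
```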
6 7.1 Introduction
7.1.3 Compression methods
- Entropy coding: Huffman coding, arithmetic coding
- Run-length coding: G3, G4
- LZW coding: arj (DOS), zip (Windows), compress (UNIX)
- Predictive coding: linear prediction, non-linear prediction
- Transform coding: DCT, KLT, Walsh transform, WT (wavelet)
- Others: vector quantization (VQ), fractal coding, second-generation coding
7 7.1 Introduction
7.1.3 Compression methods: fractal coding
Examples: Julia set, Koch curve, Mandelbrot set.
8 7.1 Introduction
7.1.3 Compression methods: second-generation coding
Region of interest (ROI): original image, ROI coded at 0.3 bpp, saliency map.
9 7.2 Fundamental concepts and theories
7.2.1 Data redundancy
If n1 denotes the original data size and n2 the compressed data size, the relative data redundancy is
    R_D = 1 - 1/C_R
where C_R, commonly called the compression ratio, is
    C_R = n1/n2
For example, n2 = n1 gives C_R = 1 and R_D = 0; as n2 becomes much smaller than n1, C_R grows large and R_D approaches 1.
10 7.2 Fundamental concepts and theories
7.2.1 Data redundancy
Another expression of the compression ratio:
    C_R = n1/n2, or C_R = (8 bits/pixel) / (L bits/pixel)
where L is the average number of bits per pixel in the compressed representation of an M x N image.
Three basic kinds of image redundancy:
- Coding redundancy
- Interpixel redundancy
- Psychovisual redundancy
11 7.2 Fundamental concepts and theories
7.2.1 Data redundancy: coding redundancy
If r_k represents the gray levels of an image and each r_k occurs with probability p_r(r_k), then
    p_r(r_k) = n_k / n,  k = 0, 1, ..., L-1
where L is the number of gray levels, n_k is the number of times the kth gray level appears in the image, and n is the total number of pixels in the image.
12 7.2 Fundamental concepts and theories
7.2.1 Data redundancy: coding redundancy
If the number of bits used to represent each value of r_k is l(r_k), then the average number of bits required to represent each pixel is
    L_avg = sum_{k=0}^{L-1} l(r_k) p_r(r_k)
For example:
r_k   p_r(r_k)  Code 1  l1(r_k)  Code 2   l2(r_k)
r_0   0.19      000     3        11       2
r_1   0.25      001     3        01       2
r_2   0.21      010     3        10       2
r_3   0.16      011     3        001      3
r_4   0.08      100     3        0001     4
r_5   0.06      101     3        00001    5
r_6   0.03      110     3        000001   6
r_7   0.02      111     3        000000   6
13 7.2 Fundamental concepts and theories
7.2.1 Data redundancy: coding redundancy
    L_avg = sum_{k=0}^{7} l2(r_k) p_r(r_k)
          = 2(0.19) + 2(0.25) + 2(0.21) + 3(0.16) + 4(0.08) + 5(0.06) + 6(0.03) + 6(0.02)
          = 2.7 bits
    C_R = 3/2.7 ≈ 1.11
    R_D = 1 - 1/1.11 ≈ 0.099
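The average-length computation can be checked in a few lines, using the probabilities and code lengths of the variable-length code from the example:

```python
# Average code length and compression ratio for the two codes.
p      = [0.19, 0.25, 0.21, 0.16, 0.08, 0.06, 0.03, 0.02]
len_c1 = [3] * 8                     # fixed-length Code 1
len_c2 = [2, 2, 2, 3, 4, 5, 6, 6]    # variable-length Code 2

l_avg1 = sum(pi * li for pi, li in zip(p, len_c1))  # 3.0 bits/pixel
l_avg2 = sum(pi * li for pi, li in zip(p, len_c2))  # 2.7 bits/pixel

c_r = l_avg1 / l_avg2   # compression ratio, ~1.11
r_d = 1 - 1 / c_r       # data redundancy, ~0.099
print(l_avg2, c_r, r_d)
```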
14 7.2 Fundamental concepts and theories
7.2.1 Data redundancy: interpixel redundancy
15 7.2 Fundamental concepts and theories
7.2.1 Data redundancy: interpixel redundancy
Autocorrelation coefficients:
    γ(Δn) = A(Δn) / A(0)
where
    A(Δn) = 1/(N - Δn) sum_{y=0}^{N-1-Δn} f(x, y) f(x, y + Δn)
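The coefficient γ(Δn) above can be sketched for a single image row in pure Python; the sample row here is made-up data chosen to be smooth:

```python
# Normalized autocorrelation along one row, following A(Δn) and
# γ(Δn) = A(Δn)/A(0) from the slide.
def autocorr(row, dn):
    n = len(row)
    return sum(row[y] * row[y + dn] for y in range(n - dn)) / (n - dn)

def gamma(row, dn):
    return autocorr(row, dn) / autocorr(row, 0)

row = [10, 12, 11, 13, 12, 14, 13, 15]   # a smooth, correlated row
print(gamma(row, 1))   # close to 1: neighbouring pixels are similar
```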
16 7.2 Fundamental concepts and theories
7.2.1 Data redundancy: interpixel redundancy
17 7.2 Fundamental concepts and theories
7.2.1 Data redundancy: psychovisual redundancy
Psychovisual redundancy is associated with real or quantifiable visual information that is not essential for normal visual processing.
Example: quantization at 8 bits/pixel, 4 bits/pixel, 2 bits/pixel.
18 7.2 Fundamental concepts and theories
Fidelity criteria: objective
Let f(x, y) represent an input image and f^(x, y) denote the decompressed image. For any values of x and y, the error e(x, y) between f(x, y) and f^(x, y) can be defined as
    e(x, y) = f^(x, y) - f(x, y)
so that the total error between the two images is
    sum_{x=0}^{M-1} sum_{y=0}^{N-1} [f^(x, y) - f(x, y)]
19 7.2 Fundamental concepts and theories
Fidelity criteria: objective
The root-mean-square error e_rms between f(x, y) and f^(x, y) is
    e_rms = { 1/(MN) sum_{x} sum_{y} [f^(x, y) - f(x, y)]^2 }^{1/2}
The mean-square signal-to-noise ratio is defined as
    SNR_ms = sum_{x} sum_{y} f^(x, y)^2 / sum_{x} sum_{y} [f^(x, y) - f(x, y)]^2
20 7.2 Fundamental concepts and theories
Fidelity criteria: objective, PSNR
The signal-to-noise ratio (in dB) is defined as
    SNR = 10 lg { sum_{x} sum_{y} f^(x, y)^2 / sum_{x} sum_{y} [f^(x, y) - f(x, y)]^2 }
The peak signal-to-noise ratio is defined as
    PSNR = 10 lg (f_max^2 / e_rms^2) = 10 lg { M N f_max^2 / sum_{x} sum_{y} [f^(x, y) - f(x, y)]^2 }
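A minimal sketch of e_rms and PSNR for 8-bit images, following the formulas above with f_max = 255; the two tiny 2x2 "images" are made-up values:

```python
# Root-mean-square error and peak signal-to-noise ratio.
import math

def rmse(f, g):
    m, n = len(f), len(f[0])
    err = sum((f[x][y] - g[x][y]) ** 2 for x in range(m) for y in range(n))
    return math.sqrt(err / (m * n))

def psnr(f, g, f_max=255):
    e = rmse(f, g)
    return float("inf") if e == 0 else 10 * math.log10(f_max ** 2 / e ** 2)

f = [[52, 55], [61, 59]]   # original
g = [[54, 55], [60, 57]]   # reconstruction
print(rmse(f, g), psnr(f, g))
```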
21 7.2 Fundamental concepts and theories
Fidelity criteria: subjective
Side-by-side comparisons.
22 The two images have the same PSNR. The same amount of rectangular noise was added to the flat sky region in the left image, but to the richly textured rocky ground in the right image. The noise is easily noticed in the left image yet hard to detect in the right one (the so-called masking effect). The two images differ greatly in subjective quality, but PSNR cannot reflect this difference.
24 7.2 Fundamental concepts and theories
Image compression models: source encoder model
f(x, y) → decorrelation (removes interpixel redundancy) → quantizer (removes psychovisual redundancy) → symbol encoder (removes coding redundancy) → compressed data
25 7.2 Fundamental concepts and theories
Elements of information theory: terms
1948, Claude Shannon, "A Mathematical Theory of Communication".
News carries data; information is its content.
Information source: symbols
- Memoryless source
- Markov source
26 7.2 Fundamental concepts and theories
Elements of information theory: self-information
A random event E that occurs with probability P(E) is said to contain
    I(E) = log(1/P(E)) = -log P(E)
units of information. For example, P(E) = 1 gives I(E) = 0: a certain event carries no information.
Units: base 2, bits (binary digits); base e, nats (natural digits); base 10, Hartleys.
27 7.2 Fundamental concepts and theories
Elements of information theory: entropy of the source (definition)
Suppose X = {x_0, x_1, ..., x_{N-1}} is a discrete random variable and the probability of x_j is P(x_j). Then
    H(X) = sum_{j=0}^{N-1} p(x_j) log(1/p(x_j)) = -sum_{j=0}^{N-1} p(x_j) log p(x_j)
28 7.2 Fundamental concepts and theories
Elements of information theory: entropy of the source (properties)
(1) H(X) >= 0
(2) H(1, 0, ..., 0) = H(0, 1, ..., 0) = ... = H(0, ..., 0, 1) = 0
(3) H(X) = H(x_0, x_1, ..., x_{N-1}) <= log N, with equality when P(x_j) = 1/N for all j
Example: X = {1, 2, 3, 4, 5, 6, 7, 8}, p_j = 1/8 for each j:
    H(X) = sum_{j=1}^{8} (1/8) log2 8 = 3 bits/symbol
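The definition and the 8-symbol example above can be verified directly:

```python
# Entropy of a discrete source: H(X) = sum p * log2(1/p).
import math

def entropy(probs):
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

print(entropy([1.0]))        # degenerate source: 0 bits/symbol
print(entropy([1 / 8] * 8))  # uniform over 8 symbols: 3 bits/symbol
```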
29 7.2 Fundamental concepts and theories
Elements of information theory: conditional entropy (definitions)
    H_0(X) = sum_i p(x_i) log(1/p(x_i))
    H_1(X) = sum p(x_i, x_{i-1}) log(1/p(x_i | x_{i-1}))
    H_2(X) = sum p(x_i, x_{i-1}, x_{i-2}) log(1/p(x_i | x_{i-1}, x_{i-2}))
    H_m(X) = sum p(x_i, x_{i-1}, ..., x_{i-m}) log(1/p(x_i | x_{i-1}, ..., x_{i-m}))
Property:
    H_∞ = ... = H_{m+1} < H_m < ... < H_2 < H_1 < H_0
30 7.2 Fundamental concepts and theories
Elements of information theory: using information theory
A simple 8-bit image (4 x 8 pixels). Zero-order entropy from the probabilities of the source symbols:
Gray level  Count  Probability
21          12     3/8
95          4      1/8
169         4      1/8
243         12     3/8
    H(X) = sum_i p(x_i) log(1/p(x_i)) = 1.81 bits/pixel
31 7.2 Fundamental concepts and theories
Elements of information theory: using information theory
Relative frequency of pairs of pixels:
Pair        Count  Probability
(21, 21)    8      2/8
(21, 95)    4      1/8
(95, 169)   4      1/8
(169, 243)  4      1/8
(243, 243)  8      2/8
(243, 21)   4      1/8
    H(X) = sum_i p(x_i) log(1/p(x_i)) = 2.5 bits/pair = 1.25 bits/pixel
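Both estimates, 1.81 bits/pixel from single-pixel statistics and 1.25 bits/pixel from pair statistics, can be reproduced from the tables above:

```python
# First-order vs. pair-based entropy estimates for the example image.
import math

def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# single-pixel probabilities: 3/8, 1/8, 1/8, 3/8
h1 = entropy([3 / 8, 1 / 8, 1 / 8, 3 / 8])            # ~1.81 bits/pixel

# pair probabilities: 2/8, 1/8, 1/8, 1/8, 2/8, 1/8
h2_pair = entropy([2 / 8, 1 / 8, 1 / 8, 1 / 8, 2 / 8, 1 / 8])  # 2.5 bits/pair
h2 = h2_pair / 2                                       # 1.25 bits/pixel
print(round(h1, 2), h2_pair, h2)
```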
32 7.2 Fundamental concepts and theories
Elements of information theory: noiseless coding theorem
Also called Shannon's first theorem. It defines the minimum average code word length per source symbol that can be achieved:
    L_avg >= H(X)
For a memoryless (zero-memory) source:
    H(X) = sum_i p(x_i) log(1/p(x_i))
But image data are usually a Markov source. How should it be coded?
33 7.2 Fundamental concepts and theories
Elements of information theory: noiseless coding theorem
Coding steps: Markov source X → decorrelation → memoryless source Y → entropy coding
    H(X) = H_m(X) < H_0(X); coding X symbol by symbol gives L_X = H_0(X)
    H(Y) = H_0(Y); coding Y gives L_Y = H_0(Y)
    Since decorrelation preserves entropy, H(X) = H(Y), hence L_Y = H_m(X) < L_X
34 7.2 Fundamental concepts and theories
Elements of information theory: noisy coding (rate-distortion) theorem
For a given distortion D, the rate-distortion function R(D) is less than H(X), and R(D) is the minimum achievable coding length L_avg.
35 7.3 Basic coding methods
7.3.1 Huffman coding (Huffman, 1952)
Basic ideas:
- Code low-probability symbols with long code words
- Code high-probability symbols with short code words
- Achieves the smallest possible number of code symbols per source symbol
- The source symbols must be coded one at a time
36 7.3 Basic coding methods
7.3.1 Huffman coding
Step 1: create a series of source reductions by ordering the probabilities.
Step 2: combine the two lowest-probability symbols into a single symbol that replaces them in the next source reduction.
Step 3: repeat step 2 until the combined probability reaches 1 (a single symbol remains).
For example:
Symbol  Probability
A       0.4
B       0.3
C       0.1
D       0.1
E       0.06
F       0.04
37 7.3 Basic coding methods
7.3.1 Huffman coding
Step 4: code each reduced source with 0 and 1, starting with the smallest source and working back to the original source.
Symbol  Probability  Code
A       0.4          1
B       0.3          00
C       0.1          011
D       0.1          0100
E       0.06         01010
F       0.04         01011
    L_avg = 0.4(1) + 0.3(2) + 0.1(3) + 0.1(4) + 0.06(5) + 0.04(5) = 2.2 bits/symbol
    H(X) = -sum_i p(a_i) log2 p(a_i) = 2.14 bits/symbol
    L_avg > H(X)
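The reduction-and-assignment procedure above can be sketched with a priority queue. Different tie-breaking between equal probabilities yields different code words than the slide's, but the average length is the same:

```python
# A compact Huffman coder built on a min-heap of (probability, codebook).
import heapq
import itertools

def huffman(probs):
    """Return {symbol: codeword} for a dict of symbol probabilities."""
    counter = itertools.count()  # tie-breaker so dicts are never compared
    heap = [(p, next(counter), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two lowest-probability groups
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

probs = {"A": 0.4, "B": 0.3, "C": 0.1, "D": 0.1, "E": 0.06, "F": 0.04}
code = huffman(probs)
l_avg = sum(probs[s] * len(code[s]) for s in probs)
print(code, l_avg)   # average length 2.2 bits/symbol
```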
38 7.3 Basic coding methods
Arithmetic coding
An entire sequence of source symbols (a message) is assigned a single arithmetic code word. The code word itself defines an interval of real numbers between 0 and 1. The encoding process can be explained through the following example.
39 7.3 Basic coding methods
Arithmetic coding: example (encoding)
Assume the source symbols are {a1, a2, a3, a4} with probabilities {0.2, 0.2, 0.4, 0.2}, and the message to encode is the sequence a1 a2 a3 a3 a4.
The interval [0, 1) is divided into four sub-intervals:
Symbol  Probability  Initial interval
a1      0.2          [0.0, 0.2)
a2      0.2          [0.2, 0.4)
a3      0.4          [0.4, 0.8)
a4      0.2          [0.8, 1.0)
40 7.3 Basic coding methods
Arithmetic coding: example
Take the first symbol a1 from the message; its encoding range is [0, 0.2). The second symbol a2 is encoded by taking the corresponding sub-interval of [0, 0.2), giving the new interval [0.04, 0.08). And so on: the interval narrows to [0.056, 0.072), then [0.0624, 0.0688), and finally [0.06752, 0.0688).
Finally, choose a number from the interval [0.06752, 0.0688) as the output, e.g. 0.068.
41 7.3 Basic coding methods
Arithmetic coding: example (decoding 0.068)
0       <= 0.068 < 0.2     → a1
0.04    <= 0.068 < 0.08    → a2
0.056   <= 0.068 < 0.072   → a3
0.0624  <= 0.068 < 0.0688  → a3
0.06752 <= 0.068 < 0.0688  → a4
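The interval narrowing and the decoding walk above can be sketched as:

```python
# Arithmetic coding on the slide's four-symbol source.
ranges = {"a1": (0.0, 0.2), "a2": (0.2, 0.4),
          "a3": (0.4, 0.8), "a4": (0.8, 1.0)}

def encode_interval(message):
    """Narrow [0, 1) symbol by symbol; any number inside encodes the message."""
    low, high = 0.0, 1.0
    for sym in message:
        lo, hi = ranges[sym]
        low, high = low + (high - low) * lo, low + (high - low) * hi
    return low, high

def decode(value, n):
    """Recover n symbols by repeatedly locating value and rescaling it."""
    out = []
    for _ in range(n):
        for sym, (lo, hi) in ranges.items():
            if lo <= value < hi:
                out.append(sym)
                value = (value - lo) / (hi - lo)
                break
    return out

low, high = encode_interval(["a1", "a2", "a3", "a3", "a4"])
print(low, high)          # ~[0.06752, 0.0688)
print(decode(0.068, 5))   # recovers a1 a2 a3 a3 a4
```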
42 7.3 Basic coding methods
Binary image coding: constant area coding
Classify blocks as all white, all black, or mixed intensity.
The most probable category is coded with the 1-bit code word 0; the other two categories are coded with the 2-bit code words 10 and 11.
The code assigned to the mixed-intensity category is used as a prefix.
43 7.3 Basic coding methods
Binary image coding: run-length coding
Represent each row of an image by a sequence of lengths that describe runs of black and white pixels.
This is the standard compression approach in facsimile (FAX) coding: CCITT G3, G4.
Example: a row is encoded as a list of (value, run length) pairs.
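A minimal run-length encoder and decoder for one binary row; the (value, length) pair format is illustrative, since real G3/G4 coders Huffman-code the run lengths rather than storing pairs:

```python
# Run-length coding of a binary row as (value, run length) pairs.
def rle_encode(row):
    runs, prev, count = [], row[0], 0
    for px in row:
        if px == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = px, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

row = [0] * 8 + [1] * 3 + [0] * 5
runs = rle_encode(row)
print(runs)   # [(0, 8), (1, 3), (0, 5)]
```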
44 7.3 Basic coding methods
Binary image coding: contour tracing and coding
Δ' is the difference between the starting coordinates of the front contours on adjacent lines.
Δ'' is the difference between the front-to-back contour lengths on adjacent lines.
Code: new start, Δ', and Δ''.
45 But what about gray-scale images?
46 7.4 Predictive coding
7.4.1 Lossless coding
Encoder: e_n = f_n - f^_n, followed by Huffman coding or arithmetic coding of e_n.
Decoder: f_n = e_n + f^_n.
47 7.4 Predictive coding
7.4.1 Lossless coding
In most cases the prediction is formed by a linear combination of m previous pixels. That is,
    f^_n = round[ sum_{i=1}^{m} α_i f_{n-i} ]
where m is the order of the linear predictor and the α_i are prediction coefficients.
For example, a 1-D linear predictor can be written as
    f^(x, y) = round[ sum_{i=1}^{m} α_i f(x, y-i) ]
48 7.4 Predictive coding
7.4.1 Lossless coding: experimental result
m = 1, α = 1:
    f^(x, y) = round[ f(x, y-1) ]
Original image and residual image (128 represents 0).
49 7.4 Predictive coding
7.4.1 Lossless coding: experimental result
m = 1, α = 1. The residual histogram is well modelled by a Laplacian density:
    p(e) = 1/(sqrt(2) σ_e) exp( -sqrt(2) |e| / σ_e )
Histogram of the original image vs. histogram of the residual image.
50 7.4 Predictive coding
7.4.1 Lossless coding: JPEG lossless compression standard
Prediction schemes, with pixel locations
    c b
    a x
Scheme  Prediction value
0       null
1       a
2       b
3       c
4       a + b - c
5       a + (b - c)/2
6       b + (a - c)/2
7       (a + b)/2
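The eight schemes in the table can be written as one small function; the use of integer (floor) division for the halving schemes is an assumption here, made so predictions stay integers as in the standard's integer arithmetic:

```python
# The eight JPEG lossless prediction schemes, with a = left,
# b = above, c = upper-left neighbour of the current pixel x.
def predict(scheme, a, b, c):
    return {
        0: 0,              # null prediction
        1: a,
        2: b,
        3: c,
        4: a + b - c,
        5: a + (b - c) // 2,
        6: b + (a - c) // 2,
        7: (a + b) // 2,
    }[scheme]

# hypothetical neighbourhood: a=100, b=104, c=98
print([predict(s, 100, 104, 98) for s in range(8)])
```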
51 7.4 Predictive coding
7.4.2 Lossy coding: model
Encoder and decoder share the predictor; the decoder reconstructs
    f'_n = e'_n + f^_n
52 7.4 Predictive coding
7.4.2 Lossy coding: optimal predictors
Differential pulse code modulation (DPCM) minimizes the encoder's mean-square prediction error
    E{e_n^2} = E{[f_n - f^_n]^2}
subject to the constraint that
    f^_n = sum_{i=1}^{m} α_i f_{n-i}
so that
    E{e_n^2} = E{[f_n - sum_{i=1}^{m} α_i f_{n-i}]^2}
53 7.4 Predictive coding
7.4.2 Lossy coding: optimal predictors
Experiment with four predictors:
(1) f^(x, y) = 0.97 f(x, y-1)
(2) f^(x, y) = 0.5 f(x, y-1) + 0.5 f(x-1, y)
(3) f^(x, y) = 0.75 f(x, y-1) + 0.75 f(x-1, y) - 0.5 f(x-1, y-1)
(4) f^(x, y) = 0.97 f(x, y-1) if Δh <= Δv, otherwise 0.97 f(x-1, y)
where Δh = |f(x-1, y) - f(x-1, y-1)| and Δv = |f(x, y-1) - f(x-1, y-1)|, over the neighbourhood
    f(x-1, y-1)  f(x-1, y)  f(x-1, y+1)
    f(x, y-1)    f(x, y)
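Predictor (4) is adaptive: it picks the neighbour that lies along the local edge direction. A sketch under the definitions of Δh and Δv above, on a made-up image with a vertical edge:

```python
# Adaptive predictor (4): choose the left or upper neighbour according
# to the local gradients Δh and Δv.
def adaptive_predict(f, x, y, alpha=0.97):
    dh = abs(f[x - 1][y] - f[x - 1][y - 1])  # change along the row above
    dv = abs(f[x][y - 1] - f[x - 1][y - 1])  # change down the column to the left
    if dh <= dv:
        return alpha * f[x][y - 1]   # predict from the left neighbour
    return alpha * f[x - 1][y]       # predict from the neighbour above

f = [[10, 10, 80],
     [10, 10, 80]]   # vertical edge between columns 1 and 2
print(adaptive_predict(f, 1, 2))   # predicts from above, along the edge
```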
54 7.4 Predictive coding
7.4.2 Lossy coding: optimal predictors, experiment (original image)
55 7.4 Predictive coding
7.4.2 Lossy coding: optimal predictors
Conclusion: the prediction error decreases as the order of the predictor increases.
56 7.4 Predictive coding
7.4.2 Lossy coding: optimal quantization
The staircase quantization function t = q(s).
57 7.5 Transform Coding
7.5.1 Review of image transforms: definitions
Direct transform: Y = TX; inverse transform: X = T^{-1}Y (vector expression).
Polynomial expression:
    y_{m,n} = sum_{i=0}^{M-1} sum_{j=0}^{N-1} x_{i,j} φ(i, j, m, n)
Question: why can a transform compress data?
58 7.5 Transform Coding
7.5.1 Review of image transforms: properties
- Entropy preserving: H(X) = H(Y)
- Energy preserving: sum_{m,n} |X(m, n)|^2 = sum_{k,l} |Y(k, l)|^2
- Decorrelation: H(Y) = H_0(Y) while H(X) = H_m(X), so H_0(X) > H_0(Y)
- Energy re-assignment: energy is compacted into few coefficients (original vs. DFT vs. DCT)
59 7.5 Transform Coding
7.5.2 Transform coding system
Encoder: f(x, y) → construct n x n subimages → forward transform → quantizer → symbol encoder → bit stream
Decoder: bit stream → symbol decoder → inverse transform → merge n x n subimages → f^(x, y)
60 7.5 Transform Coding
7.5.3 Transform selection
- Information packing ability: KLT > DCT > DFT > WHT
- Computational complexity: WHT < DCT < DFT < KLT
Subimage size selection:
- Computational complexity increases as the subimage size increases
- Correlation decreases as the subimage size increases
- The most popular subimage sizes are 8x8 and 16x16
61 7.5 Transform Coding
Bit allocation: zonal coding
The transform coefficients of maximum variance carry the most information and should be retained.
Zonal mask and bit allocation.
62 7.5 Transform Coding
Bit allocation: threshold coding
Zig-zag ordering and threshold mask.
63 7.5 Transform Coding
Bit allocation: threshold coding
There are three basic ways to threshold the transformed coefficients:
- A single global threshold applied to all subimages
- A different threshold for each subimage
- A threshold that varies as a function of the location of each coefficient within the subimage:
    T^(u, v) = round[ T(u, v) / Z(u, v) ]
where T(u, v) are the transform coefficients and Z(u, v) is the quantization matrix.
64 7.5 Transform Coding
Bit allocation: threshold coding
Z(u, v) may be assigned a particular constant value c, or the Z(u, v) matrix used in the JPEG standard.
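The threshold quantization step T^(u, v) = round[T(u, v)/Z(u, v)] can be sketched on one 8x8 block. For brevity a flat Z(u, v) = c is used rather than the JPEG matrix, and the coefficient values are made up:

```python
# Quantize and dequantize one 8x8 block of transform coefficients.
def quantize(T, Z):
    return [[round(T[u][v] / Z[u][v]) for v in range(8)] for u in range(8)]

def dequantize(Th, Z):
    return [[Th[u][v] * Z[u][v] for v in range(8)] for u in range(8)]

c = 16
Z = [[c] * 8 for _ in range(8)]   # flat quantization matrix Z(u, v) = c

# hypothetical coefficients: large DC term, two mid-size, the rest small
T = [[235 if (u, v) == (0, 0) else (30 if u + v == 1 else 3)
      for v in range(8)] for u in range(8)]

Th = quantize(T, Z)
print(Th[0][0], Th[0][1], Th[2][3])   # small coefficients quantize to zero
```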
65 7.5 Transform Coding
Bit allocation: threshold coding, experimental results
Left column: quantized with Z(u, v). Right column: quantized with 4 Z(u, v).
66 7.5 Transform Coding
JPEG lossy compression standard
DCT → quantise → zig-zag → run-length code → entropy code
67 Worked example: original image block → DCT → quantise → zig-zag → run-length code → Huffman code.
68 JPEG decoding
Entropy decode → run-length decode → inverse zig-zag → dequantise → inverse DCT
69 Result of coding and decoding: original block, reconstructed block, errors.
70 7.5 Transform Coding
JPEG lossy compression standard
Encoder: f(x, y) → construct 8x8 subimages → DCT → quantizer → zig-zag → Huffman coding → bit stream
Question: why are there mosaic (blocking) artifacts in JPEG images?
71 7.5 Transform Coding
JPEG 2000 lossy compression standard
74 7.6 Introduction to international standards
Image compression standards: JPEG, JPEG 2000
Video compression standards: MPEG-1, MPEG-2, MPEG-4; H.261, H.263, H.263+, H.264; H.264/AVC
75 Video compression standards
76 Homework
6.7: (1) Explain whether a variable-length coding method can compress an image with 2^n gray levels whose histogram has already been equalized. (2) Does such an image contain interpixel redundancy?
6.?: The symbols a, e, i, o, u, ? occur with probabilities 0.2, 0.3, 0.1, 0.2, 0.1, 0.1. Decode; the decoded length is 6.
77 THE END
West campus of USTC (科大西区)
Image Compression. Fundamentals: Coding redundancy. The gray level histogram of an image can reveal a great deal of information about the image
Fundamentals: Coding redundancy The gray level histogram of an image can reveal a great deal of information about the image That probability (frequency) of occurrence of gray level r k is p(r k ), p n
More informationECE472/572 - Lecture 11. Roadmap. Roadmap. Image Compression Fundamentals and Lossless Compression Techniques 11/03/11.
ECE47/57 - Lecture Image Compression Fundamentals and Lossless Compression Techniques /03/ Roadmap Preprocessing low level Image Enhancement Image Restoration Image Segmentation Image Acquisition Image
More informationImage and Multidimensional Signal Processing
Image and Multidimensional Signal Processing Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ Image Compression 2 Image Compression Goal: Reduce amount
More informationIMAGE COMPRESSION-II. Week IX. 03/6/2003 Image Compression-II 1
IMAGE COMPRESSION-II Week IX 3/6/23 Image Compression-II 1 IMAGE COMPRESSION Data redundancy Self-information and Entropy Error-free and lossy compression Huffman coding Predictive coding Transform coding
More informationCompression. What. Why. Reduce the amount of information (bits) needed to represent image Video: 720 x 480 res, 30 fps, color
Compression What Reduce the amount of information (bits) needed to represent image Video: 720 x 480 res, 30 fps, color Why 720x480x20x3 = 31,104,000 bytes/sec 30x60x120 = 216 Gigabytes for a 2 hour movie
More informationBasic Principles of Video Coding
Basic Principles of Video Coding Introduction Categories of Video Coding Schemes Information Theory Overview of Video Coding Techniques Predictive coding Transform coding Quantization Entropy coding Motion
More informationL. Yaroslavsky. Fundamentals of Digital Image Processing. Course
L. Yaroslavsky. Fundamentals of Digital Image Processing. Course 0555.330 Lec. 6. Principles of image coding The term image coding or image compression refers to processing image digital data aimed at
More informationImage Compression. Qiaoyong Zhong. November 19, CAS-MPG Partner Institute for Computational Biology (PICB)
Image Compression Qiaoyong Zhong CAS-MPG Partner Institute for Computational Biology (PICB) November 19, 2012 1 / 53 Image Compression The art and science of reducing the amount of data required to represent
More informationLecture 2: Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments
Lecture 2: Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments Dr. Jian Zhang Conjoint Associate Professor NICTA & CSE UNSW COMP9519 Multimedia Systems S2 2006 jzhang@cse.unsw.edu.au
More informationSource mechanism solution
Source mechanism solution Contents Source mechanism solution 1 1. A general introduction 1 2. A step-by-step guide 1 Step-1: Prepare data files 1 Step-2: Start GeoTaos or GeoTaos_Map 2 Step-3: Convert
More informationLecture 2: Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments. Tutorial 1. Acknowledgement and References for lectures 1 to 5
Lecture : Introduction to Audio, Video & Image Coding Techniques (I) -- Fundaments Dr. Jian Zhang Conjoint Associate Professor NICTA & CSE UNSW COMP959 Multimedia Systems S 006 jzhang@cse.unsw.edu.au Acknowledgement
More informationMultimedia Networking ECE 599
Multimedia Networking ECE 599 Prof. Thinh Nguyen School of Electrical Engineering and Computer Science Based on lectures from B. Lee, B. Girod, and A. Mukherjee 1 Outline Digital Signal Representation
More informationDigital Image Processing. Point Processing( 点处理 )
Digital Image Processing Point Processing( 点处理 ) Point Processing of Images In a digital image, point = pixel. Point processing transforms a pixel s value as function of its value alone; it it does not
More informationReduce the amount of data required to represent a given quantity of information Data vs information R = 1 1 C
Image Compression Background Reduce the amount of data to represent a digital image Storage and transmission Consider the live streaming of a movie at standard definition video A color frame is 720 480
More informationDigital communication system. Shannon s separation principle
Digital communication system Representation of the source signal by a stream of (binary) symbols Adaptation to the properties of the transmission channel information source source coder channel coder modulation
More informationBASICS OF COMPRESSION THEORY
BASICS OF COMPRESSION THEORY Why Compression? Task: storage and transport of multimedia information. E.g.: non-interlaced HDTV: 0x0x0x = Mb/s!! Solutions: Develop technologies for higher bandwidth Find
More informationModule 2 LOSSLESS IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module 2 LOSSLESS IMAGE COMPRESSION SYSTEMS Lesson 5 Other Coding Techniques Instructional Objectives At the end of this lesson, the students should be able to:. Convert a gray-scale image into bit-plane
More informationCompression methods: the 1 st generation
Compression methods: the 1 st generation 1998-2017 Josef Pelikán CGG MFF UK Praha pepca@cgg.mff.cuni.cz http://cgg.mff.cuni.cz/~pepca/ Still1g 2017 Josef Pelikán, http://cgg.mff.cuni.cz/~pepca 1 / 32 Basic
More informationDigital Image Processing Lectures 25 & 26
Lectures 25 & 26, Professor Department of Electrical and Computer Engineering Colorado State University Spring 2015 Area 4: Image Encoding and Compression Goal: To exploit the redundancies in the image
More informationOverview. Analog capturing device (camera, microphone) PCM encoded or raw signal ( wav, bmp, ) A/D CONVERTER. Compressed bit stream (mp3, jpg, )
Overview Analog capturing device (camera, microphone) Sampling Fine Quantization A/D CONVERTER PCM encoded or raw signal ( wav, bmp, ) Transform Quantizer VLC encoding Compressed bit stream (mp3, jpg,
More informationIMAGE COMPRESSION IMAGE COMPRESSION-II. Coding Redundancy (contd.) Data Redundancy. Predictive coding. General Model
IMAGE COMRESSIO IMAGE COMRESSIO-II Data redundancy Self-information and Entropy Error-free and lossy compression Huffman coding redictive coding Transform coding Week IX 3/6/23 Image Compression-II 3/6/23
More informationencoding without prediction) (Server) Quantization: Initial Data 0, 1, 2, Quantized Data 0, 1, 2, 3, 4, 8, 16, 32, 64, 128, 256
General Models for Compression / Decompression -they apply to symbols data, text, and to image but not video 1. Simplest model (Lossless ( encoding without prediction) (server) Signal Encode Transmit (client)
More informationOn the Quark model based on virtual spacetime and the origin of fractional charge
On the Quark model based on virtual spacetime and the origin of fractional charge Zhi Cheng No. 9 Bairong st. Baiyun District, Guangzhou, China. 510400. gzchengzhi@hotmail.com Abstract: The quark model
More informationrepetition, part ii Ole-Johan Skrede INF Digital Image Processing
repetition, part ii Ole-Johan Skrede 24.05.2017 INF2310 - Digital Image Processing Department of Informatics The Faculty of Mathematics and Natural Sciences University of Oslo today s lecture Coding and
More informationImage Compression - JPEG
Overview of JPEG CpSc 86: Multimedia Systems and Applications Image Compression - JPEG What is JPEG? "Joint Photographic Expert Group". Voted as international standard in 99. Works with colour and greyscale
More informationTransform Coding. Transform Coding Principle
Transform Coding Principle of block-wise transform coding Properties of orthonormal transforms Discrete cosine transform (DCT) Bit allocation for transform coefficients Entropy coding of transform coefficients
More informationUNIT I INFORMATION THEORY. I k log 2
UNIT I INFORMATION THEORY Claude Shannon 1916-2001 Creator of Information Theory, lays the foundation for implementing logic in digital circuits as part of his Masters Thesis! (1939) and published a paper
More informationInformation and Entropy
Information and Entropy Shannon s Separation Principle Source Coding Principles Entropy Variable Length Codes Huffman Codes Joint Sources Arithmetic Codes Adaptive Codes Thomas Wiegand: Digital Image Communication
More informationChapter 22 Lecture. Essential University Physics Richard Wolfson 2 nd Edition. Electric Potential 電位 Pearson Education, Inc.
Chapter 22 Lecture Essential University Physics Richard Wolfson 2 nd Edition Electric Potential 電位 Slide 22-1 In this lecture you ll learn 簡介 The concept of electric potential difference 電位差 Including
More informationObjective: Reduction of data redundancy. Coding redundancy Interpixel redundancy Psychovisual redundancy Fall LIST 2
Image Compression Objective: Reduction of data redundancy Coding redundancy Interpixel redundancy Psychovisual redundancy 20-Fall LIST 2 Method: Coding Redundancy Variable-Length Coding Interpixel Redundancy
More informationLecture 7 Predictive Coding & Quantization
Shujun LI (李树钧): INF-10845-20091 Multimedia Coding Lecture 7 Predictive Coding & Quantization June 3, 2009 Outline Predictive Coding Motion Estimation and Compensation Context-Based Coding Quantization
More informationFundamentals of Image Compression
Fundaentals of Iage Copression Iage Copression reduce the size of iage data file while retaining necessary inforation Original uncopressed Iage Copression (encoding) 01101 Decopression (decoding) Copressed
More informationImage Data Compression
Image Data Compression Image data compression is important for - image archiving e.g. satellite data - image transmission e.g. web data - multimedia applications e.g. desk-top editing Image data compression
More informationWavelet Scalable Video Codec Part 1: image compression by JPEG2000
1 Wavelet Scalable Video Codec Part 1: image compression by JPEG2000 Aline Roumy aline.roumy@inria.fr May 2011 2 Motivation for Video Compression Digital video studio standard ITU-R Rec. 601 Y luminance
More informationSYDE 575: Introduction to Image Processing. Image Compression Part 2: Variable-rate compression
SYDE 575: Introduction to Image Processing Image Compression Part 2: Variable-rate compression Variable-rate Compression: Transform-based compression As mentioned earlier, we wish to transform image data
More informationCompression and Coding
Compression and Coding Theory and Applications Part 1: Fundamentals Gloria Menegaz 1 Transmitter (Encoder) What is the problem? Receiver (Decoder) Transformation information unit Channel Ordering (significance)
More informationd) There is a Web page that includes links to both Web page A and Web page B.
P403-406 5. Determine whether the relation R on the set of all eb pages is reflexive( 自反 ), symmetric( 对 称 ), antisymmetric( 反对称 ), and/or transitive( 传递 ), where (a, b) R if and only if a) Everyone who
More informationGalileo Galilei ( ) Title page of Galileo's Dialogue concerning the two chief world systems, published in Florence in February 1632.
Special Relativity Galileo Galilei (1564-1642) Title page of Galileo's Dialogue concerning the two chief world systems, published in Florence in February 1632. 2 Galilean Transformation z z!!! r ' = r
More informationThe dynamic N1-methyladenosine methylome in eukaryotic messenger RNA 报告人 : 沈胤