Quantization. Quantization is the realization of the lossy part of source coding; it typically allows for a trade-off between signal fidelity and bit rate.


1 Quantization
Quantization is the realization of the lossy part of source coding. It typically allows for a trade-off between signal fidelity and bit rate.
Quantization is a functional mapping of a (continuous or discrete, scalar or vector) input point to an output point, such that the set of obtainable output points is countable, and the set of obtainable output points has fewer members than the set of admissible input points: it introduces a non-reversible loss in signal fidelity.
[Figure: quantizer block diagram s → Q → s'; continuous signal s(t) and quantized samples s'[n]]
December 5, / 77

2 Outline
Structure and Performance of Quantizers
Scalar Quantization
  Scalar Quantization with Fixed-Length Codes
  Scalar Quantization with Variable-Length Codes
  High-Rate Operational Distortion Rate Functions
  Approximation for Distortion Rate Functions
  Performance Comparison for Gaussian Sources
  Scalar Quantization for Sources with Memory
Vector Quantization
  Linde-Buzo-Gray Quantizer Design
  Gain over Scalar Quantization
  Chou-Lookabaugh-Gray Quantizer Design
  Structured Vector Quantizers

3 Structure of Quantizers
The quantizer description is split into encoder α, transmitted decoding index i, and decoder β.
[Figure: s → α → i → β → s']
Inserting a lossless mapping γ and inverse mapping γ⁻¹ of the indexes i to a binary sequence b ∈ {0, 1}* as well as the transmission channel yields the quantization procedure:
[Figure: s → α → i → γ → b → channel → b' → γ⁻¹ → i' → β → s']
1 Encoder α maps one or more samples of the input signal s to indexes i
2 Mapping γ maps the indexes i into a bit stream b (e.g. fixed-length or variable-length lossless coder)
3 Channel outputs the transmitted bit stream b' (= b for noiseless transmission)
4 Inverse mapping γ⁻¹ inverts mapping γ, reproducing the indexes i (fixed-length or variable-length lossless decoder, respectively)
5 Decoder β maps the decoding index i to one or more samples of the decoded signal s'

4 Quantizer Mappings
Encoder mappings α, γ have their counterparts β, γ⁻¹ at the decoder. Decoder mappings must be either implemented at the receiver and/or transmitted.
Mapping for N-dimensional vectors:
  Q : ℝᴺ → {s'_0, s'_1, …, s'_{K-1}}   (1)
Quantization cells: subset C_i of the N-dimensional Euclidean space ℝᴺ:
  C_i = { s ∈ ℝᴺ : Q(s) = s'_i }   (2)
The quantization cells C_i form a partition of the N-dimensional Euclidean space ℝᴺ:
  ∪_{i=0}^{K-1} C_i = ℝᴺ  with  i ≠ j : C_i ∩ C_j = ∅   (3)
Specify the quantization mapping:
  Q(s) = s'_i  ∀ s ∈ C_i   (4)

5 Performance of Quantizers
The encoder mapping α : ℝᴺ → I introduces distortion.
[Figure: s → α → i → γ → b → channel → b' → γ⁻¹ → i' → β → s'; distortion D, rate R]
Assume the random process {S_n} to be stationary:
Distortion
  D = E{ d_N(S_n, Q(S_n)) } = (1/N) Σ_{i=0}^{K-1} ∫_{C_i} d_N(s, Q(s)) f_S(s) ds   (5)
Rate
  R = (1/N) E{ |γ(Q(S_n))| } = (1/N) Σ_{i=0}^{K-1} p(s'_i) |γ(s'_i)|   (6)
where |γ(s'_i)| denotes the codeword length and p(s'_i) denotes the pmf for the s'_i:
  p(s'_i) = ∫_{C_i} f_S(s) ds   (7)

6 Scalar Quantization
Below: input/output function of a scalar quantizer with K reconstruction levels s'_i and K−1 decision thresholds u_i.
[Figure: staircase input/output function: output levels s'_i, s'_{i+1}, s'_{i+2} over the input signal s, with decision thresholds u_i, u_{i+1}]
A scalar (one-dimensional) quantizer is a mapping
  Q : ℝ → {s'_0, s'_1, …, s'_{K-1}}   (8)
Quantization cells C_i = [u_i, u_{i+1}) with u_0 = −∞ and u_K = ∞ (note: C_0 = (−∞, u_1))
The step size for reconstruction level i is denoted as Δ_i = u_{i+1} − u_i

7 Scalar Quantization: Performance of Scalar Quantizers
Scalar quantization of an amplitude-continuous random variable S can be viewed as a discretization of its continuous pdf f(s).
Average distortion when assuming the MSE criterion is given as
  D = E{ d_1(S, β(α(S))) } = E{ d_1(S, S') } = Σ_{i=0}^{K-1} ∫_{u_i}^{u_{i+1}} (s − s'_i)² f(s) ds   (9)
Average rate is given by the expectation value of the codeword length:
  R = E{ |γ(Q(S))| } = Σ_{i=0}^{K-1} p(s'_i) |γ(s'_i)|   (10)
Optimize the mappings α (i.e. u_i), β (i.e. s'_i), and γ.

8 Scalar Quantization: Scalar Quantization with Fixed-Length Codes
Restriction on the lossless coding mapping γ: assign a codeword of the same length to each reconstruction level s'_i.
Quantizer of size K: the codeword length must be greater than or equal to ⌈log₂ K⌉.
If K is not a power of 2, the quantizer requires the same minimum codeword length as a quantizer of size K' = 2^⌈log₂ K⌉. Since K < K', a quantizer of size K' can achieve a smaller distortion.
Define the rate while only considering K that represents a power of 2:
  R = log₂ K   (11)

9 Scalar Quantization: Pulse-Code-Modulation (PCM)
PCM: uniform mappings α and β. All quantization intervals have the same size Δ. Reconstruction values s'_i are placed in the middle between the decision thresholds u_i and u_{i+1}.
PCM for random processes with amplitude range [s_min, s_max], A = s_max − s_min:
  Δ = A/K = A · 2^{−R}   (12)
Quantization mapping:
  Q(s) = s_min + ( ⌊(s − s_min)/Δ⌋ + 1/2 ) · Δ   (13)
Uniform distribution f(s) = 1/A for −A/2 ≤ s ≤ A/2:
  D = Σ_{i=0}^{K-1} ∫_{s_min+iΔ}^{s_min+(i+1)Δ} ( s − s_min − (i + 1/2)Δ )² (1/A) ds   (14)
Operational rate distortion function:
  D_PCM,uniform(R) = (A²/12) · 2^{−2R} = σ² · 2^{−2R}   (15)
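A minimal numerical sketch of the PCM mapping above together with an empirical check of its operational distortion rate function for a uniform source. The function name `pcm_quantize` and all parameter choices are mine, not from the lecture:

```python
import numpy as np

def pcm_quantize(s, s_min, s_max, R):
    """Uniform (PCM) quantizer: K = 2**R cells of width delta over
    [s_min, s_max], reconstruction values in the middle of each cell."""
    K = 2 ** R
    delta = (s_max - s_min) / K
    i = np.clip(np.floor((s - s_min) / delta), 0, K - 1)  # cell index
    return s_min + (i + 0.5) * delta

# empirical check for a uniform source: D should match (A^2 / 12) * 2**(-2R)
rng = np.random.default_rng(0)
A, R = 2.0, 4
s = rng.uniform(-A / 2, A / 2, 200_000)
D = np.mean((s - pcm_quantize(s, -A / 2, A / 2, R)) ** 2)
D_theory = A ** 2 / 12 * 2 ** (-2 * R)
```

Each additional bit of rate halves the step size and reduces the distortion by a factor of 4, i.e. by about 6.02 dB in SNR.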

10 Scalar Quantization: D(R) for PCM for Sources with Infinite Support
In general, the interval limits u_i can be chosen as
  u_0 = −∞,  u_K = ∞,  u_{i+1} − u_i = Δ  for 1 ≤ i ≤ K−1   (16)
Symmetric pdfs: reconstruction symbols s'_i with 0 ≤ i < K and interval boundaries u_i with 0 < i < K:
  s'_i = ( i − (K−1)/2 ) Δ,   u_i = ( i − K/2 ) Δ   (17)
The distortion D is split into granular distortion D_G and overload distortion D_O:
  D(Δ) = D_G(Δ) + D_O(Δ)
Optimum Δ for a given pdf at a given rate R? Distortion minimization by balancing granular and overload distortion:
  min_Δ D(Δ) = min_Δ [ D_G(Δ) + D_O(Δ) ]   (18)

11 Scalar Quantization: Overload and Granular Distortion
Average distortion for PCM for sources with infinite support:
  D(Δ) = Σ_{i=0}^{K-1} ∫_{u_i}^{u_{i+1}} (s − s'_i)² f(s) ds
       = ∫_{−∞}^{(1−K/2)Δ} (s − s'_0)² f(s) ds   (overload distortion)
       + Σ_{i=1}^{K-2} ∫_{(i−K/2)Δ}^{(i+1−K/2)Δ} (s − s'_i)² f(s) ds   (granular distortion)
       + ∫_{(K/2−1)Δ}^{∞} (s − s'_{K-1})² f(s) ds   (overload distortion)
In general, it is not possible to calculate the optimum step size Δ_opt explicitly: numerical optimization.
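The numerical optimization can be sketched directly from the symmetric construction above: evaluate the total (granular plus overload) distortion by numerical integration and search over the step size. This is my own illustrative sketch for a unit-variance Gaussian, not the course software:

```python
import numpy as np

def D_of_delta(delta, R, span=8.0, num=16001):
    """Distortion D(delta) of a symmetric uniform quantizer with K = 2**R
    levels for a zero-mean, unit-variance Gaussian (Riemann-sum integral)."""
    K = 2 ** R
    s = np.linspace(-span, span, num)
    ds = s[1] - s[0]
    f = np.exp(-s ** 2 / 2) / np.sqrt(2 * np.pi)
    i = np.clip(np.floor(s / delta) + K // 2, 0, K - 1)  # cell index (u_0 = -inf, u_K = inf)
    rec = (i - (K - 1) / 2) * delta                      # reconstruction levels
    return np.sum((s - rec) ** 2 * f) * ds               # granular + overload distortion

# grid search for the optimum step size at R = 2 bit/sample
deltas = np.linspace(0.2, 2.0, 361)
D = np.array([D_of_delta(d, 2) for d in deltas])
d_opt = deltas[int(np.argmin(D))]
```

Small Δ inflates the overload term, large Δ inflates the granular term; the minimum lies where the two effects balance.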

12 Scalar Quantization: Optimum Step Size for PCM
[Figure: distortion D(Δ) vs. step size Δ for a Gaussian pdf with unit variance; cyan: R = 2, magenta: R = 3, green: R = 4 bit/sample; minima marked as min_Δ D(Δ)]

13 Scalar Quantization: Numerical Optimization Results for Uniform Quantization and Fixed-Length Codes
[Figure: SNR in dB and Δ_opt/σ vs. rate R in bit/symbol for uniform (U), Laplacian (L), and Gaussian (G) pdfs; solid lines: Shannon Lower Bound, dashed lines/circles: pdf-optimized uniform quantization]
Numerical minimization of the distortion by varying Δ.
The loss in SNR is large and increases towards higher rates.
Improvement through pdf-optimized quantizers: make Δ_i variable, use variable-length codes, or both?

14 Scalar Quantization: Mid-Rise and Mid-Tread Quantizers
Quantization noise is signal-dependent; the output depends on the quantizer type: mid-rise or mid-tread.
[Figure: input/output functions of a mid-rise and a mid-tread quantizer and the resulting output signals s'[n] for an input s[n]]
For |s| < Δ/2, mid-rise quantizers generate noise with variance Δ²/4.

15 Scalar Quantization: Conditions for Optimality: Generalized Centroid Condition
Average distortion:
  D = Σ_{i=0}^{K-1} D_i(s'_i) = Σ_{i=0}^{K-1} ∫_{u_i}^{u_{i+1}} d_1(s, s'_i) f(s) ds   (19)
Inside a quantization interval, we have f(s) = f(s|s'_i) p(s'_i), yielding
  D_i(s'_i) = p(s'_i) ∫_{u_i}^{u_{i+1}} d_1(s, s'_i) f(s|s'_i) ds = p(s'_i) E{ d_1(S, s'_i) | S ∈ C_i }   (20)
with
  p(s'_i) = ∫_{u_i}^{u_{i+1}} f(s) ds   (21)
Since p(s'_i) does not depend on s'_i:
  s'_i = arg min_{s' ∈ ℝ} E{ d_1(S, s') | S ∈ C_i }   (22)
Generalized centroid condition.

16 Scalar Quantization: Minimizations for Random Variables
Given a random variable X, the value of y that minimizes E{(X − y)²} is
  y = E{X}   (23)
which can be shown by
  E{(X − y)²} = E{(X − E{X} + E{X} − y)²} = E{(X − E{X})²} + (E{X} − y)² ≥ E{(X − E{X})²}   (24)
Consequently, given an event A, the value that minimizes
  E{(X − y)² | X ∈ A}   (25)
is
  y = E{X | X ∈ A}   (26)
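The statement above can be illustrated numerically: for any distribution, a sweep over candidate constants y finds the minimum MSE at the sample mean. The variable names and the choice of an exponential test distribution are mine:

```python
import numpy as np

# numerical illustration: y = E[X] minimizes E[(X - y)^2]
rng = np.random.default_rng(1)
x = rng.exponential(2.0, 100_000)           # any distribution works
ys = np.linspace(0.0, 4.0, 401)             # candidate constants y
mse = np.array([np.mean((x - y) ** 2) for y in ys])
y_best = ys[int(np.argmin(mse))]            # lands at (approximately) np.mean(x)
```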

17 Scalar Quantization: Generalized Centroid Condition for MSE
Generalized centroid condition with d_1(x, y) = (x − y)²:
  s'_i = arg min_{s' ∈ ℝ} E{ d_1(S, s') | S ∈ C_i }   (27)
The value of s'_i that minimizes the generalized centroid condition is
  s'_i = E{ S | S ∈ C_i }
       = ∫_{u_i}^{u_{i+1}} s f(s|s'_i) ds
       = ∫_{u_i}^{u_{i+1}} s f(s)/p(s'_i) ds
       = ( ∫_{u_i}^{u_{i+1}} s f(s) ds ) / ( ∫_{u_i}^{u_{i+1}} f(s) ds )   (28)
with f(s) = f(s|s'_i) p(s'_i) and p(s'_i) = ∫_{u_i}^{u_{i+1}} f(s) ds.

18 Scalar Quantization: Conditions for Optimality: Nearest Neighbor Condition
Given the reconstruction levels s'_i, minimize the average distortion
  D = Σ_{i=0}^{K-1} D_i(s'_i) = Σ_{i=0}^{K-1} ∫_{u_i}^{u_{i+1}} d_1(s, s'_i) f(s) ds   (29)
by choosing the decision boundaries u_i.
A decision threshold u_i influences only the distortions D_i of the neighboring intervals:
  d_1(u_i, s'_{i-1}) = d_1(u_i, s'_i)   (30)
for all decision thresholds u_i with 0 < i < K.
For MSE, the optimal decision thresholds u_i with 0 < i < K are
  u_i = (s'_{i-1} + s'_i)/2   (31)

19 Scalar Quantization: Pdf-Optimized Scalar Quantization with Fixed-Length Codes
Fixed-length code for K reconstruction values:
  K = 2^R,  R : integer   (32)
Minimization of distortion for the MSE criterion:
  D = E{ d_1(S, S') } = Σ_{i=0}^{K-1} ∫_{u_i}^{u_{i+1}} (s − s'_i)² f(s) ds   (33)
Necessary condition for the decision thresholds:
  u_i = (s'_{i-1} + s'_i)/2   (34)
Necessary condition for the reproduction values:
  s'_i = ( ∫_{u_i}^{u_{i+1}} s f(s) ds ) / ( ∫_{u_i}^{u_{i+1}} f(s) ds )   (35)

20 Scalar Quantization: Lloyd Algorithm
Iterative quantizer design (given here for MSE) [Lloyd, 1982]:
1 Given are a sufficiently large realization {s_n} (of the source distribution f(s)) and the number K of reconstruction levels {s'_i} (the logarithm of the size of the set is equal to the rate per symbol)
2 Choose an initial set of unique reconstruction levels {s'_i}
3 Associate all samples of the training set {s_n} with one of the quantization intervals C_i according to α(s_n) = arg min_i d_1(s_n, s'_i) (nearest neighbor condition) and update the decision thresholds {u_i} accordingly
4 Update the reconstruction levels {s'_i} according to s'_i = arg min_{s' ∈ ℝ} E{ d_1(S, s') | α(S) = i } (centroid condition), where the expectation value is taken over the training set
5 Repeat the previous two steps until convergence
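The steps above can be sketched in a few lines of training-set code. This is a simplified illustration under the MSE criterion; the function name `lloyd`, the linear initialization, and the fixed iteration count are my own choices:

```python
import numpy as np

def lloyd(samples, K, iters=100):
    """Lloyd algorithm for MSE: alternate the nearest neighbor condition
    and the centroid condition on a training set."""
    levels = np.linspace(samples.min(), samples.max(), K)  # initial levels
    for _ in range(iters):
        # nearest neighbor condition: assign each sample to the closest level
        idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
        # centroid condition: each level becomes the mean of its cell
        for i in range(K):
            if np.any(idx == i):
                levels[i] = samples[idx == i].mean()
    idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
    D = np.mean((samples - levels[idx]) ** 2)
    levels = np.sort(levels)
    thresholds = (levels[:-1] + levels[1:]) / 2            # midpoint thresholds
    return levels, thresholds, D

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 50_000)                       # unit-variance Gaussian source
levels, thresholds, D = lloyd(train, K=4)
```

On a unit-variance Gaussian training set this lands close to the optimal 2-bit quantizer of the following example (levels near ±0.45 and ±1.51).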

21 Scalar Quantization: Lloyd Algorithm Example for Gaussian Source
Assume a Gaussian distribution with unit variance:
  f(s) = (1/(σ√(2π))) e^{−s²/(2σ²)}   (36)
Draw a sufficiently large number of samples from f(s).
Design a minimum mean squared error (MMSE) quantizer with rate R = 2 bit/symbol.
Result of the Lloyd algorithm:
Decision thresholds u_i: u_1 = −0.98, u_2 = 0, u_3 = 0.98
Decoding symbols s'_i: s'_0 = −1.51, s'_1 = −0.45, s'_2 = 0.45, s'_3 = 1.51
Minimum distortion: D*_F = 0.12 = −9.3 dB
[Figure: pdf f(s) with thresholds u_1, u_2, u_3 and decoding symbols s'_0 … s'_3]

22 Scalar Quantization: Convergence of Lloyd Algorithm for Gaussian Source Example
Initialization A: equidistant levels s'_i
Initialization B: s'_3/s'_0 = ±1.15, s'_2/s'_1 = ±0.32
[Figure: evolution of the thresholds u_i and levels s'_i over the iterations, together with 10 log₁₀ D and 10 log₁₀ ((D − D*_F)/D*_F) for both initializations]
For both initializations, (D − D*_F)/D*_F < 1% after 6 iterations.

23 Scalar Quantization: Lloyd Algorithm Example for Laplacian Source
Assume a Laplacian distribution with unit variance:
  f(s) = (1/(σ√2)) e^{−√2 |s|/σ}   (37)
Draw a sufficiently large number of samples from f(s).
Design a minimum mean squared error (MMSE) quantizer with rate R = 2 bit/symbol.
Result of the Lloyd algorithm:
Decision thresholds u_i: u_1 = −1.13, u_2 = 0, u_3 = 1.13
Decoding symbols s'_i: s'_0 = −1.83, s'_1 = −0.42, s'_2 = 0.42, s'_3 = 1.83
Minimum distortion: D*_F = 0.18 = −7.55 dB
[Figure: pdf f(s) with thresholds u_1, u_2, u_3 and decoding symbols s'_0 … s'_3]

24 Scalar Quantization: Convergence of Lloyd Algorithm for Laplacian Source Example
Initialization A: equidistant levels s'_i
Initialization B: s'_3/s'_0 = ±1.15, s'_2/s'_1 = ±0.32
[Figure: evolution of the thresholds u_i and levels s'_i over the iterations, together with the distortion curves for both initializations]
For both initializations, (D − D*_F)/D*_F < 1% after 6 iterations.

25 Scalar Quantization: Entropy-Constrained Scalar Quantization
In addition to a variable step size: variable-length coding of the indices.
Average rate:
  R = E{ l(S') } = Σ_{i=0}^{K-1} p(s'_i) l(s'_i) ≥ E{ −log₂ p(S') } = −Σ_{i=0}^{K-1} p(s'_i) log₂ p(s'_i)   (38)
with p(s'_i) being given by the integral over f(s) with the limits of the interval C_i = [u_i, u_{i+1}) containing s'_i:
  p(s'_i) = ∫_{u_i}^{u_{i+1}} f(s) ds   (39)
depending only on u_i and u_{i+1} and not on the actual value of s'_i.
Average MSE distortion:
  D = E{ d(S, S') } = Σ_{i=0}^{K-1} ∫_{u_i}^{u_{i+1}} (s − s'_i)² f(s) ds   (40)

26 Scalar Quantization: Lagrangian Function of D and R
We are given the task
  min D subject to R ≤ R_C   (41)
or equivalently
  min R subject to D ≤ D_C   (42)
Instead of the constrained minimization, minimize a Lagrangian function:
  J = D + λR = E{ d_1(S, S') } + λ E{ l(S') }   (43)
The necessary condition with respect to the reconstruction values s'_i remains unchanged, as the average rate does not depend on s'_i (note: s'_i can be moved around inside the interval without affecting the probability of the interval):
  s'_i = E{ S | S ∈ C_i } = ( ∫_{u_i}^{u_{i+1}} s f(s) ds ) / ( ∫_{u_i}^{u_{i+1}} f(s) ds )   (44)
yielding the optimum decoder β(i).

27 Scalar Quantization: Lagrangian Minimization in Discrete Sets
We want to minimize J = D + λR given the optimal decoder β(i):
  J = D + λR = E{ d_1(S, S') } + λ E{ l(S') }   (45)
with Q being the quantizer with expected distortion D and rate R.
A necessary condition for the minimization of J is given when, for every source symbol s_n, we minimize
  α(s_n) = arg min_{s'_i} d_1(s_n, s'_i) + λ l(s'_i)   (46)
with l(s'_i) = −log₂ p(s'_i).
A mapping α : ℝ → I that minimizes the positive integrand in Eq. (46) minimizes the integral in the expectation Eq. (45) (more rigorous proof in [Shoham and Gersho, 1988]).

28 Scalar Quantization: Optimal Decision Thresholds
The Lagrangian functional is minimized for the encoding
  α(s_n) = arg min_{s'_i} d_1(s_n, s'_i) + λ l(s'_i)   (47)
Decision thresholds u_i:
  d_1(u_i, s'_{i-1}) + λ l(s'_{i-1}) = d_1(u_i, s'_i) + λ l(s'_i)   (48)
For MSE distortion, we have
  (u_i − s'_{i-1})² + λ l(s'_{i-1}) = (u_i − s'_i)² + λ l(s'_i)   (49)
yielding
  u_i = (s'_{i-1} + s'_i)/2 + (λ/2) · ( l(s'_i) − l(s'_{i-1}) ) / ( s'_i − s'_{i-1} )   (50)
The decision threshold is shifted from the middle between the reconstruction values toward the reconstruction value with the longer codeword.
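The threshold shift above is easy to sanity-check numerically: with equal codeword lengths the threshold is the plain midpoint, and a longer codeword on one side pushes the threshold toward that side. The function name `ec_threshold` is mine:

```python
def ec_threshold(s_prev, s_cur, l_prev, l_cur, lam):
    """Entropy-constrained decision threshold: midpoint plus a shift
    toward the reconstruction value with the longer codeword."""
    return (s_prev + s_cur) / 2 + lam / 2 * (l_cur - l_prev) / (s_cur - s_prev)

# equal codeword lengths: the plain midpoint
u_mid = ec_threshold(-1.0, 1.0, 2.0, 2.0, lam=0.1)       # -> 0.0
# a longer codeword for the right-hand level shifts the threshold right
u_shifted = ec_threshold(-1.0, 1.0, 2.0, 3.0, lam=0.1)   # -> 0.025
```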

29 Scalar Quantization: Example for Lagrangian Minimization
Five different quantizers with D_i(R) = a_i² 2^{−2R}, with a_i² randomly chosen and six rate distortion points per quantizer.
[Figure, left: each circle is one of the 6 rate distortion points that are available for each quantizer. Right: blue dots show the mean distortion and rate for all possible combinations of the 5 different quantizers with their 6 rate distortion points; red circles show the solutions to the Lagrangian minimization problem.]

30 Scalar Quantization: Entropy-Constrained Lloyd Algorithm
1 Given is a sufficiently large realization {s_n} (of the source distribution f(s))
2 Choose an initial set of reconstruction levels {s'_i} and an initial set of codeword lengths l(s'_i)
3 Associate all samples of the training set {s_n} with one of the quantization intervals C_i according to
  α(s_n) = arg min_{s'_i} d_1(s_n, s'_i) + λ l(s'_i)   (51)
and update the decision thresholds {u_i} accordingly
4 Update the reconstruction levels {s'_i} according to
  s'_i = arg min_{s' ∈ ℝ} E{ d_1(S, s') | α(S) = i }   (52)
where the expectation value is taken over the training set
5 Update the codeword lengths l(s'_i) according to
  l(s'_i) = −log₂ p(s'_i)   (53)
6 Repeat the previous three steps until convergence
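A compact training-set sketch of the three-step iteration above, for a Gaussian source. This is my own simplified illustration (function name, initialization, λ, and iteration count are all assumptions, not values from the lecture); symbols whose probability drops to zero are removed, as discussed on the next slide:

```python
import numpy as np

def ec_lloyd(samples, K, lam, iters=200):
    """Entropy-constrained Lloyd algorithm for MSE: encode with
    d(s, s'_i) + lambda * l(s'_i), then update centroids and lengths."""
    levels = np.linspace(samples.min(), samples.max(), K)
    lengths = np.full(K, np.log2(K))              # start from fixed-length codewords
    for _ in range(iters):
        cost = (samples[:, None] - levels[None, :]) ** 2 + lam * lengths[None, :]
        idx = np.argmin(cost, axis=1)             # Lagrangian encoding rule
        p = np.bincount(idx, minlength=K) / len(samples)
        keep = p > 0                              # unused symbols get removed
        for i in np.flatnonzero(keep):
            levels[i] = samples[idx == i].mean()  # centroid condition
        lengths[keep] = -np.log2(p[keep])         # l(s'_i) = -log2 p(s'_i)
        lengths[~keep] = np.inf                   # never chosen again
    cost = (samples[:, None] - levels[None, :]) ** 2 + lam * lengths[None, :]
    idx = np.argmin(cost, axis=1)
    D = np.mean((samples - levels[idx]) ** 2)
    p = np.bincount(idx, minlength=K) / len(samples)
    R = -np.sum(p[p > 0] * np.log2(p[p > 0]))     # average rate = entropy H(S')
    return D, R

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 50_000)
D, R = ec_lloyd(train, K=16, lam=0.05)
```

The Lagrange multiplier λ selects the operating point: larger λ trades distortion for a lower rate.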

31 Scalar Quantization: Number of Initial Decoding Symbols for EC Lloyd Algorithm
[Figure: distortion D and SNR vs. rate R for EC Lloyd runs initialized with N = 2 … 19 decoding symbols]
The entropy constraint in the EC Lloyd algorithm causes a shift of the costs depending on the probabilities of the decoding symbols.
Hence, if two decoding symbols s'_i and s'_k are competing, the symbol with the larger popularity (i.e. the larger probability) has a higher chance of being chosen.
A decoding symbol that is not chosen further reduces its associated probability.
As a consequence, symbols get removed, and the EC Lloyd algorithm can be initialized with more symbols than the final result.

32 Scalar Quantization: Entropy-Constrained Lloyd Algorithm for Laplacian Source
Assume a Laplacian distribution with unit variance.
Design an optimal entropy-constrained quantizer with average rate R = 2 bit/symbol.
Optimum average distortion: D*_V = 0.07 = −11.55 dB
Results for the optimal decision thresholds u_i and decoding symbols s'_i are shown in the convergence plots on the next slide.

33 Scalar Quantization: Convergence of EC Lloyd Algorithm for Laplacian Source
Initialization A: a larger set of equidistant levels s'_i
Initialization B: 4 equidistant levels s'_i
[Figure: evolution of the thresholds u_i, levels s'_i, distortion D in dB, and rate R in bit/s over the iterations for both initializations]
For initialization A, faster convergence of the costs than of the thresholds.
For initialization B, the desired quantizer performance is not achieved.

34 Scalar Quantization: Entropy-Constrained Lloyd Algorithm for Gaussian Source
Assume a Gaussian distribution with unit variance.
Design an optimal entropy-constrained quantizer with average rate R = 2 bit/symbol.
Optimum average distortion: D*_V = 0.09 = −10.45 dB
Results for the optimal decision thresholds u_i and decoding symbols s'_i are shown in the convergence plots on the next slide.

35 Scalar Quantization: Convergence of EC Lloyd Algorithm for Gaussian Source
Initialization A: a larger set of equidistant levels s'_i
Initialization B: 4 equidistant levels s'_i
[Figure: evolution of the thresholds u_i, levels s'_i, distortion D in dB, and rate R in bit/s over the iterations for both initializations]
For initialization A, decoding bins get discarded.
For initialization B, the desired quantizer performance is not achieved.

36 Scalar Quantization: High-Rate Approximations for Scalar Quantizers I
Assumption: small sizes Δ_i of the quantization intervals [u_i, u_{i+1}): the marginal pdf f(s) is nearly constant inside each interval:
  f(s) ≈ f(s'_i)  for  s ∈ [u_i, u_{i+1})   (54)
Approximation:
  p(s'_i) = ∫_{u_i}^{u_{i+1}} f(s) ds ≈ (u_{i+1} − u_i) f(s'_i) = Δ_i f(s'_i)   (55)
Average distortion:
  D = E{ d(S, Q(S)) } = Σ_{i=0}^{K-1} ∫_{u_i}^{u_{i+1}} (s − s'_i)² f(s) ds ≈ Σ_{i=0}^{K-1} f(s'_i) ∫_{u_i}^{u_{i+1}} (s − s'_i)² ds   (56)

37 Scalar Quantization: High-Rate Approximations for Scalar Quantizers II
Average distortion:
  D ≈ Σ_{i=0}^{K-1} f(s'_i) ∫_{u_i}^{u_{i+1}} (s − s'_i)² ds   (57)
    = (1/3) Σ_{i=0}^{K-1} f(s'_i) ( (u_{i+1} − s'_i)³ − (u_i − s'_i)³ )   (58)
By differentiation with respect to s'_i, we find that for minimum distortion
  (u_{i+1} − s'_i)² = (u_i − s'_i)²  ⇒  s'_i = (u_i + u_{i+1})/2   (59)
Average distortion at high rates:
  D ≈ (1/12) Σ_{i=0}^{K-1} f(s'_i) Δ_i³ = (1/12) Σ_{i=0}^{K-1} p(s'_i) Δ_i²   (60)
Average distortion at high rates for constant Δ = Δ_i:
  D ≈ Δ²/12   (61)
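The Δ²/12 rule above is easy to verify empirically: for a fine uniform quantizer the error is nearly uniform on (−Δ/2, Δ/2), whose variance is Δ²/12. The parameter choices in this sketch are mine:

```python
import numpy as np

# high-rate sanity check: for a fine uniform quantizer, D -> delta^2 / 12
rng = np.random.default_rng(2)
s = rng.normal(0.0, 1.0, 200_000)
delta = 0.05                                  # small step size (high rate)
s_rec = np.round(s / delta) * delta           # mid-tread uniform quantizer
D = np.mean((s - s_rec) ** 2)                 # close to delta**2 / 12
```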

38 Scalar Quantization: High-Rate Approximations for Scalar Quantizers with FLC
Using Σ_{i=0}^{K-1} 1/K = 1:
  D = (1/12) Σ_{i=0}^{K-1} f(s'_i) Δ_i³ = (1/12) [ ( Σ_{i=0}^{K-1} f(s'_i) Δ_i³ )^{1/3} ( Σ_{i=0}^{K-1} 1/K )^{2/3} ]³   (62)
Using Hölder's inequality with α + β = 1:
  ( Σ_{i=a}^{b} x_i )^{α} ( Σ_{i=a}^{b} y_i )^{β} ≥ Σ_{i=a}^{b} x_i^{α} y_i^{β}   (63)
with equality if and only if x_i is proportional to y_i, it follows with x_i = f(s'_i) Δ_i³, y_i = 1/K, and α = 1/3 that
  D ≥ (1/12) ( Σ_{i=0}^{K-1} ( f(s'_i) Δ_i³ )^{1/3} K^{−2/3} )³ = 1/(12K²) ( Σ_{i=0}^{K-1} ∛f(s'_i) Δ_i )³   (64)
Reason for α = 1/3: obtain an expression in which Δ_i has no exponent.

39 Scalar Quantization: High-Rate Approximations for Scalar Quantizers with FLC
The inequality for the average distortion
  D ≥ 1/(12K²) ( Σ_{i=0}^{K-1} ∛f(s'_i) Δ_i )³   (65)
becomes an equality if all terms f(s'_i) Δ_i³ are equal.
Approximation asymptotically valid for small intervals Δ_i:
  D = 1/(12K²) ( ∫ ∛f(s) ds )³   (66)
With 1/K² = 2^{−2 log₂ K} = 2^{−2R}: operational distortion rate function for optimal scalar quantizers with fixed-length codes:
  D_F(R) = σ² ε²_F 2^{−2R}  with  ε²_F = 1/(12σ²) ( ∫ ∛f(s) ds )³   (67)
Published by Panter and Dite in [Panter and Dite, 1951] and also referred to as the Panter and Dite formula.
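For a unit-variance Gaussian, the Panter and Dite constant can be checked numerically against its closed form √3·π/2 (the value quoted on the next slide). The grid and integration scheme here are my own choices:

```python
import numpy as np

# numerical check of the Panter & Dite constant for a unit-variance Gaussian:
# eps_F^2 = (1/12) * (integral of f(s)^(1/3) ds)^3 = sqrt(3) * pi / 2
s = np.linspace(-30.0, 30.0, 600_001)
ds = s[1] - s[0]
f = np.exp(-s ** 2 / 2) / np.sqrt(2 * np.pi)
integral = np.sum(f ** (1.0 / 3.0)) * ds       # Riemann sum of f^(1/3)
eps_F2 = integral ** 3 / 12.0                  # approx 2.721, i.e. about 4.35 dB
```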

40 Scalar Quantization: D_F(R) for Optimum High-Rate Quantizers with FLCs
D_F(R) for optimum high-rate scalar quantization with fixed-length codes:
  D_F(R) = ε²_F σ² 2^{−2R}   (68)
Uniform pdf: ε²_F = 1 (0 dB)
Laplacian pdf: ε²_F = 4.5 (6.53 dB)
Gaussian pdf: ε²_F = √3 π/2 (4.35 dB)
[Figure: SNR vs. R in bit/symbol for uniform (U), Gaussian (G), and Laplacian (L) pdfs]

41 Scalar Quantization: High-Rate Pdf-Optimized FLC Quantizers Using Companders
Asymptotic performance of the Lloyd quantizer for small quantization step sizes Δ_i.
Realize unequal quantization step sizes Δ_i using companding.
Companding: compressing and expanding:
  s → compressor c(s) → uniform quantizer Q[c(s)] → expander c⁻¹(Q[c(s)]) → s'
Compress the signal s using a non-linear compression function c(s).
Apply uniform quantization with step size Δ to the compressed signal c(s).
Expand the quantized and compressed signal Q[c(s)] using the expander function c⁻¹(·).

42 Scalar Quantization: Non-uniform Quantization by Companding
[Figure: compressor c(s) mapping s to c(s); uniform steps Δ_c applied to Q[c(s)]; expander c⁻¹(Q[c(s)]); resulting non-uniform steps Δ_i in the signal domain]

43 Scalar Quantization: Computation of the Compressor Function
Slope of the compressor function c(·) for the i-th quantization interval:
  dc(s)/ds |_{s=s'_i} ≈ Δ_c / Δ_i   (69)
The condition of equal MSE contribution per interval is given as
  ∛f(s'_i) Δ_i ≈ (1/K) ∫ ∛f(s) ds   (70)
Combining the two above equations yields
  dc(s)/ds |_{s=s'_i} = Δ_c K ∛f(s'_i) / ∫ ∛f(x) dx   (71)
The compressor function is obtained by integrating over its slope:
  c(s) = Δ_c K ( ∫_{−∞}^{s} ∛f(x) dx ) / ( ∫_{−∞}^{∞} ∛f(x) dx ) − Δ_c K/2   (72)
with the minimum of c(s) being chosen as −Δ_c K/2.

44 Scalar Quantization: Examples for Optimal Compander Functions
Uniform pdf with f(s) = 1/A for −A/2 ≤ s ≤ A/2:
  c(s) = Δ_c K s / A  for  −A/2 ≤ s ≤ A/2
Laplacian pdf with f(s) = (1/(σ√2)) e^{−√2 |s|/σ}:
  c(s) = (Δ_c K/2) sgn(s) ( 1 − e^{−√2 |s|/(3σ)} )
Gaussian pdf with f(s) = (1/(σ√(2π))) e^{−s²/(2σ²)}:
  c(s) = (Δ_c K/2) erf( s/(σ√6) )  with  erf(s) = (2/√π) ∫_0^s e^{−x²} dx
[Figure: compressor c(s) and expander c⁻¹(s_c) for uniform (U), Laplacian (L), and Gaussian (G) pdfs]
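A small sketch of the Gaussian compander: a uniform grid in the compressed domain maps back, through the expander, to steps that grow away from the origin. The compressor is scaled to the range (−1, 1) rather than (−Δ_c K/2, Δ_c K/2), and the bisection-based inverse is my own device to keep the sketch dependency-free:

```python
import math

def c(s, sigma=1.0):
    """Compressor for a Gaussian pdf, scaled to the range (-1, 1):
    c(s) = erf(s / (sigma * sqrt(6)))."""
    return math.erf(s / (sigma * math.sqrt(6)))

def c_inv(y, sigma=1.0):
    """Expander c^{-1}, computed by bisection (c is strictly increasing)."""
    lo, hi = -40.0 * sigma, 40.0 * sigma
    for _ in range(200):
        mid = (lo + hi) / 2
        if c(mid, sigma) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# a uniform grid in the compressed domain maps to non-uniform steps in s
delta_c = 0.25
recs = [c_inv(k * delta_c) for k in range(-3, 4)]
inner = recs[4] - recs[3]          # step size near the origin (~0.55)
outer = recs[6] - recs[5]          # step size in the tail (~0.82)
```

The inner cells are narrow where the pdf is large and widen in the tails, which is exactly the non-uniform step-size allocation of the pdf-optimized FLC quantizer.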

45 Scalar Quantization: High-Rate Approximations for Quantizers with Variable-Length Codes
Use variable-length coding for the quantizer indexes.
Again, assume the pmf p(s'_i) of the quantized output signal as p(s'_i) ≈ f(s'_i) Δ_i.
The average rate is given as
  R = H(S') = −Σ_{i=0}^{K-1} p(s'_i) log₂ p(s'_i) = −Σ_{i=0}^{K-1} f(s'_i) Δ_i log₂( f(s'_i) Δ_i )
    = −Σ_{i=0}^{K-1} f(s'_i) log₂ f(s'_i) Δ_i − Σ_{i=0}^{K-1} f(s'_i) Δ_i log₂ Δ_i
    ≈ −∫ f(s) log₂ f(s) ds − (1/2) Σ_{i=0}^{K-1} p(s'_i) log₂ Δ_i²
    = h(S) − (1/2) Σ_{i=0}^{K-1} p(s'_i) log₂ Δ_i²   (73)
where h(S) = −∫ f(s) log₂ f(s) ds denotes the differential entropy.

46 Scalar Quantization: Application of Jensen's Inequality
Jensen's inequality for convex functions φ(x), such as φ(x) = −log₂ x (cp. [Cover and Thomas, 2006, p. 657]):
  φ( Σ_{i=0}^{K-1} a_i x_i ) ≤ Σ_{i=0}^{K-1} a_i φ(x_i)  for  Σ_{i=0}^{K-1} a_i = 1   (74)
with equality for constant x_i.
Jensen's inequality and the high-rate distortion approximation:
  R = h(S) − (1/2) Σ_{i=0}^{K-1} p(s'_i) log₂ Δ_i² ≥ h(S) − (1/2) log₂( Σ_{i=0}^{K-1} p(s'_i) Δ_i² ) = h(S) − (1/2) log₂(12D)   (75)
with equality if and only if all Δ_i = Δ, i.e. for uniform quantization.
For MSE distortion and high rates, optimal quantizers with variable-length codes have uniform step sizes.
The result was first established in [Gish and Pierce, 1968] using variational calculus; the use of Jensen's inequality was first shown in [Gray and Gray, 1977].

47 Scalar Quantization: Distortion Rate Function Comparisons
D_V(R) for optimum high-rate scalar quantizers with variable-length codes:
  D_V(R) = (1/12) 2^{2h(S)} 2^{−2R}   (76)
is a factor of πe/6, or 1.53 dB, from the Shannon Lower Bound (SLB):
  D_L(R) = (1/(2πe)) 2^{2h(S)} 2^{−2R}   (77)
Recall D_F(R) for optimum high-rate scalar quantizers with fixed-length codes:
  D_F(R) = (1/12) ( ∫ ∛f(s) ds )³ 2^{−2R}   (78)
The D_X(R) functions (X = L, F, V) can be expressed in general form as
  D_X(R) = ε²_X σ² 2^{−2R}   (79)
with ε²_X being a factor that depends on the pdf f(s) of the source and the properties of the quantizer (fixed-length vs. variable-length vs. SLB).

48 Scalar Quantization: Distortion Rate Function Comparisons for Various Example Pdfs
The operational distortion rate function at high rates is given as
  D_X(R) = ε²_X σ² 2^{−2R}   (80)
Values of ε²_X for quantization method X:

  Method        | Shannon Lower Bound (SLB) | Panter & Dite (pdf-opt. w. FLC) | Gish & Pierce (uniform Q. w. VLC)
  Uniform pdf   | 6/(πe)                    | 1 (1.53 dB to SLB)              | 1 (1.53 dB to SLB)
  Laplacian pdf | e/π                       | 9/2 = 4.5 (7.1 dB to SLB)       | e²/6 (1.53 dB to SLB)
  Gaussian pdf  | 1                         | √3 π/2 (4.34 dB to SLB)         | πe/6 (1.53 dB to SLB)

49 Scalar Quantization: Approximate Quantizer Functions for ECSQ
The high-rate approximation for ECSQ is only valid for large R:
  D_h(R) = σ² ε² 2^{−2R}   (81)
Gaussian pdf: high-rate approximation valid for R > 1 bit/symbol
Laplacian pdf: high-rate approximation valid for R > 2.5 bit/symbol
Create a function that describes the behavior over the entire bit rate range:
  D(R) = σ² g(R)   (82)
Properties of g(R):
  g(0) = 1   (83)
  lim_{R→∞} g(R) / ( ε² 2^{−2R} ) = 1   (84)
The first derivative of g(R) should be continuous and
  g'(R) = dg(R)/dR < 0   (85)

50 Scalar Quantization: Approximate Quantizer Functions for ECSQ and Gaussian pdfs
Function satisfying the above conditions:
  D(R) = σ² g(R) = σ² ε² (ln 2 / a) log₂( a 2^{−2R} + 1 )   (87)
with a = 0.9519 and ε² = πe/6.
The first derivative is given as
  g'(R) = −ε² (2 ln 2) / ( a + 2^{2R} )   (88)
[Figure: red curve: information distortion rate function; green curve: high-rate approximation for ECSQ; circles: result of the EC Lloyd algorithm]
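The required boundary behavior of the approximate function is straightforward to verify numerically: g(0) evaluates to approximately 1, and at high rates g(R) merges into the high-rate factor ε²·2^{−2R}. The function name is mine:

```python
import math

def g(R, a=0.9519, eps2=math.pi * math.e / 6):
    """Approximate ECSQ distortion factor for a Gaussian pdf:
    g(R) = eps2 * (ln 2 / a) * log2(a * 2**(-2R) + 1)."""
    return eps2 * (math.log(2) / a) * math.log2(a * 2 ** (-2 * R) + 1)

g0 = g(0.0)     # approximately 1, so D(0) = sigma^2
g_hi = g(8.0)   # approaches eps2 * 2**(-16) at high rates
```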

51 Scalar Quantization: Convergence to High-Rate Quantizer Function
Function approximating the ECSQ performance:
  D(R) = σ² g(R) = σ² ε² (ln 2 / a) log₂( a 2^{−2R} + 1 )   (89)
For high rates, 2^{−2R} → 0. By substituting x = 2^{−2R}, the first coefficients of a Taylor series expansion of the function
  g(x) = ε² (ln 2 / a) log₂( a x + 1 )   (90)
around x = 0 read as
  g(x) ≈ g(0) + g'(0) x / 1!   (91)
       = ε² x   (92)
Hence, Eq. (89) converges to the distortion rate function for high rates:
  D(R) = σ² ε² 2^{−2R}   (93)

52 Scalar Quantization: Approximate Quantizer Functions for ECSQ and Laplacian pdfs
Function satisfying the above conditions:
  D(R) = σ² g(R) = σ² ε² (ln 2 / a) log₂( a 2^{−2R} + 1 )   (94)
with a = 0.5 and ε² = e²/6.
[Figure: red curve: information distortion rate function; green curve: high-rate approximation for ECSQ; circles: result of the EC Lloyd algorithm]
Task: find a better function g(R).
Further reading on scalar quantization of Laplacian sources: [Sullivan, 1996].

53 Scalar Quantization: Performance of Different Scalar Quantizers for Gaussian i.i.d.
[Figure: SNR vs. R, comparison of the rate distortion performance for Gaussian sources]
The entropy-constrained scalar quantizer is 1.53 dB from the distortion rate curve.
The distortion rate curve for a Gauss-Markov source with correlation ρ = 0.9 is also shown: statistical dependencies between the samples cannot be exploited by scalar quantization.

54 Vector Quantization: Scalar and Vector Quantization
Scalar quantization can be seen as a special case of vector quantization with N = 1 dimensions.
Vector quantization with N > 1 allows a number of new options.
[Figure: two-dimensional source pdf partitioned into quantization cells, each with a representative vector; axes: amplitude 1, amplitude 2]

55 Vector Quantization: Vector Quantization
Generalization of scalar quantization to the quantization of a vector: an ordered set of real numbers.
Many models and design techniques used in VQ (vector quantization) are natural generalizations of SQ (scalar quantization).
A vector can describe any type of pattern: speech, audio, image, or video segments.
A vector quantizer Q of dimension N and size K is a mapping from a point in N-dimensional Euclidean space ℝᴺ into a finite set C containing K code vectors or code words:
  Q : ℝᴺ → C   (95)
Association of a size-K vector quantizer with a partition of ℝᴺ into K cells C_i for i ∈ I:
  C_i = { s ∈ ℝᴺ : Q(s) = s'_i }   (96)
The cells form a partition of ℝᴺ:
  ∪_i C_i = ℝᴺ  and  C_i ∩ C_j = ∅  for  i ≠ j   (97)

56 Vector Quantization: Measuring Vector Quantizer Performance
Average distortion for an N-dimensional vector quantizer:
  D = E{ d_N(S, S') } = ∫_{ℝᴺ} d_N(s, Q(s)) f(s) ds   (98)
Using the partitioning of ℝᴺ into cells C_i and the codebook C = {s'_0, s'_1, …} for a given quantizer Q:
  D = Σ_{i=0}^{K-1} ∫_{C_i} d_N(s, s'_i) f(s) ds   (99)
Often used and very convenient measure for d(·, ·) (and used throughout this lecture):
  d_N(s, s'_i) = (1/N) ||s − s'_i||² = (1/N) (s − s'_i)ᵀ (s − s'_i) = (1/N) Σ_{n=0}^{N-1} (s_n − s'_{i,n})²   (100)
Average rate (bit/scalar) for an N-dimensional vector quantizer of size K:
  R = −(1/N) E{ log₂ p(S') } = −(1/N) Σ_{i=0}^{K-1} p(s'_i) log₂ p(s'_i)   (101)

57 Vector Quantization: The Linde-Buzo-Gray Algorithm
The Lloyd algorithm has its counterpart for vector quantization [Linde et al., 1980] (given here for MSE):
1 Given is a sufficiently large realization {s_n} (of the source distribution f(s))
2 Choose an initial set of reconstruction vectors {s'_i} (the logarithm of the size of the set is equal to the rate per symbol)
3 Associate all samples of the training set {s_n} with one of the quantization cells C_i according to α(s_n) = arg min_i d_N(s_n, s'_i)
4 Update the set of reconstruction vectors {s'_i} according to s'_i = arg min_{s' ∈ ℝᴺ} E{ d_N(S, s') | α(S) = i }
5 Repeat the previous two steps until convergence
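The steps above can be sketched as a straightforward N-dimensional extension of the scalar Lloyd implementation. This is my own simplified training-set version (function name, random initialization from the training set, and iteration count are assumptions, not taken from the lecture):

```python
import numpy as np

def lbg(samples, K, iters=60):
    """Linde-Buzo-Gray (generalized Lloyd) algorithm for MSE on an
    N-dimensional training set of shape (num_samples, N)."""
    rng = np.random.default_rng(0)
    codebook = samples[rng.choice(len(samples), size=K, replace=False)].copy()
    for _ in range(iters):
        d2 = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        idx = np.argmin(d2, axis=1)                           # nearest neighbor condition
        for i in range(K):
            if np.any(idx == i):
                codebook[i] = samples[idx == i].mean(axis=0)  # centroid condition
    d2 = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    D = d2.min(axis=1).mean() / samples.shape[1]              # per-scalar MSE
    return codebook, D

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, (20_000, 2))    # 2D Gaussian i.i.d., unit variance
codebook, D = lbg(train, K=16)
snr_db = 10 * np.log10(1.0 / D)              # around 9.3 to 9.7 dB, cf. next slide
```

With K = 16 code vectors in N = 2 dimensions, the rate is R = (1/2) log₂ 16 = 2 bit/scalar, which makes the result directly comparable to the scalar 2-bit examples.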

58 Vector Quantization: LBG Algorithm Result for a Two-Dimensional Gaussian i.i.d. and K = 16 Codewords
[Figure: codebooks at initialization, after iteration 8, and after iteration 49; SNR vs. iteration]
2D Gaussian i.i.d. with unit variance; initialization on a 4×4 grid, s'_{i+4k} = (i, k)ᵀ.
After iteration 8: same performance as in the scalar case: 9.30 dB.
After iteration 49: improvement to 9.67 dB.

59 Vector Quantization: LBG Algorithm Result for a Two-Dimensional Gaussian i.i.d. and K = 256 Codewords
[Figure: SNR and entropy H vs. iteration; conjectured VQ performance for R = 4 bit/s; fixed-length SQ performance for R = 4 bit/s; H = 3.69 bit/s]
2D Gaussian i.i.d. with unit variance, random initialization.
Gain of around 0.9 dB for two-dimensional VQ compared to SQ with fixed-length codes, resulting in 20.64 dB (of conjectured 21.5 dB).

60 Vector Quantization: LBG Algorithm Result for a Two-Dimensional Laplacian i.i.d. and K = 16 Codewords
[Figure: SNR vs. iteration]
2D Laplacian i.i.d. with unit variance; initialization (equal to the experiment with the 2D Gaussian i.i.d.): s'_{i+4k} = (i, k)ᵀ.
Large gain (1.32 dB) for two-dimensional VQ compared to SQ with fixed-length codes, resulting in 8.87 dB.

61 Vector Quantization: LBG Algorithm Result for a Two-Dimensional Laplacian i.i.d. and K = 256 Codewords
[Figure: SNR and entropy H vs. iteration; conjectured VQ performance for R = 4 bit/s; fixed-length SQ performance for R = 4 bit/s; H = 3.44 bit/s]
2D Laplacian i.i.d. with unit variance, random initialization.
Large gain (1.84 dB) for two-dimensional VQ compared to SQ with fixed-length codes, resulting in 19.4 dB (of the conjectured VQ performance shown in the figure).

62 Vector Quantization: The Vector Quantizer Advantage
The gain over scalar quantization can be assigned to 3 effects [Lookabaugh and Gray, 1989]:
Space filling advantage:
  The Z lattice is not the most efficient sphere packing in N dimensions (N > 1)
  Independent of the source distribution or statistical dependencies
  Maximum gain for N → ∞: 1.53 dB
Shape advantage:
  Exploit the shape of the source pdf
  Can also be exploited using entropy-constrained scalar quantization
Memory advantage:
  Exploit statistical dependencies of the source
  Can also be exploited using predictive, transform, sub-band, or block entropy coding

63 Vector Quantization: Space Filling Advantage
Uniform pdf with f(s) = 1/A for −A/2 ≤ s ≤ A/2.
[Figure: two-dimensional codebook produced by the LBG algorithm]
D_U(R) for the uniform distribution is given as
  D_U(R) = (A²/12) · 2^{−2R}  with R = 3.32 bit/scalar
The LBG algorithm converged towards an approximately hexagonal lattice in 2D.

64 Vector Quantization: Space-Filling Advantage: Sphere Packings in Multiple Dimensions
Densest packings and highest kissing numbers known [Sloane, 1998] and approximate gain using VQ [Lookabaugh and Gray, 1989]:

  Dim. | Densest packing | Name
  1    | Z               | Integer lattice
  2    | A_2             | Hexagonal lattice
  3    | D_3             | Cuboidal lattice
  4    | D_4             |
  5    | D_5             |
  6    | E_6             |
  7    | E_7             |
  8    | E_8             | Gosset lattice
  9    | Λ_9             | Laminated lattice
  10   | P_10c           | Non-lattice arrangement
  12   | K_12            | Coxeter-Todd lattice
  16   | BW_16 = Λ_16    | Barnes-Wall lattice
  24   | Λ_24            | Leech lattice

65 Vector Quantization: Space-Filling Advantage: Sphere Packings and Maximum Gain
The gain due to denser sphere packing in higher dimensions is bounded by 1.53 dB.
The codebook C must be stored and can reach very large sizes for N becoming large.
[Figure: scaled density of the densest sphere packings known in dimensions N ≤ 48, from [Sloane, 1998], including Rogers' bound, the laminated lattices Λ_n, K_n, the Leech lattice Λ_24, and P_48q]

66 Vector Quantization: Chou-Lookabaugh-Gray Algorithm: ECVQ
The LBG algorithm has its entropy-constrained extension also for vector quantization [Chou et al., 1989] (given here for MSE):
1 Given is a sufficiently large realization {s_n} (of the source distribution f(s))
2 Choose an initial set of reconstruction vectors {s'_i} and an initial set of codeword lengths l(s'_i)
3 Associate all samples of the training set {s_n} with one of the quantization cells C_i according to α(s) = arg min_{s'_i} d_N(s, s'_i) + λ l(s'_i)
4 Update the reconstruction vectors {s'_i} according to s'_i = arg min_{s' ∈ ℝᴺ} E{ d_N(S, s') | α(S) = i }
5 Update the codeword lengths l(s'_i) according to l(s'_i) = −log₂ p(s'_i)
6 Repeat the previous three steps until convergence

Shape Advantage: Results for 2D Gaussian i.i.d. (K = 16)

(Figure: SNR [dB] over CLG iterations.)

Result of the CLG algorithm for 2D Gaussian i.i.d.:
Gain of ECVQ compared to ECSQ is 0.26 dB
Gain of VQ compared to SQ with fixed-length codes is 0.37 dB

Shape Advantage: Results for 2D Laplace i.i.d. (K = 16)

(Figure: SNR [dB] and rate R [bit/s] over CLG iterations; R = 2.4 bit/s.)

Result of the CLG algorithm for 2D Laplace i.i.d.:
Gain of ECVQ compared to ECSQ is 0.2 dB
Gain of VQ compared to SQ with fixed-length codes is 1.32 dB

Shape Advantage: Results for 2D Gaussian i.i.d. (K = 256)

(Figure: SNR [dB] and entropy H [bit/s] over iterations; ECVQ performance for R = 4.4 bit/s (conjectured) vs. ECSQ performance for R = 4.4 bit/s, with H = 4.4 bit/s.)

Result of the CLG algorithm for 2D Gaussian i.i.d.:
Gain of ECVQ compared to ECSQ is 0.17 dB
Gain of VQ compared to SQ with fixed-length codes is 0.9 dB

Shape Advantage: Results for 2D Laplace i.i.d. (K = 256)

Result of the CLG algorithm for 2D Laplace i.i.d.:
Gain of ECVQ compared to ECSQ is 0.17 dB
Gain of VQ compared to SQ with fixed-length codes is 1.84 dB
Introducing entropy coding of the codewords leaves only the space-filling gain for N = 2: 0.17 dB

Shape Advantage

When comparing ECSQ with ECVQ for i.i.d. sources, the gain due to K > 1 reduces to the space-filling gain. VQ with fixed-length codes can also exploit the gain that ECSQ shows compared to SQ with fixed-length codes.

(Figure: shape advantage of VQ [Lookabaugh and Gray, 1989] — SNR gain over dimension N, approaching 5.63 dB for the Laplacian pdf and 2.81 dB for the Gaussian pdf.)
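For the Gaussian asymptote, the 2.81 dB value can be cross-checked against the Panter-Dite approximation used earlier in the lecture: fixed-length SQ loses 10·log10(√3·π/2) ≈ 4.35 dB relative to the Gaussian D(R), of which 1.53 dB is the space-filling part. The split below is a sketch of that bookkeeping:

```python
import math

# Fixed-rate SQ (Panter-Dite): D = (sqrt(3)*pi/2) * sigma^2 * 2**(-2R),
# while VQ with N -> inf reaches the Gaussian D(R) = sigma^2 * 2**(-2R).
total_gain = 10 * math.log10(math.sqrt(3) * math.pi / 2)  # full VQ gain over fixed-rate SQ
space_gain = 10 * math.log10(2 * math.pi * math.e / 12)   # space-filling part (~1.53 dB)
shape_gain = total_gain - space_gain                      # remainder = shape advantage
print(f"Gaussian shape advantage: {shape_gain:.2f} dB")
```

The result (~2.81 dB) matches the Gaussian asymptote in the figure; the Laplacian value has no comparably simple closed form, since the Laplacian D(R) itself is not available in closed form.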

Memory Advantage: Results for Gauss-Markov Source with ρ = 0.9 (I)

VQ results from the LBG algorithm for a Gauss-Markov source with correlation ρ = 0.9, at R = 1, 2, 3, and 4 bit/scalar.

The LBG algorithm has been extended by re-inserting discarded symbols s′_i using random choices.

Memory Advantage: Results for Gauss-Markov Source with ρ = 0.9 (II)

(Figures: SNR [dB] and entropy H [bit/s] over iterations for R = 1, 2, 3, and 4 bit/scalar, comparing the conjectured VQ performance with the fixed-length SQ performance; measured entropies H = 0.96, 1.93, 2.75, and 3.64 bit/s. The entropy H is shown for information only; fixed-length codes are used.)

The gains from the space-filling, shape, and memory effects are additive.
For R = 3 and R = 4 bit/scalar, the conjectured VQ performance is approached (the estimates in [Lookabaugh and Gray, 1989] are only valid for high rates).

Memory Advantage

The largest gain is to be made if the source contains statistical dependencies. Exploiting the memory advantage is one of the most relevant aspects of source coding research (the shape advantage can be obtained using Huffman and arithmetic coding). The remainder of the source coding lecture will treat this issue.

(Figure: SNR gain over dimension N for Gauss-Markov sources, approaching 10.11 dB for ρ = 0.95, 7.21 dB for ρ = 0.9, and 1.25 dB for ρ = 0.5.)
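Assuming the high-rate distortion-rate function of a Gauss-Markov source, D(R) = (1 − ρ²)·σ²·2^(−2R), the asymptotic memory gain over memoryless coding of the same σ² is −10·log10(1 − ρ²); a quick check of the values quoted in the figure:

```python
import math

def memory_gain_db(rho):
    """Asymptotic memory gain for a Gauss-Markov source at high rates:
    sigma^2 * 2**(-2R)  vs.  (1 - rho**2) * sigma^2 * 2**(-2R)."""
    return -10 * math.log10(1 - rho ** 2)

for rho in (0.5, 0.9, 0.95):
    print(f"rho = {rho:<4}: {memory_gain_db(rho):5.2f} dB")
```

The three results reproduce the 1.25 dB, 7.21 dB, and 10.11 dB asymptotes of the figure.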

Vector Quantizer Advantage for a Gauss-Markov Source with ρ = 0.9

Estimated vector quantizer advantage for a Gauss-Markov source with correlation factor ρ = 0.9 [Lookabaugh and Gray, 1989]. The conjectured numbers are empirically verified for K = 2.

(Figure: SNR over R [bit/scalar], comparing R(D), estimated VQ performance for several dimensions K, fixed-length coded SQ (K = 1, Panter-Dite approximation), ECSQ using the entropy-constrained Lloyd algorithm, and VQ with K = 2 using the LBG algorithm.)

Vector Quantization with Structural Constraints

Vector quantizers can achieve R(D) if N → ∞, but at the cost of complexity (storage and computation) and delay. Imposing structural constraints reduces the complexity:
Tree-Structured VQ
Transform VQ
Multistage VQ
Shape-Gain VQ
Lattice Codebook VQ
Predictive VQ

Predictive VQ can be seen as a generalization of a number of very popular techniques: motion compensation in video coding and various techniques in speech coding.
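As one example of such a constraint, a binary tree-structured VQ replaces the full search over K = 2^depth codevectors by 2 distance computations per tree level. The sketch below grows the tree by recursive two-means splitting, which is one common but by no means the only design choice:

```python
import numpy as np

def two_means(x, iters=10, seed=0):
    """One LBG-style split of a cell into two sub-cells."""
    rng = np.random.default_rng(seed)
    c = x[rng.choice(len(x), 2, replace=False)].astype(float)
    for _ in range(iters):
        idx = ((x[:, None, :] - c[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        for i in (0, 1):
            if np.any(idx == i):
                c[i] = x[idx == i].mean(axis=0)
    return c, idx

def build_tsvq(x, depth):
    """Grow a binary tree of test centroids; leaves are the codevectors."""
    if depth == 0 or len(x) < 2:
        return {"leaf": x.mean(axis=0)}
    c, idx = two_means(x)
    if np.all(idx == idx[0]):            # degenerate split: stop early
        return {"leaf": x.mean(axis=0)}
    return {"c": c,
            "children": [build_tsvq(x[idx == i], depth - 1) for i in (0, 1)]}

def encode(v, node):
    """Descend the tree: 2 distance computations per level instead of a
    full search over all 2**depth codevectors; the bit path is the index."""
    bits = ""
    while "leaf" not in node:
        i = int(((v - node["c"]) ** 2).sum(axis=1).argmin())
        bits += str(i)
        node = node["children"][i]
    return bits, node["leaf"]

rng = np.random.default_rng(3)
train = rng.normal(size=(2000, 2))
tree = build_tsvq(train, depth=4)
bits, rec = encode(train[0], tree)
```

The greedy descent is generally suboptimal compared to a full codebook search, which is exactly the rate-distortion-versus-complexity trade-off the slide refers to.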

Summary on Quantization

Lloyd quantizer: minimum MSE distortion for a given number of representative levels
Variable-length coding: additional gains by entropy-constrained quantization
Minimum mean squared error for a given entropy: uniform quantizer (for fine quantization!)
Vector quantizers can achieve R(D) if N → ∞. Are we done? No! The complexity of vector quantizers is the issue, so low-complexity alternatives are needed.
Design a coding system with optimum rate-distortion performance such that the delay, complexity, and storage requirements are met.
Space-filling gain: can only be exploited using vector quantizers
Shape gain: exploit by entropy coding of the decoding symbols
Memory gain: exploit using predictive, transform, or sub-band coding


More information

Module 4: General Formulation of Electric Circuit Theory

Module 4: General Formulation of Electric Circuit Theory Mdule 4: General Frmulatin f Electric Circuit Thery 4. General Frmulatin f Electric Circuit Thery All electrmagnetic phenmena are described at a fundamental level by Maxwell's equatins and the assciated

More information

Chapter 3 Digital Transmission Fundamentals

Chapter 3 Digital Transmission Fundamentals Chapter 3 Digital Transmissin Fundamentals Errr Detectin and Crrectin Errr Cntrl Digital transmissin systems intrduce errrs, BER ranges frm 10-3 fr wireless t 10-9 fr ptical fiber Applicatins require certain

More information

22.54 Neutron Interactions and Applications (Spring 2004) Chapter 11 (3/11/04) Neutron Diffusion

22.54 Neutron Interactions and Applications (Spring 2004) Chapter 11 (3/11/04) Neutron Diffusion .54 Neutrn Interactins and Applicatins (Spring 004) Chapter (3//04) Neutrn Diffusin References -- J. R. Lamarsh, Intrductin t Nuclear Reactr Thery (Addisn-Wesley, Reading, 966) T study neutrn diffusin

More information

3.4 Shrinkage Methods Prostate Cancer Data Example (Continued) Ridge Regression

3.4 Shrinkage Methods Prostate Cancer Data Example (Continued) Ridge Regression 3.3.4 Prstate Cancer Data Example (Cntinued) 3.4 Shrinkage Methds 61 Table 3.3 shws the cefficients frm a number f different selectin and shrinkage methds. They are best-subset selectin using an all-subsets

More information

Administrativia. Assignment 1 due thursday 9/23/2004 BEFORE midnight. Midterm exam 10/07/2003 in class. CS 460, Sessions 8-9 1

Administrativia. Assignment 1 due thursday 9/23/2004 BEFORE midnight. Midterm exam 10/07/2003 in class. CS 460, Sessions 8-9 1 Administrativia Assignment 1 due thursday 9/23/2004 BEFORE midnight Midterm eam 10/07/2003 in class CS 460, Sessins 8-9 1 Last time: search strategies Uninfrmed: Use nly infrmatin available in the prblem

More information

T Algorithmic methods for data mining. Slide set 6: dimensionality reduction

T Algorithmic methods for data mining. Slide set 6: dimensionality reduction T-61.5060 Algrithmic methds fr data mining Slide set 6: dimensinality reductin reading assignment LRU bk: 11.1 11.3 PCA tutrial in mycurses (ptinal) ptinal: An Elementary Prf f a Therem f Jhnsn and Lindenstrauss,

More information

Floating Point Method for Solving Transportation. Problems with Additional Constraints

Floating Point Method for Solving Transportation. Problems with Additional Constraints Internatinal Mathematical Frum, Vl. 6, 20, n. 40, 983-992 Flating Pint Methd fr Slving Transprtatin Prblems with Additinal Cnstraints P. Pandian and D. Anuradha Department f Mathematics, Schl f Advanced

More information

Some Theory Behind Algorithms for Stochastic Optimization

Some Theory Behind Algorithms for Stochastic Optimization Sme Thery Behind Algrithms fr Stchastic Optimizatin Zelda Zabinsky University f Washingtn Industrial and Systems Engineering May 24, 2010 NSF Wrkshp n Simulatin Optimizatin Overview Prblem frmulatin Theretical

More information

MATCHING TECHNIQUES. Technical Track Session VI. Emanuela Galasso. The World Bank

MATCHING TECHNIQUES. Technical Track Session VI. Emanuela Galasso. The World Bank MATCHING TECHNIQUES Technical Track Sessin VI Emanuela Galass The Wrld Bank These slides were develped by Christel Vermeersch and mdified by Emanuela Galass fr the purpse f this wrkshp When can we use

More information

Section I5: Feedback in Operational Amplifiers

Section I5: Feedback in Operational Amplifiers Sectin I5: eedback in Operatinal mplifiers s discussed earlier, practical p-amps hae a high gain under dc (zer frequency) cnditins and the gain decreases as frequency increases. This frequency dependence

More information

Surface and Contact Stress

Surface and Contact Stress Surface and Cntact Stress The cncept f the frce is fundamental t mechanics and many imprtant prblems can be cast in terms f frces nly, fr example the prblems cnsidered in Chapter. Hwever, mre sphisticated

More information

Math Foundations 20 Work Plan

Math Foundations 20 Work Plan Math Fundatins 20 Wrk Plan Units / Tpics 20.8 Demnstrate understanding f systems f linear inequalities in tw variables. Time Frame December 1-3 weeks 6-10 Majr Learning Indicatrs Identify situatins relevant

More information

Lecture 24: Flory-Huggins Theory

Lecture 24: Flory-Huggins Theory Lecture 24: 12.07.05 Flry-Huggins Thery Tday: LAST TIME...2 Lattice Mdels f Slutins...2 ENTROPY OF MIXING IN THE FLORY-HUGGINS MODEL...3 CONFIGURATIONS OF A SINGLE CHAIN...3 COUNTING CONFIGURATIONS FOR

More information

" 1 = # $H vap. Chapter 3 Problems

 1 = # $H vap. Chapter 3 Problems Chapter 3 rblems rblem At 1 atmsphere pure Ge melts at 1232 K and bils at 298 K. he triple pint ccurs at =8.4x1-8 atm. Estimate the heat f vaprizatin f Ge. he heat f vaprizatin is estimated frm the Clausius

More information

Lecture 17: Free Energy of Multi-phase Solutions at Equilibrium

Lecture 17: Free Energy of Multi-phase Solutions at Equilibrium Lecture 17: 11.07.05 Free Energy f Multi-phase Slutins at Equilibrium Tday: LAST TIME...2 FREE ENERGY DIAGRAMS OF MULTI-PHASE SOLUTIONS 1...3 The cmmn tangent cnstructin and the lever rule...3 Practical

More information

ECE 545 Project Deliverables

ECE 545 Project Deliverables ECE 545 Prject Deliverables Tp-level flder: _ Secnd-level flders: 1_assumptins 2_blck_diagrams 3_interface 4_ASM_charts 5_surce_cde 6_verificatin 7_timing_analysis 8_results

More information

Advanced Heat and Mass Transfer by Amir Faghri, Yuwen Zhang, and John R. Howell

Advanced Heat and Mass Transfer by Amir Faghri, Yuwen Zhang, and John R. Howell 6.5 Natural Cnvectin in Enclsures Enclsures are finite spaces bunded by walls and filled with fluid. Natural cnvectin in enclsures, als knwn as internal cnvectin, takes place in rms and buildings, furnaces,

More information

MODULAR DECOMPOSITION OF THE NOR-TSUM MULTIPLE-VALUED PLA

MODULAR DECOMPOSITION OF THE NOR-TSUM MULTIPLE-VALUED PLA MODUAR DECOMPOSITION OF THE NOR-TSUM MUTIPE-AUED PA T. KAGANOA, N. IPNITSKAYA, G. HOOWINSKI k Belarusian State University f Infrmatics and Radielectrnics, abratry f Image Prcessing and Pattern Recgnitin.

More information

BLAST / HIDDEN MARKOV MODELS

BLAST / HIDDEN MARKOV MODELS CS262 (Winter 2015) Lecture 5 (January 20) Scribe: Kat Gregry BLAST / HIDDEN MARKOV MODELS BLAST CONTINUED HEURISTIC LOCAL ALIGNMENT Use Cmmnly used t search vast bilgical databases (n the rder f terabases/tetrabases)

More information