Predictive Coding
1 Introduction: Predictive Coding

The better the future of a random process is predicted from the past and the more redundancy the signal contains, the less new information is contributed by each successive observation of the process.

Predictive coding idea:
1. Predict the current sample/vector using an estimate which is a function of past samples/vectors of the input signal
2. Quantize the residual between the input signal and its prediction
3. Add the quantized residual and the prediction to obtain the decoded sample

[Block diagram: input $S_n$; the prediction $\hat{S}_n$ is subtracted to form the residual $U_n$; the quantizer $Q$ outputs $U'_n$; adding back $\hat{S}_n$ gives the reconstruction $S'_n$]

How to obtain the predictor $\hat{S}_n$? How to combine predictor and quantizer?

January 14, / 52
2 Outline

- Prediction
- Linear Prediction
- Differential Pulse Code Modulation (DPCM)
- Adaptive Differential Pulse Code Modulation (ADPCM)
  - Forward Adaptive DPCM
  - Backward Adaptive DPCM
  - Gradient Descent and LMS Algorithm
- Transmission Errors in DPCM
3 Prediction

Statistical estimation procedure: the value of a random variable $S_n$ of a random process $\{S_n\}$ is estimated using the values of other random variables of the random process.

Set of observed random variables: $B_n$. Typical example: the $N$ random variables that precede $S_n$,

$B_n = \{S_{n-1}, S_{n-2}, \dots, S_{n-N}\}$  (1)

Predictor for $S_n$: a deterministic function of the observation set $B_n$,

$\hat{S}_n = A_n(B_n)$  (2)

Prediction error:

$U_n = S_n - \hat{S}_n = S_n - A_n(B_n)$  (3)
4 Prediction: Prediction Performance

Defining the MSE distortion using $u_i = s_i - \hat{s}_i$ and $s'_i = u'_i + \hat{s}_i$:

$d_N(\mathbf{s}, \mathbf{s}') = \frac{1}{N}\sum_{i=0}^{N-1}(s_i - s'_i)^2 = \frac{1}{N}\sum_{i=0}^{N-1}(u_i + \hat{s}_i - u'_i - \hat{s}_i)^2 = d_N(\mathbf{u}, \mathbf{u}')$  (4)

The operational distortion-rate function of a predictive coding system is equal to the operational distortion-rate function for scalar quantization of the prediction residuals.

Operational distortion-rate function for scalar quantization: $D(R) = \sigma_U^2\, g(R)$
- $\sigma_U^2$: the variance of the residuals
- $g(R)$: depends only on the type of the distribution of the residuals

Neglect the dependency on the distribution type.

Definition: a predictor $A_n(B_n)$ given an observation set $B_n$ is optimal if it minimizes the variance $\sigma_U^2$.

Assume stationary processes: $A_n(\cdot)$ becomes $A(\cdot)$.
5 Prediction: Optimal Prediction

Optimization criterion used in the literature [Makhoul, 1975; Vaidyanathan, 2008; Gray, 2010]:

$\epsilon_U^2 = E\{U_n^2\} = E\{(S_n - \hat{S}_n)^2\} = E\{(S_n - A_n(B_n))^2\}$  (5)

Minimization of the second moment

$\epsilon_U^2 = E\{(U_n - \mu_U + \mu_U)^2\} = E\{(U_n - \mu_U)^2\} + 2E\{(U_n - \mu_U)\,\mu_U\} + E\{\mu_U^2\} = \sigma_U^2 + \mu_U^2 + 2\mu_U\big(E\{U_n\} - \mu_U\big) = \sigma_U^2 + \mu_U^2$  (6)

implies minimization of both the variance $\sigma_U^2$ and the mean $\mu_U$.

Solution: the conditional mean

$\hat{S}_n^* = A^*(B_n) = E\{S_n \mid B_n\}$  (7)

Proof: see [Wiegand and Schwarz, 2011, p. 150]
6 Prediction: Optimal Prediction for Autoregressive Processes

Autoregressive process of order $m$ (AR($m$) process):

$S_n = Z_n + \mu_S + \sum_{i=1}^{m} a_i (S_{n-i} - \mu_S) = Z_n + \mu_S(1 - \mathbf{a}_m^T \mathbf{e}_m) + \mathbf{a}_m^T \mathbf{S}_{n-1}^{(m)}$  (8)

where
- $\{Z_n\}$ is a zero-mean iid process
- $\mu_S$ is the mean of the AR($m$) process
- $\mathbf{a}_m = (a_1, \dots, a_m)^T$ is a constant parameter vector
- $\mathbf{e}_m = (1, \dots, 1)^T$ is an $m$-dimensional unit vector

Prediction of $S_n$ given the vector $\mathbf{S}_{n-1} = (S_{n-1}, \dots, S_{n-N})^T$ with $N \ge m$:

$E\{S_n \mid \mathbf{S}_{n-1}\} = E\{Z_n + \mu_S(1 - \mathbf{a}_N^T \mathbf{e}_N) + \mathbf{a}_N^T \mathbf{S}_{n-1} \mid \mathbf{S}_{n-1}\} = \mu_S(1 - \mathbf{a}_N^T \mathbf{e}_N) + \mathbf{a}_N^T \mathbf{S}_{n-1}$  (9)

where $\mathbf{a}_N = (a_1, \dots, a_m, 0, \dots, 0)^T$
7 Linear Prediction: Affine Prediction

Affine predictor:

$\hat{S}_n = A(\mathbf{S}_{n-k}) = h_0 + \mathbf{h}_N^T \mathbf{S}_{n-k}$  (10)

where $\mathbf{h}_N = (h_1, \dots, h_N)^T$ is a constant vector and $h_0$ a constant offset.

The variance $\sigma_U^2$ of the prediction residual depends only on $\mathbf{h}_N$:

$\sigma_U^2(h_0, \mathbf{h}_N) = E\{(U_n - E\{U_n\})^2\} = E\{\big(S_n - h_0 - \mathbf{h}_N^T \mathbf{S}_{n-k} - E\{S_n - h_0 - \mathbf{h}_N^T \mathbf{S}_{n-k}\}\big)^2\} = E\{\big(S_n - E\{S_n\} - \mathbf{h}_N^T(\mathbf{S}_{n-k} - E\{\mathbf{S}_{n-k}\})\big)^2\}$  (11)

Mean squared prediction error:

$\epsilon_U^2(h_0, \mathbf{h}_N) = \sigma_U^2(\mathbf{h}_N) + \mu_U^2(h_0, \mathbf{h}_N) = \sigma_U^2(\mathbf{h}_N) + \big(E\{S_n - h_0 - \mathbf{h}_N^T \mathbf{S}_{n-k}\}\big)^2 = \sigma_U^2(\mathbf{h}_N) + \big(\mu_S(1 - \mathbf{h}_N^T \mathbf{e}_N) - h_0\big)^2$  (12)

Minimize the mean squared prediction error by setting

$h_0 = \mu_S(1 - \mathbf{h}_N^T \mathbf{e}_N)$  (13)
8 Linear Prediction: Linear Prediction for Zero-Mean Processes

[Block diagram: transversal filter with delay elements $z^{-1}$ and coefficients $h_1, h_2, \dots, h_N$ producing $\hat{S}_n$ from past samples; the residual is $U_n = S_n - \hat{S}_n$]

The function used for prediction is linear, of the form

$\hat{S}_n = h_1 S_{n-1} + h_2 S_{n-2} + \dots + h_N S_{n-N} = \mathbf{h}_N^T \mathbf{S}_{n-1}$  (14)

Mean squared prediction error:

$\sigma_U^2(\mathbf{h}_N) = E\{(S_n - \hat{S}_n)^2\} = E\{(S_n - \mathbf{h}_N^T \mathbf{S}_{n-1})(S_n - \mathbf{S}_{n-1}^T \mathbf{h}_N)\} = E\{S_n^2\} - 2\,\mathbf{h}_N^T E\{S_n \mathbf{S}_{n-1}\} + \mathbf{h}_N^T E\{\mathbf{S}_{n-1} \mathbf{S}_{n-1}^T\}\,\mathbf{h}_N$  (15)

since $\mathbf{h}_N$ is not a random variable.
9 Linear Prediction: Auto-Covariance Matrix and Auto-Covariance Vector

Variance: $\sigma_S^2 = E\{S_n^2\}$

Auto-covariance vector (for zero mean: auto-correlation vector):

$\mathbf{c}_k = E\{S_n \mathbf{S}_{n-k}\} = \sigma_S^2\,(\rho_k, \dots, \rho_i, \dots, \rho_{N+k-1})^T$ with $\rho_i = E\{S_n S_{n-i}\}/\sigma_S^2$  (16)

Auto-covariance matrix (for zero mean: auto-correlation matrix):

$\mathbf{C}_N = E\{\mathbf{S}_{n-1}\mathbf{S}_{n-1}^T\} = \sigma_S^2 \begin{pmatrix} 1 & \rho_1 & \rho_2 & \cdots & \rho_{N-1} \\ \rho_1 & 1 & \rho_1 & \cdots & \rho_{N-2} \\ \rho_2 & \rho_1 & 1 & \cdots & \rho_{N-3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \rho_{N-1} & \rho_{N-2} & \rho_{N-3} & \cdots & 1 \end{pmatrix}$  (17)
10 Linear Prediction: Optimal Linear Prediction

Prediction error variance:

$\sigma_U^2(\mathbf{h}_N) = \sigma_S^2 - 2\,\mathbf{h}_N^T \mathbf{c}_k + \mathbf{h}_N^T \mathbf{C}_N \mathbf{h}_N$  (18)

Minimization of $\sigma_U^2(\mathbf{h}_N)$ yields a system of linear equations

$\mathbf{C}_N \mathbf{h}_N^* = \mathbf{c}_k$  (19)

When $\mathbf{C}_N$ is non-singular:

$\mathbf{h}_N^* = \mathbf{C}_N^{-1} \mathbf{c}_k$  (20)

The minimum of $\sigma_U^2(\mathbf{h}_N)$ is given as (with $(\mathbf{C}_N^{-1}\mathbf{c}_k)^T = \mathbf{c}_k^T \mathbf{C}_N^{-1}$)

$\sigma_U^2(\mathbf{h}_N^*) = \sigma_S^2 - 2\,(\mathbf{h}_N^*)^T \mathbf{c}_k + (\mathbf{h}_N^*)^T \mathbf{C}_N \mathbf{h}_N^* = \sigma_S^2 - 2\,\mathbf{c}_k^T \mathbf{C}_N^{-1} \mathbf{c}_k + \big(\mathbf{c}_k^T \mathbf{C}_N^{-1}\big) \mathbf{C}_N \big(\mathbf{C}_N^{-1} \mathbf{c}_k\big) = \sigma_S^2 - \mathbf{c}_k^T \mathbf{C}_N^{-1} \mathbf{c}_k$  (21)

In optimal prediction, the signal variance $\sigma_S^2$ is reduced by $\mathbf{c}_k^T \mathbf{C}_N^{-1} \mathbf{c}_k$.
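The normal equations (19) and the minimum residual variance (21) can be checked numerically. The sketch below is a hypothetical example (not from the slides' data): it assumes a first-order Gauss-Markov source with $\rho = 0.9$ and unit variance, solves the $2 \times 2$ system in closed form, and evaluates Eq. (21).

```python
# Sketch: solving C_N h = c_1 (Eq. 19) for an assumed first-order
# Gauss-Markov source (rho_k = rho^k) with N = 2 taps.

rho, var_s = 0.9, 1.0           # assumed source parameters

# Auto-correlation matrix C_2 (Eq. 17) and auto-correlation vector c_1 (Eq. 16)
C = [[var_s, var_s * rho],
     [var_s * rho, var_s]]
c = [var_s * rho, var_s * rho ** 2]

# Closed-form inverse of the 2x2 matrix C
det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
Cinv = [[ C[1][1] / det, -C[0][1] / det],
        [-C[1][0] / det,  C[0][0] / det]]

# Optimal predictor h* = C^-1 c (Eq. 20)
h = [Cinv[0][0] * c[0] + Cinv[0][1] * c[1],
     Cinv[1][0] * c[0] + Cinv[1][1] * c[1]]

# Minimum residual variance sigma_U^2 = sigma_S^2 - c^T C^-1 c (Eq. 21)
var_u = var_s - (c[0] * h[0] + c[1] * h[1])

print(h)       # close to [rho, 0]: only the first tap is non-zero
print(var_u)   # close to var_s * (1 - rho^2)
```

For this source the result reproduces the closed-form solution derived later in the slides: only the first coefficient is non-zero.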
11 Linear Prediction: Verification of Optimality

The optimality of the solution can be verified by inserting $\mathbf{h}_N = \mathbf{h}_N^* + \boldsymbol{\delta}_N$ into

$\sigma_U^2(\mathbf{h}_N) = \sigma_S^2 - 2\,\mathbf{h}_N^T \mathbf{c}_k + \mathbf{h}_N^T \mathbf{C}_N \mathbf{h}_N$  (22)

yielding

$\sigma_U^2(\mathbf{h}_N) = \sigma_S^2 - 2(\mathbf{h}_N^* + \boldsymbol{\delta}_N)^T \mathbf{c}_k + (\mathbf{h}_N^* + \boldsymbol{\delta}_N)^T \mathbf{C}_N (\mathbf{h}_N^* + \boldsymbol{\delta}_N)$
$= \sigma_S^2 - 2\,(\mathbf{h}_N^*)^T \mathbf{c}_k - 2\,\boldsymbol{\delta}_N^T \mathbf{c}_k + (\mathbf{h}_N^*)^T \mathbf{C}_N \mathbf{h}_N^* + (\mathbf{h}_N^*)^T \mathbf{C}_N \boldsymbol{\delta}_N + \boldsymbol{\delta}_N^T \mathbf{C}_N \mathbf{h}_N^* + \boldsymbol{\delta}_N^T \mathbf{C}_N \boldsymbol{\delta}_N$
$= \sigma_U^2(\mathbf{h}_N^*) - 2\,\boldsymbol{\delta}_N^T \mathbf{c}_k + 2\,\boldsymbol{\delta}_N^T \mathbf{C}_N \mathbf{h}_N^* + \boldsymbol{\delta}_N^T \mathbf{C}_N \boldsymbol{\delta}_N = \sigma_U^2(\mathbf{h}_N^*) + \boldsymbol{\delta}_N^T \mathbf{C}_N \boldsymbol{\delta}_N$  (23)

The additional term

$\boldsymbol{\delta}_N^T \mathbf{C}_N \boldsymbol{\delta}_N \ge 0$  (24)

is always non-negative, being equal to 0 only if $\mathbf{h}_N = \mathbf{h}_N^*$.
12 Linear Prediction: The Orthogonality Principle

Important property for optimal affine predictors:

$E\{U_n \mathbf{S}_{n-k}\} = E\{(S_n - h_0 - \mathbf{h}_N^T \mathbf{S}_{n-k})\,\mathbf{S}_{n-k}\} = E\{S_n \mathbf{S}_{n-k}\} - h_0\, E\{\mathbf{S}_{n-k}\} - E\{\mathbf{S}_{n-k} \mathbf{S}_{n-k}^T\}\,\mathbf{h}_N$
$= \mathbf{c}_k + \mu_S^2 \mathbf{e}_N - h_0\,\mu_S \mathbf{e}_N - (\mathbf{C}_N + \mu_S^2 \mathbf{e}_N \mathbf{e}_N^T)\,\mathbf{h}_N = \mathbf{c}_k - \mathbf{C}_N \mathbf{h}_N + \mu_S \mathbf{e}_N \big(\mu_S(1 - \mathbf{h}_N^T \mathbf{e}_N) - h_0\big)$  (25)

Inserting

$\mathbf{h}_N = \mathbf{C}_N^{-1} \mathbf{c}_k$ and $h_0 = \mu_S(1 - \mathbf{h}_N^T \mathbf{e}_N)$  (26)

yields

$E\{U_n \mathbf{S}_{n-k}\} = \mathbf{0}$  (27)
13 Linear Prediction: Orthogonality Principle and Geometric Interpretation

For optimal affine prediction, the prediction residual $U_n$ is uncorrelated with the observation vector $\mathbf{S}_{n-k}$:

$E\{U_n \mathbf{S}_{n-k}\} = \mathbf{0}$  (28)

Therefore, for optimum affine filter design, the prediction error should be orthogonal to the input signal.

[Figure: vector $\mathbf{S}_0$ approximated in the plane spanned by $\mathbf{S}_1$ and $\mathbf{S}_2$; the error vector $\mathbf{U}_0^*$ is orthogonal to the projection $\hat{\mathbf{S}}_0^*$]

- Approximate a vector $\mathbf{S}_0$ by a linear combination of $\mathbf{S}_1$ and $\mathbf{S}_2$
- The best approximation $\hat{\mathbf{S}}_0^*$ is given by the projection of $\mathbf{S}_0$ onto the plane spanned by $\mathbf{S}_1$ and $\mathbf{S}_2$
- The error vector $\mathbf{U}_0^*$ has minimum length and is orthogonal to the projection
14 Linear Prediction: One-Step Prediction I

The random variable $S_n$ is predicted using the $N$ directly preceding random variables $\mathbf{S}_{n-1} = (S_{n-1}, \dots, S_{n-N})^T$.

Using $\phi_k = E\{(S_n - E\{S_n\})(S_{n+k} - E\{S_{n+k}\})\}$, the normal equations are given as

$\begin{pmatrix} \phi_0 & \phi_1 & \cdots & \phi_{N-1} \\ \phi_1 & \phi_0 & \cdots & \phi_{N-2} \\ \vdots & \vdots & \ddots & \vdots \\ \phi_{N-1} & \phi_{N-2} & \cdots & \phi_0 \end{pmatrix} \begin{pmatrix} h_1^N \\ h_2^N \\ \vdots \\ h_N^N \end{pmatrix} = \begin{pmatrix} \phi_1 \\ \phi_2 \\ \vdots \\ \phi_N \end{pmatrix}$  (29)

where the $h_k^N$ represent the elements of $\mathbf{h}_N = (h_1^N, \dots, h_N^N)^T$.

Rearranging the equation to

$\begin{pmatrix} \phi_1 & \phi_0 & \phi_1 & \cdots & \phi_{N-1} \\ \phi_2 & \phi_1 & \phi_0 & \cdots & \phi_{N-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \phi_N & \phi_{N-1} & \phi_{N-2} & \cdots & \phi_0 \end{pmatrix} \begin{pmatrix} 1 \\ -h_1^N \\ \vdots \\ -h_N^N \end{pmatrix} = \mathbf{0}$  (30)
15 Linear Prediction: One-Step Prediction II

Including the prediction error variance for optimal linear prediction using the $N$ preceding samples,

$\sigma_N^2 = \sigma_S^2 - \mathbf{c}_1^T \mathbf{C}_N^{-1} \mathbf{c}_1 = \sigma_S^2 - \mathbf{c}_1^T \mathbf{h}_N = \phi_0 - h_1^N \phi_1 - h_2^N \phi_2 - \dots - h_N^N \phi_N$  (31)

yields an additional row in the matrix:

$\underbrace{\begin{pmatrix} \phi_0 & \phi_1 & \phi_2 & \cdots & \phi_N \\ \phi_1 & \phi_0 & \phi_1 & \cdots & \phi_{N-1} \\ \phi_2 & \phi_1 & \phi_0 & \cdots & \phi_{N-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \phi_N & \phi_{N-1} & \phi_{N-2} & \cdots & \phi_0 \end{pmatrix}}_{\mathbf{C}_{N+1}} \underbrace{\begin{pmatrix} 1 \\ -h_1^N \\ -h_2^N \\ \vdots \\ -h_N^N \end{pmatrix}}_{\mathbf{a}_N} = \begin{pmatrix} \sigma_N^2 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$  (32)

This is the augmented normal equation.
16 Linear Prediction: One-Step Prediction III

Multiplying both sides of the augmented normal equation with $\mathbf{a}_N^T$:

$\sigma_N^2 = \mathbf{a}_N^T \mathbf{C}_{N+1} \mathbf{a}_N$  (33)

Combining the equations for 0 to $N$ preceding samples into one matrix equation yields

$\mathbf{C}_{N+1} \begin{pmatrix} 1 & & & \\ -h_1^N & 1 & & \\ \vdots & \ddots & \ddots & \\ -h_N^N & \cdots & -h_1^1 & 1 \end{pmatrix} = \begin{pmatrix} \sigma_N^2 & X & \cdots & X \\ 0 & \sigma_{N-1}^2 & \cdots & X \\ \vdots & & \ddots & X \\ 0 & \cdots & 0 & \sigma_0^2 \end{pmatrix}$

where $X$ denotes entries whose values are irrelevant here. Taking the determinant of both sides of the equation gives

$|\mathbf{C}_{N+1}| = \sigma_N^2\, \sigma_{N-1}^2 \cdots \sigma_0^2$  (34)

Prediction error variance $\sigma_N^2$ for optimal linear prediction using the $N$ preceding samples:

$\sigma_N^2 = \frac{|\mathbf{C}_{N+1}|}{|\mathbf{C}_N|}$  (35)
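The determinant ratio of Eq. (35) can be verified numerically. The sketch below is a hypothetical check for an assumed first-order Gauss-Markov source ($\rho_k = \rho^k$, unit variance) with $N = 2$: the ratio $|\mathbf{C}_3|/|\mathbf{C}_2|$ should equal the known optimum residual variance $1 - \rho^2$ for such a source.

```python
# Sketch: checking sigma_N^2 = |C_{N+1}| / |C_N| (Eq. 35) for an assumed
# AR(1)-type source with rho_k = rho^k and sigma_S^2 = 1, using N = 2.

rho = 0.9

def det2(m):
    # Determinant of a 2x2 matrix
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    # Determinant of a 3x3 matrix via cofactor expansion
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Normalized auto-correlation matrices
C2 = [[1, rho], [rho, 1]]
C3 = [[1, rho, rho ** 2],
      [rho, 1, rho],
      [rho ** 2, rho, 1]]

sigma2_pred = det3(C3) / det2(C2)    # Eq. (35)
sigma2_direct = 1 - rho ** 2         # known optimum for this source
print(sigma2_pred, sigma2_direct)    # the two values agree
```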
17 Linear Prediction: One-Step Prediction for Autoregressive Processes

Recall: AR($m$) process with mean $\mu_S$ and $\mathbf{a}_m = (a_1, \dots, a_m)^T$:

$S_n = Z_n + \mu_S(1 - \mathbf{a}_m^T \mathbf{e}_m) + \mathbf{a}_m^T \mathbf{S}_{n-1}^{(m)}$  (36)

Prediction using the $N$ preceding samples in $\mathbf{h}_N$ with $N \ge m$: define $\mathbf{a}_N = (a_1, \dots, a_m, 0, \dots, 0)^T$.

Prediction error:

$U_n = S_n - \mathbf{h}_N^T \mathbf{S}_{n-1} = Z_n + \mu_S(1 - \mathbf{a}_N^T \mathbf{e}_N) + (\mathbf{a}_N - \mathbf{h}_N)^T \mathbf{S}_{n-1}$  (37)

Subtracting the mean $E\{U_n\} = \mu_S(1 - \mathbf{a}_N^T \mathbf{e}_N) + (\mathbf{a}_N - \mathbf{h}_N)^T E\{\mathbf{S}_{n-1}\}$:

$U_n - E\{U_n\} = Z_n + (\mathbf{a}_N - \mathbf{h}_N)^T \big(\mathbf{S}_{n-1} - E\{\mathbf{S}_{n-1}\}\big)$  (38)

Optimal prediction: the covariances between $U_n$ and $\mathbf{S}_{n-1}$ must be equal to 0,

$\mathbf{0} = E\{(U_n - E\{U_n\})(\mathbf{S}_{n-1} - E\{\mathbf{S}_{n-1}\})\} = E\{Z_n (\mathbf{S}_{n-1} - E\{\mathbf{S}_{n-1}\})\} + \mathbf{C}_N (\mathbf{a}_N - \mathbf{h}_N)$  (39)

which yields

$\mathbf{h}_N^* = \mathbf{a}_N$  (40)
18 Linear Prediction: One-Step Prediction in Gauss-Markov Processes I

A Gauss-Markov process is a particular AR(1) process

$S_n = Z_n + \mu_S(1 - \rho) + \rho\, S_{n-1}$  (41)

for which the iid process $\{Z_n\}$ has a Gaussian distribution. It is completely characterized by its mean $\mu_S$, its variance $\sigma_S^2$, and the correlation coefficient $\rho$ with $-1 < \rho < 1$.

Auto-covariance matrix and its inverse:

$\mathbf{C}_2 = \sigma_S^2 \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \qquad \mathbf{C}_2^{-1} = \frac{1}{\sigma_S^2 (1 - \rho^2)} \begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix}$  (42)

Auto-covariance vector:

$\mathbf{c}_1 = \sigma_S^2 \begin{pmatrix} \rho \\ \rho^2 \end{pmatrix}$  (43)

Optimum predictor $\mathbf{h}_2^* = \mathbf{C}_2^{-1} \mathbf{c}_1$:

$\mathbf{h}_2^* = \frac{1}{1 - \rho^2} \begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix} \begin{pmatrix} \rho \\ \rho^2 \end{pmatrix} = \frac{1}{1 - \rho^2} \begin{pmatrix} \rho - \rho^3 \\ -\rho^2 + \rho^2 \end{pmatrix} = \begin{pmatrix} \rho \\ 0 \end{pmatrix}$

The first element of $\mathbf{h}_N^*$ is equal to $\rho$; all other elements are equal to 0 ($N \ge 2$).
19 Linear Prediction: One-Step Prediction in Gauss-Markov Processes II

Minimum prediction residual variance:

$\sigma_U^2 = \frac{|\mathbf{C}_2|}{|\mathbf{C}_1|} = \frac{\sigma_S^4 - \sigma_S^4 \rho^2}{\sigma_S^2} = \sigma_S^2 (1 - \rho^2)$  (44)

Prediction residual for a one-tap filter $h_1$: $U_n = S_n - h_1 S_{n-1}$

Average squared error:

$\sigma_U^2(h_1) = E\{U_n^2\} = \sigma_S^2 (1 + h_1^2 - 2 \rho h_1)$

Note: obtaining the minimum MSE by setting

$\frac{\partial \sigma_U^2(h_1)}{\partial h_1} = \sigma_S^2 (2 h_1 - 2 \rho) \stackrel{!}{=} 0$

also yields the result $h_1^* = \rho$.
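The result $h_1^* = \rho$ and the residual variance of Eq. (44) can also be checked by simulation. The sketch below (a hypothetical example with assumed parameters $\rho = 0.9$, $\sigma_S^2 = 1$) generates a Gauss-Markov sequence and measures the residual variance of the one-tap predictor.

```python
# Sketch: simulating an assumed first-order Gauss-Markov source and
# measuring the residual variance for the one-tap predictor h1 = rho.
# It should approach sigma_S^2 * (1 - rho^2), Eq. (44).

import random

random.seed(0)
rho = 0.9
var_z = 1 - rho ** 2          # innovation variance giving sigma_S^2 = 1

# Generate the AR(1) process S_n = Z_n + rho * S_{n-1}
s, prev = [], 0.0
for _ in range(200000):
    prev = random.gauss(0.0, var_z ** 0.5) + rho * prev
    s.append(prev)

# One-tap prediction residual U_n = S_n - rho * S_{n-1}
u = [s[n] - rho * s[n - 1] for n in range(1, len(s))]
var_u = sum(x * x for x in u) / len(u)
print(var_u)   # close to 1 - rho^2 = 0.19
```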
20 Linear Prediction: Prediction Gain

Prediction gain using $\mathbf{\Phi}_N = \mathbf{C}_N / \sigma_S^2$ and $\boldsymbol{\phi}_1 = \mathbf{c}_1 / \sigma_S^2$:

$G_P = \frac{E\{S_n^2\}}{E\{U_n^2\}} = \frac{\sigma_S^2}{\sigma_U^2} = \frac{\sigma_S^2}{\sigma_S^2 - \mathbf{c}_1^T \mathbf{C}_N^{-1} \mathbf{c}_1} = \frac{1}{1 - \boldsymbol{\phi}_1^T \mathbf{\Phi}_N^{-1} \boldsymbol{\phi}_1}$  (45)

Prediction gain for optimal prediction in a first-order Gauss-Markov process:

$G_P(h^*) = \frac{1}{1 - \rho^2}$  (46)

Prediction gain for a one-tap filter $h_1$:

$G_P(h_1) = \frac{\sigma_S^2}{\sigma_S^2 (1 + h_1^2 - 2 \rho h_1)} = \frac{1}{1 + h_1^2 - 2 \rho h_1}$

[Figure: $10 \log_{10} G_P(h^*)$ and $10 \log_{10} G_P(h_1)$ with $h_1 = \rho$, plotted over $\rho$]

At high bit rates, $10 \log_{10} G_P$ is the SNR improvement achieved by predictive coding.
21 Linear Prediction: Optimum Linear Prediction for K = 2

The normalized auto-correlation matrix and its inverse follow as

$\mathbf{\Phi}_2 = \begin{pmatrix} 1 & \rho_1 \\ \rho_1 & 1 \end{pmatrix} \qquad \mathbf{\Phi}_2^{-1} = \frac{1}{1 - \rho_1^2} \begin{pmatrix} 1 & -\rho_1 \\ -\rho_1 & 1 \end{pmatrix}$  (47)

With the normalized correlation vector

$\boldsymbol{\phi}_1 = \begin{pmatrix} \rho_1 \\ \rho_2 \end{pmatrix}$  (48)

we obtain the optimum predictor

$\mathbf{h}_2^* = \mathbf{\Phi}_2^{-1} \boldsymbol{\phi}_1 = \frac{1}{1 - \rho_1^2} \begin{pmatrix} 1 & -\rho_1 \\ -\rho_1 & 1 \end{pmatrix} \begin{pmatrix} \rho_1 \\ \rho_2 \end{pmatrix} = \frac{1}{1 - \rho_1^2} \begin{pmatrix} \rho_1 (1 - \rho_2) \\ \rho_2 - \rho_1^2 \end{pmatrix}$  (49)

The result is identical to $\mathbf{h}^*$ for the first-order Gauss-Markov source when setting $\rho_1 = \rho$ and $\rho_2 = \rho^2$.

For a source with $\rho_2 = \rho_1^2$, the second coefficient doesn't improve the prediction gain; this can be generalized to Nth-order Gauss-Markov sources.
22 Linear Prediction: Prediction for Speech

Example for speech prediction: $\rho_1 = 0.825$, $\rho_2 = $ , $G_P(1) = 5.0$ dB, $G_P(2) = 5.5$ dB

Another speech prediction example:

[Figure: waveform $s[n]$ and prediction residuals $u[n]$ for increasing predictor order, with $G_P(1) = 4.2$ dB, $G_P(3) = 7.7$ dB, $G_P(12) = 11.7$ dB]
23 Linear Prediction: Prediction in Images: Intra-Frame Prediction

Past and present observable random variables are prior scanned samples within that image.

The derivations on linear prediction assume zero-mean random variables (subtract $\mu_S$, or roughly 127 for an 8-bit picture).

Pictures are typically scanned line-by-line from the upper-left corner to the lower-right corner.

- 1-D horizontal prediction: $\hat{S}_0 = h_1 S_1$
- 1-D vertical prediction: $\hat{S}_0 = h_2 S_2$
- 2-D prediction: $\hat{S}_0 = \sum_{i=1}^{3} h_i S_i$

[Figure: current sample $S_0$ with neighbouring samples $S_1$ (weight $h_1$), $S_2$ (weight $h_2$), and $S_3$ (weight $h_3$); residual $U_0 = S_0 - \hat{S}_0$]
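A minimal sketch of the 2-D predictor on a synthetic image may make the idea concrete. It assumes $S_1$, $S_2$, $S_3$ are the left, top, and top-left neighbours and uses illustrative coefficients, not coefficients optimized for this image or taken from the slides.

```python
# Sketch: three-tap 2-D intra prediction S0_hat = h1*S1 + h2*S2 + h3*S3
# on a small synthetic ramp image. Coefficients are assumed examples.

h1, h2, h3 = 0.5, 0.5, -0.25          # illustrative coefficients
W, H = 16, 16
img = [[x + 2 * y for x in range(W)] for y in range(H)]  # smooth ramp

residual = [[0.0] * W for _ in range(H)]
for y in range(1, H):
    for x in range(1, W):
        pred = h1 * img[y][x - 1] + h2 * img[y - 1][x] + h3 * img[y - 1][x - 1]
        residual[y][x] = img[y][x] - pred

# Compare variances (mean removed) over the predicted region
vals = [img[y][x] for y in range(1, H) for x in range(1, W)]
res  = [residual[y][x] for y in range(1, H) for x in range(1, W)]

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

print(var(vals), var(res))   # the residual variance is far smaller
```

For smooth image content the residual variance drops by an order of magnitude even with these rough coefficients, which is the source of the prediction gain.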
24 Linear Prediction: Prediction Example: Test Pattern

Signal variance $\sigma_S^2$ measured on $(s - 127)$

- Vertical predictor: $h_1 = 0$, $h_2 = $ , $h_3 = 0$; $\sigma_U^2(\mathbf{h}) = $ ; $G_P = 8.82$ dB
- Horizontal predictor: $h_1 = $ , $h_2 = 0$, $h_3 = 0$; $\sigma_U^2(\mathbf{h}) = $ ; $G_P = $ dB
- 2-D predictor: $h_1 = $ , $h_2 = $ , $h_3 = $ ; $\sigma_U^2(\mathbf{h}) = $ ; $G_P = $ dB
25 Linear Prediction: Prediction Example: Lena (center-cropped picture)

Signal variance $\sigma_S^2$ measured on $(s - 127)$

- Vertical predictor: $h_1 = 0$, $h_2 = $ , $h_3 = 0$; $\sigma_U^2(\mathbf{h}) = $ ; $G_P = $ dB
- Horizontal predictor: $h_1 = $ , $h_2 = 0$, $h_3 = 0$; $\sigma_U^2(\mathbf{h}) = $ ; $G_P = $ dB
- 2-D predictor: $h_1 = $ , $h_2 = $ , $h_3 = 0.48$; $\sigma_U^2(\mathbf{h}) = $ ; $G_P = $ dB
26 Linear Prediction: Prediction Example: PMFs for Picture Lena

[Figure: probability mass functions $p(s)$ of the original samples and $p(u)$ of the prediction residuals]

The pmfs $p(s)$ and $p(u)$ change significantly due to the prediction operation.

The entropy changes significantly (rounding the prediction signal towards integers; $E\{U_n\} = 80.47$):

$H(S) = 7.44$ bit/sample $\qquad H(U) = 4.97$ bit/sample  (50)

Linear prediction can be used for lossless coding: JPEG-LS.
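The entropy reduction can be reproduced on a toy signal. The sketch below is a hypothetical example (the numbers apply to this synthetic signal, not to the Lena picture): it estimates the sample entropies of an integer-valued correlated signal and of its prediction residual.

```python
# Sketch: estimating H(S) and H(U) for an assumed integer-valued,
# AR(1)-like signal, illustrating why prediction helps lossless coding.

import math
import random
from collections import Counter

random.seed(1)
rho = 0.95
s, prev = [], 0.0
for _ in range(100000):
    prev = rho * prev + random.gauss(0, 10)
    s.append(round(prev))            # integer samples, like pixel values

u = [s[n] - s[n - 1] for n in range(1, len(s))]  # simple h1 = 1 predictor

def entropy(vals):
    # First-order sample entropy in bit/sample
    cnt = Counter(vals)
    n = len(vals)
    return -sum(c / n * math.log2(c / n) for c in cnt.values())

print(entropy(s), entropy(u))   # the residual entropy is noticeably lower
```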
27 Linear Prediction: Asymptotic Prediction Gain

Upper bound for the prediction gain as $N \to \infty$: one-step prediction of a random variable $S_n$ given the countably infinite set of preceding random variables $\{S_{n-1}, S_{n-2}, \dots\}$ and coefficients $\{h_0, h_1, \dots\}$:

$U_n = S_n - h_0 - \sum_{i=1}^{\infty} h_i S_{n-i}$  (51)

Orthogonality criterion: $U_n$ is uncorrelated with all $S_{n-i}$ for $i > 0$.

But $U_{n-k}$ for $k > 0$ is fully determined by a linear combination of past input values $S_{n-k-i}$ for $i \ge 0$.

Hence, $U_n$ is uncorrelated with $U_{n-k}$ for $k > 0$:

$\phi_{UU}(k) = \sigma_{U,\infty}^2\, \delta(k) \qquad \Phi_{UU}(\omega) = \sigma_{U,\infty}^2$  (52)

where $\sigma_{U,\infty}^2$ is the asymptotic one-step prediction error variance for $N \to \infty$.
28 Linear Prediction: Asymptotic One-Step Prediction Error Variance I

For one-step prediction we showed

$|\mathbf{C}_N| = \sigma_{N-1}^2\, \sigma_{N-2}^2\, \sigma_{N-3}^2 \cdots \sigma_0^2$  (53)

which yields

$\frac{1}{N} \ln |\mathbf{C}_N| = \ln |\mathbf{C}_N|^{\frac{1}{N}} = \frac{1}{N} \sum_{i=0}^{N-1} \ln \sigma_i^2$  (54)

If a sequence of numbers $\alpha_0, \alpha_1, \alpha_2, \dots$ approaches a limit $\alpha_\infty$, the average value approaches the same limit:

$\lim_{N \to \infty} \frac{1}{N} \sum_{i=0}^{N-1} \alpha_i = \alpha_\infty$  (55)

Hence, we can write

$\lim_{N \to \infty} \ln |\mathbf{C}_N|^{\frac{1}{N}} = \lim_{N \to \infty} \frac{1}{N} \sum_{i=0}^{N-1} \ln \sigma_i^2 = \ln \sigma_\infty^2$  (56)

yielding

$\sigma_\infty^2 = \exp\Big( \lim_{N \to \infty} \ln |\mathbf{C}_N|^{\frac{1}{N}} \Big) = \lim_{N \to \infty} |\mathbf{C}_N|^{\frac{1}{N}}$  (57)
29 Linear Prediction: Asymptotic One-Step Prediction Error Variance II

Asymptotic one-step prediction error variance:

$\sigma_{U,\infty}^2 = \lim_{N \to \infty} |\mathbf{C}_N|^{\frac{1}{N}}$

The determinant of an $N \times N$ matrix is the product of its eigenvalues $\xi_i^{(N)}$:

$\lim_{N \to \infty} |\mathbf{C}_N|^{\frac{1}{N}} = \lim_{N \to \infty} \Big( \prod_{i=0}^{N-1} \xi_i^{(N)} \Big)^{\frac{1}{N}} = 2^{\displaystyle \lim_{N \to \infty} \frac{1}{N} \sum_{i=0}^{N-1} \log_2 \xi_i^{(N)}}$  (58)

Apply Grenander and Szegö's theorem:

$\lim_{N \to \infty} \frac{1}{N} \sum_{i=0}^{N-1} G\big(\xi_i^{(N)}\big) = \frac{1}{2\pi} \int_{-\pi}^{\pi} G(\Phi(\omega))\, d\omega$  (59)

Expression using the power spectral density:

$\sigma_{U,\infty}^2 = \lim_{N \to \infty} |\mathbf{C}_N|^{\frac{1}{N}} = 2^{\frac{1}{2\pi} \int_{-\pi}^{\pi} \log_2 \Phi_{SS}(\omega)\, d\omega}$  (60)
30 Linear Prediction: Asymptotic Prediction Gain

Prediction gain:

$G_P^\infty = \frac{\sigma_S^2}{\sigma_{U,\infty}^2} = \frac{\frac{1}{2\pi} \int_{-\pi}^{\pi} \Phi(\omega)\, d\omega \;\; \text{(arithmetic mean)}}{2^{\frac{1}{2\pi} \int_{-\pi}^{\pi} \log_2 \Phi(\omega)\, d\omega} \;\; \text{(geometric mean)}}$  (61)

Result for a first-order Gauss-Markov source (can also be computed differently):

[Figure: power spectral density $\Phi(\omega)$ over $\omega/\pi$ for $\rho = 0.5$ and $\rho = 0.9$, and asymptotic gain $10 \log_{10} G_P(\rho)$ in dB over $\rho$]
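Eq. (61) can be evaluated numerically for a concrete source. The sketch below assumes a first-order Gauss-Markov source with $\rho = 0.8$ (an illustrative choice) and checks the ratio of arithmetic to geometric mean of its power spectral density against the closed form $1/(1 - \rho^2)$.

```python
# Sketch: numeric evaluation of the asymptotic prediction gain, Eq. (61),
# for an assumed first-order Gauss-Markov source with rho = 0.8.

import math

rho, var_s = 0.8, 1.0

def psd(w):
    # Power spectral density of an AR(1) process
    return var_s * (1 - rho ** 2) / (1 - 2 * rho * math.cos(w) + rho ** 2)

# Midpoint-rule integration over [-pi, pi]
K = 200000
dw = 2 * math.pi / K
arith = geom_log = 0.0
for k in range(K):
    w = -math.pi + (k + 0.5) * dw
    arith += psd(w) * dw
    geom_log += math.log2(psd(w)) * dw

arith /= 2 * math.pi                      # arithmetic mean of the PSD
geom = 2 ** (geom_log / (2 * math.pi))    # geometric mean of the PSD

print(arith / geom, 1 / (1 - rho ** 2))   # both about 2.777
```

The arithmetic mean of the PSD is the signal variance, the geometric mean is the asymptotic residual variance, and their ratio matches the closed-form gain.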
31 Differential Pulse Code Modulation (DPCM)

Combining prediction with quantization requires
- simultaneous reconstruction of the predictor at coder and decoder
- use of quantized samples for prediction

[Block diagram: input $S_n$, residual $U_n$, quantizer $Q$, quantized residual $U'_n$, prediction $\hat{S}_n$, reconstruction $S'_n$]

Re-drawing yields a block diagram with the typical DPCM structure, in which the predictor P operates on the reconstructed samples $S'_n$.
32 Differential Pulse Code Modulation (DPCM): DPCM Codec

Redrawing with the encoder mapping $\alpha$, the mapping $\gamma$ from index to bit stream, and the decoder mapping $\beta$ yields the DPCM codec:

[Block diagram: DPCM encoder ($S_n \to U_n \to \alpha \to I_n \to \gamma \to B_n$, with internal loop $\beta \to U'_n \to S'_n \to$ predictor P $\to \hat{S}_n$), channel, and DPCM decoder ($B_n \to \gamma^{-1} \to I_n \to \beta \to U'_n \to S'_n$, predictor P)]

The DPCM encoder contains the DPCM decoder except for $\gamma^{-1}$.
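The closed-loop structure above can be sketched in a few lines. This is a minimal hypothetical implementation, assuming a one-tap predictor and a uniform mid-tread quantizer (step size and coefficient are illustrative); the point is that the encoder's internal loop and the decoder run the identical recursion, so they stay in sync.

```python
# Sketch: minimal DPCM encoder/decoder with a uniform quantizer, where the
# prediction is formed from *reconstructed* samples, as in the slide.

import random

random.seed(2)
rho, step, h = 0.9, 0.5, 0.9   # assumed source, quantizer, and predictor

# Toy Gauss-Markov input
s, prev = [], 0.0
for _ in range(10000):
    prev = rho * prev + random.gauss(0, (1 - rho ** 2) ** 0.5)
    s.append(prev)

# Encoder: quantize the residual, predict from reconstructed samples
indices, recon = [], []
s_rec_prev = 0.0
for x in s:
    pred = h * s_rec_prev
    u = x - pred
    i = round(u / step)            # quantizer index I_n (this gets coded)
    s_rec_prev = pred + i * step   # reconstruction S'_n, shared with decoder
    indices.append(i)
    recon.append(s_rec_prev)

# Decoder: the identical prediction loop, driven by the indices alone
dec, d_prev = [], 0.0
for i in indices:
    d_prev = h * d_prev + i * step
    dec.append(d_prev)

print(max(abs(a - b) for a, b in zip(recon, dec)))  # 0.0: the loops match
```

Because the predictor uses reconstructed samples, the overall reconstruction error equals the quantization error of the residual and stays bounded by half the step size; predicting from the original samples instead would let the error accumulate at the decoder.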
33 Differential Pulse Code Modulation (DPCM): DPCM and Quantization

The prediction $\hat{S}_n$ for a sample $S_n$ is generated by linear filtering of the reconstructed samples $S'_n$ from the past:

$\hat{S}_n = \sum_{i=1}^{K} h_i S'_{n-i} = \sum_{i=1}^{K} h_i (S_{n-i} + Q_{n-i}) = \mathbf{h}^T (\mathbf{S}_{n-1} + \mathbf{Q}_{n-1})$  (62)

with $Q_n$ being the quantization error between the reconstructed samples $S'_n$ and the original samples $S_n$.

The prediction error variance (for zero-mean input) is given by

$\sigma_U^2 = E\{U_n^2\} = E\{(S_n - \hat{S}_n)^2\} = E\{(S_n - \mathbf{h}^T \mathbf{S}_{n-1} - \mathbf{h}^T \mathbf{Q}_{n-1})^2\}$
$= E\{S_n^2\} + \mathbf{h}^T E\{\mathbf{S}_{n-1} \mathbf{S}_{n-1}^T\}\,\mathbf{h} + \mathbf{h}^T E\{\mathbf{Q}_{n-1} \mathbf{Q}_{n-1}^T\}\,\mathbf{h} - 2\,\mathbf{h}^T E\{S_n \mathbf{S}_{n-1}\} - 2\,\mathbf{h}^T E\{S_n \mathbf{Q}_{n-1}\} + 2\,\mathbf{h}^T E\{\mathbf{S}_{n-1} \mathbf{Q}_{n-1}^T\}\,\mathbf{h}$  (63)

Defining $\mathbf{\Phi} = E\{\mathbf{S}_{n-1} \mathbf{S}_{n-1}^T\}/\sigma_S^2$ and $\boldsymbol{\phi} = E\{S_n \mathbf{S}_{n-1}\}/\sigma_S^2$ we get

$\sigma_U^2 = \sigma_S^2 \big(1 + \mathbf{h}^T \mathbf{\Phi}\,\mathbf{h} - 2\,\mathbf{h}^T \boldsymbol{\phi}\big) + \mathbf{h}^T E\{\mathbf{Q}_{n-1} \mathbf{Q}_{n-1}^T\}\,\mathbf{h} - 2\,\mathbf{h}^T E\{S_n \mathbf{Q}_{n-1}\} + 2\,\mathbf{h}^T E\{\mathbf{S}_{n-1} \mathbf{Q}_{n-1}^T\}\,\mathbf{h}$  (64)
34 Differential Pulse Code Modulation (DPCM): DPCM for a First-Order Gauss-Markov Source

Calculate R(D) for a zero-mean Gauss-Markov process with $-1 < \rho < 1$ and variance $\sigma_S^2$:

$S_n = Z_n + \rho\, S_{n-1}$  (65)

Consider a one-tap linear prediction filter $\mathbf{h} = (h)$. The normalized auto-correlation matrix is $\mathbf{\Phi} = (1)$ and the cross-correlation is $\boldsymbol{\phi} = (\rho)$.

Prediction error variance:

$\sigma_U^2 = \sigma_S^2 (1 + h^2 - 2 h \rho) + h^2 E\{Q_{n-1}^2\} - 2 h E\{S_n Q_{n-1}\} + 2 h^2 E\{S_{n-1} Q_{n-1}\}$  (66)

Using $S_n = Z_n + \rho S_{n-1}$, the second row in the above equation becomes

$-2 h E\{S_n Q_{n-1}\} + 2 h^2 E\{S_{n-1} Q_{n-1}\} = -2 h E\{Z_n Q_{n-1}\} - 2 h \rho E\{S_{n-1} Q_{n-1}\} + 2 h^2 E\{S_{n-1} Q_{n-1}\} = -2 h E\{Z_n Q_{n-1}\} + 2 h (h - \rho) E\{S_{n-1} Q_{n-1}\}$  (67)

Setting $h = \rho$, we have

$E\{Z_n Q_{n-1}\} = 0 \qquad 2 h (h - \rho)\, E\{S_{n-1} Q_{n-1}\} = 0$  (68)
35 Differential Pulse Code Modulation (DPCM): Combination of DPCM with ECSQ for Gauss-Markov Processes

The expression for the prediction error variance simplifies to

$\sigma_U^2 = \sigma_S^2 (1 - \rho^2) + \rho^2 E\{Q_{n-1}^2\}$  (69)

Model the quantization error $D = E\{Q_{n-1}^2\}$ by an operational distortion-rate function

$D(R) = \sigma_U^2\, g(R)$  (70)

Example: assume ECSQ and with that $g(R)$ as

$g(R) = \frac{\varepsilon^2 \ln 2}{a} \log_2\big(a\, 2^{-2R} + 1\big)$ with $a = $ and $\varepsilon^2 = \pi e / 6$  (71)

The expression for the prediction error variance becomes dependent on the rate:

$\sigma_U^2 = \sigma_S^2\, \frac{1 - \rho^2}{1 - g(R)\, \rho^2}$  (72)
36 Differential Pulse Code Modulation (DPCM): Computation of the Operational Distortion-Rate Function for DPCM

Operational distortion-rate function for DPCM and ECSQ for a first-order Gauss-Markov source:

$D(R) = \sigma_U^2\, g(R) = \sigma_S^2\, \frac{1 - \rho^2}{1 - g(R)\, \rho^2}\, g(R)$  (73)

Algorithm for ECSQ in DPCM coding:
1. Initialize with a small value of $\lambda$; set $s'_n = s_n$ for all $n$ and $h = \rho$
2. Create the signal $u_n$ using $s'_n$ and DPCM
3. Design the ECSQ $(\alpha, \beta, \gamma)$ using the signal $u_n$ and the current value of $\lambda$ by minimizing $D + \lambda R$
4. Conduct DPCM encoding/decoding using the ECSQ $(\alpha, \beta, \gamma)$
5. Measure $\sigma_U^2(R)$ as well as the rate $R$ and distortion $D$
6. Increase $\lambda$ and start again with step 2

The algorithm shows problems at low bit rates: instabilities.
37 Differential Pulse Code Modulation (DPCM): Comparison of Theoretical and Experimental Results I

[Figure: SNR in dB over bit rate in bit/sample, comparing the distortion-rate function $D(R)$, the model $D(R) = \sigma_U^2(R)\, g(R)$, EC-Lloyd combined with DPCM ($G_P = 7.21$ dB), and EC-Lloyd without prediction ($D(R) = \sigma_S^2\, g(R)$); the space-filling gain of 1.53 dB is indicated]

- For high rates and Gauss-Markov sources, the shape and memory gains are achievable
- The space-filling gain can only be achieved using vector quantization
- The theoretical model provides a useful description
38 Differential Pulse Code Modulation (DPCM): Comparison of Theoretical and Experimental Results II

- The prediction error variance $\sigma_U^2$ depends on the bit rate
- The theoretical model provides a useful description

[Figure: normalized $\sigma_U^2(R)$ over the rate $R$ in bit/symbol; the model $\sigma_U^2(R) = \sigma_S^2\, \frac{1 - \rho^2}{1 - g(R)\, \rho^2}$ versus measurement; $\sigma_U^2(\infty) = \sigma_S^2 (1 - \rho^2)$]
39 Adaptive Differential Pulse Code Modulation (ADPCM)

For quasi-stationary sources like speech, a fixed predictor is not well suited.

ADPCM: adapt the predictor based on the recent signal characteristics.

Forward adaptation: send new predictor values, at the cost of additional bit rate.

[Block diagram: DPCM encoder with forward adaptive predictor estimation (buffer / APF) transmitting the predictor values over the channel to the DPCM decoder (APF decoder)]
40 Adaptive Differential Pulse Code Modulation (ADPCM): Forward-Adaptive Prediction: Motion Compensation in Video Coding

Since predictor values are sent, extend the prediction to vectors/blocks and use the statistical dependencies between two pictures.

The prediction signal is obtained by searching a region in a previously decoded picture that best matches the block to be coded.

Let $s[x, y]$ represent the intensity at location $(x, y)$ and $s'[x, y]$ the intensity in a previously decoded picture, also at location $(x, y)$:

$J = \min_{(dx, dy)} \sum_{x,y} \big(s[x, y] - s'[x - dx, y - dy]\big)^2 + \lambda\, R(dx, dy)$  (74)

The predicted signal is specified through the motion vector $(dx, dy)$, and $R(dx, dy)$ is its number of bits.

The prediction error $u[x, y]$ is quantized (often using transform coding). The bit rate in video coding is the sum of the motion vector and prediction residual bit rates.
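The search of Eq. (74) can be sketched with brute-force block matching. The example below is hypothetical: it sets $\lambda = 0$ (pure SSD, no rate term) and uses a synthetic frame pair in which the block has moved by a known offset.

```python
# Sketch: brute-force block matching (Eq. 74 with lambda = 0) on a tiny
# synthetic frame pair where the content is shifted by (dx, dy) = (2, 1).

W, H, B = 32, 32, 8            # frame size and block size

def pix(x, y):
    return x * x + y * y       # arbitrary smooth texture

ref = [[pix(x, y) for x in range(W)] for y in range(H)]
cur = [[pix(x - 2, y - 1) for x in range(W)] for y in range(H)]

bx, by = 12, 12                # top-left corner of the block to predict
best, best_mv = None, None
for dy in range(-4, 5):        # search window of +/- 4 samples
    for dx in range(-4, 5):
        ssd = 0
        for y in range(by, by + B):
            for x in range(bx, bx + B):
                d = cur[y][x] - ref[y - dy][x - dx]
                ssd += d * d
        if best is None or ssd < best:
            best, best_mv = ssd, (dx, dy)

print(best_mv, best)   # (2, 1) 0: the true shift gives a zero residual
```

In a real encoder the $\lambda R(dx, dy)$ term would bias the search towards motion vectors that are cheap to code, trading residual energy against motion vector rate.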
41 Adaptive Differential Pulse Code Modulation (ADPCM): Backward Adaptive DPCM

Backward adaptation: use a predictor computed from the recently decoded signal.
- No additional bit rate
- Error resilience issues
- Accuracy of the predictor

[Block diagram: DPCM encoder and decoder, each with adaptive predictor backward estimation (APB) driven by the reconstructed signal]
42 Adaptive Differential Pulse Code Modulation (ADPCM): Adaptive Linear Prediction

Computational problems arise when inverting $\mathbf{\Phi}$ for computing $\mathbf{h}^* = \mathbf{\Phi}^{-1} \boldsymbol{\phi}$.

Gradient of the objective function:

$\frac{d \sigma_U^2(\mathbf{h})}{d \mathbf{h}} = 2 \sigma_S^2 (\mathbf{\Phi} \mathbf{h} - \boldsymbol{\phi})$

Instead of setting $\frac{d \sigma_U^2(\mathbf{h})}{d \mathbf{h}} \stackrel{!}{=} \mathbf{0}$, which leads to a matrix inversion, approach the minimum by iteratively adapting the prediction filter.

Steepest descent algorithm: update the filter coefficients in the direction of the negative gradient of the objective function:

$\mathbf{h}[n+1] = \mathbf{h}[n] + \Delta \mathbf{h}[n] = \mathbf{h}[n] + \kappa\, (\boldsymbol{\phi} - \mathbf{\Phi} \mathbf{h}[n])$  (75)

[Figure: parabola $\sigma_U^2(h_1)$ over $h_1$ with its minimum $\min_{h_1} \sigma_U^2(h_1)$ at $h_1^*$]
43 Adaptive Differential Pulse Code Modulation (ADPCM): Least Mean Squares (LMS) Algorithm

LMS is a stochastic gradient algorithm [Widrow, Hoff, 1960] approximating steepest descent.

LMS proposes a simple current-value approximation:

$\mathbf{\Phi}\, \sigma_S^2 = E\{\mathbf{S}_{n-1} \mathbf{S}_{n-1}^T\} \approx \mathbf{s}_{n-1} \mathbf{s}_{n-1}^T$  (76)
$\boldsymbol{\phi}\, \sigma_S^2 = E\{\mathbf{S}_{n-1} S_n\} \approx \mathbf{s}_{n-1} s_n$  (77)

The update equation becomes

$\mathbf{h}[n+1] = \mathbf{h}[n] + \kappa\, (\mathbf{s}_{n-1} s_n - \mathbf{s}_{n-1} \mathbf{s}_{n-1}^T \mathbf{h}[n]) = \mathbf{h}[n] + \kappa\, \mathbf{s}_{n-1} (s_n - \mathbf{s}_{n-1}^T \mathbf{h}[n])$  (78)

Realizing that the prediction error is given as $u_n = s_n - \mathbf{s}_{n-1}^T \mathbf{h}[n]$:

$\mathbf{h}[n+1] = \mathbf{h}[n] + \kappa\, \mathbf{s}_{n-1} u_n$  (79)

LMS is one of many adaptive algorithms to determine $\mathbf{h}$, including [Itakura and Saito, 1968; Atal and Hanauer, 1971; Makhoul and Wolf, 1972]:
- Autocorrelation solution
- Covariance solution
- Lattice solution
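The update of Eq. (79) can be sketched directly. The example below is hypothetical: it assumes an AR(2)-like source with coefficients (0.9, -0.2) and a small step size $\kappa$; the adapted two-tap predictor should drift towards the source's model coefficients.

```python
# Sketch: LMS update h[n+1] = h[n] + kappa * s_{n-1} * u_n (Eq. 79) for a
# two-tap predictor on an assumed AR(2) source.

import random

random.seed(3)
a1, a2 = 0.9, -0.2               # assumed AR(2) model coefficients
kappa = 0.002                    # small step size for stable adaptation

# Generate the source S_n = a1 S_{n-1} + a2 S_{n-2} + Z_n
s = [0.0, 0.0]
for _ in range(50000):
    s.append(a1 * s[-1] + a2 * s[-2] + random.gauss(0, 1.0))

h = [0.0, 0.0]
for n in range(2, len(s)):
    u = s[n] - (h[0] * s[n - 1] + h[1] * s[n - 2])  # prediction error
    h[0] += kappa * s[n - 1] * u                    # LMS update
    h[1] += kappa * s[n - 2] * u

print(h)   # near [0.9, -0.2]
```

The step size trades adaptation speed against steady-state coefficient jitter, which is the usual LMS design compromise.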
44 Adaptive Differential Pulse Code Modulation (ADPCM): Linear Predictive Coding of Speech

Speech coding is done using source modeling. An all-pole signal processing model for speech production is assumed: the speech spectrum $S(z)$ is produced by passing an excitation spectrum $V(z)$ through an all-pole transfer function $H(z) = \frac{G}{A(z)}$:

$S(z) = H(z)\, V(z) = \frac{G\, V(z)}{A(z)}$  (80)

where $A(z) = 1 - \sum_{k=1}^{P} a_k z^{-k}$.

Corresponding difference equation:

$s[n] = \sum_{k=1}^{P} a_k\, s[n-k] + G\, v[n]$  (81)

When the input $v[n]$ is a train of impulses, it produces voiced speech; when $v[n]$ is noise-like, it produces unvoiced speech (e.g. sounds like "f", "s", etc.).
45 Adaptive Differential Pulse Code Modulation (ADPCM): Prediction in Speech

Prediction based on LPC is called Short-Term Prediction (STP) as it generally operates on recent speech samples (e.g. around 10 samples):

$\hat{s}[n] = \sum_{i=1}^{N} a_i\, s[n-i]$  (82)

After STP, the resulting prediction error

$u[n] = s[n] - \hat{s}[n]$  (83)

still has distant-sample correlation (known as pitch).

Pitch is predicted by Long-Term Prediction (LTP) by block matching using the cross-correlation

$R(l) = \frac{\sum_{n=0}^{N-1} u[n]\, u[n-l]}{\sum_{n=0}^{N-1} u[n-l]\, u[n-l]}$  (84)

The location $l_{opt}$ that maximizes the cross-correlation is called the lag. The signal block at the lag is subtracted from $u[n]$; the resulting signal is called the excitation sequence.
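The lag search of Eq. (84) can be sketched on a synthetic residual. The example below is hypothetical: it assumes a pitch period of 40 samples and a toy "residual" consisting of a pulse train plus noise, then scans the candidate lags for the maximum of the normalized cross-correlation.

```python
# Sketch: LTP lag search via the normalized cross-correlation of Eq. (84)
# on a synthetic residual with an assumed pitch period of 40 samples.

import random

random.seed(4)
period = 40
# Toy residual: periodic pulse train plus noise
u = [(2.0 if n % period == 0 else 0.0) + random.gauss(0, 0.1)
     for n in range(400)]

N = 160                        # analysis window length (4 pitch periods)
best_R, best_lag = None, None
for lag in range(20, 60):      # candidate lag range
    num = sum(u[n] * u[n - lag] for n in range(lag, lag + N))
    den = sum(u[n - lag] ** 2 for n in range(lag, lag + N))
    R = num / den
    if best_R is None or R > best_R:
        best_R, best_lag = R, lag

print(best_lag)   # 40: the assumed pitch period
```

A real speech coder would additionally allow fractional lags and would scale the matched block by an LTP gain before subtracting it.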
46 Adaptive Differential Pulse Code Modulation (ADPCM): Prediction in Speech: Code-Excited Linear Prediction (CELP)

Instead of quantizing and transmitting the excitation signal, CELP attempts to transmit an index into a codebook that approximates the excitation signal.

One method would be to vector-quantize the excitation signal to best match a codebook entry. But since this signal passes through the LPC synthesis filter, the behavior after filtering might not necessarily be optimal.

Analysis-by-Synthesis (AbS) approach: encoding (analysis) is performed by optimizing the decoded (synthesis) signal in a closed loop. At the encoder, the excitation sequences in the codebook are passed through the synthesis filter, and the index of the best excitation is transmitted.
47 Adaptive Differential Pulse Code Modulation (ADPCM): CELP and AbS

[Figure: CELP analysis-by-synthesis structure]
48 Transmission Errors in DPCM

- For a linear DPCM decoder, the transmission error response is superimposed on the reconstructed signal $s'$
- For a stable DPCM decoder, the transmission error response decays
- Finite word-length effects at the decoder can lead to residual errors that do not decay (e.g. limit cycles)

Below: (a) error sequence (BER of 0.5%), (b) error-free transmission, (c) error propagation
49 Transmission Errors in DPCM for Pictures

Example: Lena, 3 bit/pixel (fixed code word length)
- 1-D horizontal prediction: $a_H = 0.95$
- 1-D vertical prediction: $a_V = 0.95$
- 2-D prediction: $a_H = a_V = 0.5$
50 Transmission Errors in DPCM for Motion Compensation in Video Coding

When a transmission error occurs, motion compensation causes spatio-temporal error propagation.

- Try to conceal image parts that are in error
- Coding lost image parts without referencing concealed image parts (intra blocks) helps but reduces coding efficiency
- Use a clean reference picture for motion compensation

[Figure: intra block, concealed image parts, and motion compensation from a clean reference picture]
51 Summary on Predictive Coding

- Prediction: estimation of a random variable from past or present observable random variables
- Optimal prediction only in special cases
- Optimal linear prediction: simple and efficient
- Wiener-Hopf equation for optimal linear prediction
- A Gauss-Markov process of order N requires a predictor with N coefficients that are equal to the correlation coefficients
- A non-matched predictor can increase the signal variance
- The optimal prediction error is orthogonal to the input signal
- The optimal prediction error filter operates as a whitening filter
52 Summary on Predictive Coding (cont'd)

- Differential pulse code modulation (DPCM) is a structure for the combination of prediction with quantization
- In DPCM, prediction is based on quantized samples
- Simple and efficient: combine DPCM and ECSQ
- Extension of the Entropy-Constrained Lloyd algorithm towards DPCM
- For Gauss-Markov sources, EC-Lloyd for DPCM achieves the shape and memory gains
- Adaptive DPCM: forward and backward adaptation
  - Forward adaptation requires transmission of the predictor values
  - Backward adaptation poses problems of error resilience and accuracy
- Adaptive linear prediction using the steepest descent algorithm: LMS; autocorrelation, covariance, and lattice solutions
- Transmission errors cause error propagation in DPCM
- Error propagation can be mitigated by interrupting the erroneous prediction chain
Resampling Methds Chapter 5 Chapter 5 1 / 52 1 51 Validatin set apprach 2 52 Crss validatin 3 53 Btstrap Chapter 5 2 / 52 Abut Resampling An imprtant statistical tl Pretending the data as ppulatin and
More information, which yields. where z1. and z2
The Gaussian r Nrmal PDF, Page 1 The Gaussian r Nrmal Prbability Density Functin Authr: Jhn M Cimbala, Penn State University Latest revisin: 11 September 13 The Gaussian r Nrmal Prbability Density Functin
More informationSimple Linear Regression (single variable)
Simple Linear Regressin (single variable) Intrductin t Machine Learning Marek Petrik January 31, 2017 Sme f the figures in this presentatin are taken frm An Intrductin t Statistical Learning, with applicatins
More informationSAMPLING DYNAMICAL SYSTEMS
SAMPLING DYNAMICAL SYSTEMS Melvin J. Hinich Applied Research Labratries The University f Texas at Austin Austin, TX 78713-8029, USA (512) 835-3278 (Vice) 835-3259 (Fax) hinich@mail.la.utexas.edu ABSTRACT
More informationModule 4: General Formulation of Electric Circuit Theory
Mdule 4: General Frmulatin f Electric Circuit Thery 4. General Frmulatin f Electric Circuit Thery All electrmagnetic phenmena are described at a fundamental level by Maxwell's equatins and the assciated
More informationSection I5: Feedback in Operational Amplifiers
Sectin I5: eedback in Operatinal mplifiers s discussed earlier, practical p-amps hae a high gain under dc (zer frequency) cnditins and the gain decreases as frequency increases. This frequency dependence
More informationMethods for Determination of Mean Speckle Size in Simulated Speckle Pattern
0.478/msr-04-004 MEASUREMENT SCENCE REVEW, Vlume 4, N. 3, 04 Methds fr Determinatin f Mean Speckle Size in Simulated Speckle Pattern. Hamarvá, P. Šmíd, P. Hrváth, M. Hrabvský nstitute f Physics f the Academy
More informationFall 2013 Physics 172 Recitation 3 Momentum and Springs
Fall 03 Physics 7 Recitatin 3 Mmentum and Springs Purpse: The purpse f this recitatin is t give yu experience wrking with mmentum and the mmentum update frmula. Readings: Chapter.3-.5 Learning Objectives:.3.
More informationPrincipal Components
Principal Cmpnents Suppse we have N measurements n each f p variables X j, j = 1,..., p. There are several equivalent appraches t principal cmpnents: Given X = (X 1,... X p ), prduce a derived (and small)
More informationChapter 9 Vector Differential Calculus, Grad, Div, Curl
Chapter 9 Vectr Differential Calculus, Grad, Div, Curl 9.1 Vectrs in 2-Space and 3-Space 9.2 Inner Prduct (Dt Prduct) 9.3 Vectr Prduct (Crss Prduct, Outer Prduct) 9.4 Vectr and Scalar Functins and Fields
More informationDepartment of Electrical Engineering, University of Waterloo. Introduction
Sectin 4: Sequential Circuits Majr Tpics Types f sequential circuits Flip-flps Analysis f clcked sequential circuits Mre and Mealy machines Design f clcked sequential circuits State transitin design methd
More information1 The limitations of Hartree Fock approximation
Chapter: Pst-Hartree Fck Methds - I The limitatins f Hartree Fck apprximatin The n electrn single determinant Hartree Fck wave functin is the variatinal best amng all pssible n electrn single determinants
More informationPSU GISPOPSCI June 2011 Ordinary Least Squares & Spatial Linear Regression in GeoDa
There are tw parts t this lab. The first is intended t demnstrate hw t request and interpret the spatial diagnstics f a standard OLS regressin mdel using GeDa. The diagnstics prvide infrmatin abut the
More informationSupport-Vector Machines
Supprt-Vectr Machines Intrductin Supprt vectr machine is a linear machine with sme very nice prperties. Haykin chapter 6. See Alpaydin chapter 13 fr similar cntent. Nte: Part f this lecture drew material
More informationThe general linear model and Statistical Parametric Mapping I: Introduction to the GLM
The general linear mdel and Statistical Parametric Mapping I: Intrductin t the GLM Alexa Mrcm and Stefan Kiebel, Rik Hensn, Andrew Hlmes & J-B J Pline Overview Intrductin Essential cncepts Mdelling Design
More informationSmoothing, penalized least squares and splines
Smthing, penalized least squares and splines Duglas Nychka, www.image.ucar.edu/~nychka Lcally weighted averages Penalized least squares smthers Prperties f smthers Splines and Reprducing Kernels The interplatin
More informationBuilding to Transformations on Coordinate Axis Grade 5: Geometry Graph points on the coordinate plane to solve real-world and mathematical problems.
Building t Transfrmatins n Crdinate Axis Grade 5: Gemetry Graph pints n the crdinate plane t slve real-wrld and mathematical prblems. 5.G.1. Use a pair f perpendicular number lines, called axes, t define
More informationInternal vs. external validity. External validity. This section is based on Stock and Watson s Chapter 9.
Sectin 7 Mdel Assessment This sectin is based n Stck and Watsn s Chapter 9. Internal vs. external validity Internal validity refers t whether the analysis is valid fr the ppulatin and sample being studied.
More informationMultiple Source Multiple. using Network Coding
Multiple Surce Multiple Destinatin Tplgy Inference using Netwrk Cding Pegah Sattari EECS, UC Irvine Jint wrk with Athina Markpulu, at UCI, Christina Fraguli, at EPFL, Lausanne Outline Netwrk Tmgraphy Gal,
More informationSPH3U1 Lesson 06 Kinematics
PROJECTILE MOTION LEARNING GOALS Students will: Describe the mtin f an bject thrwn at arbitrary angles thrugh the air. Describe the hrizntal and vertical mtins f a prjectile. Slve prjectile mtin prblems.
More informationFlipping Physics Lecture Notes: Simple Harmonic Motion Introduction via a Horizontal Mass-Spring System
Flipping Physics Lecture Ntes: Simple Harmnic Mtin Intrductin via a Hrizntal Mass-Spring System A Hrizntal Mass-Spring System is where a mass is attached t a spring, riented hrizntally, and then placed
More informationLecture 24: Flory-Huggins Theory
Lecture 24: 12.07.05 Flry-Huggins Thery Tday: LAST TIME...2 Lattice Mdels f Slutins...2 ENTROPY OF MIXING IN THE FLORY-HUGGINS MODEL...3 CONFIGURATIONS OF A SINGLE CHAIN...3 COUNTING CONFIGURATIONS FOR
More informationThe blessing of dimensionality for kernel methods
fr kernel methds Building classifiers in high dimensinal space Pierre Dupnt Pierre.Dupnt@ucluvain.be Classifiers define decisin surfaces in sme feature space where the data is either initially represented
More informationModule 3: Gaussian Process Parameter Estimation, Prediction Uncertainty, and Diagnostics
Mdule 3: Gaussian Prcess Parameter Estimatin, Predictin Uncertainty, and Diagnstics Jerme Sacks and William J Welch Natinal Institute f Statistical Sciences and University f British Clumbia Adapted frm
More informationIN a recent article, Geary [1972] discussed the merit of taking first differences
The Efficiency f Taking First Differences in Regressin Analysis: A Nte J. A. TILLMAN IN a recent article, Geary [1972] discussed the merit f taking first differences t deal with the prblems that trends
More informationSolution to HW14 Fall-2002
Slutin t HW14 Fall-2002 CJ5 10.CQ.003. REASONING AND SOLUTION Figures 10.11 and 10.14 shw the velcity and the acceleratin, respectively, the shadw a ball that underges unirm circular mtin. The shadw underges
More informationFebruary 28, 2013 COMMENTS ON DIFFUSION, DIFFUSIVITY AND DERIVATION OF HYPERBOLIC EQUATIONS DESCRIBING THE DIFFUSION PHENOMENA
February 28, 2013 COMMENTS ON DIFFUSION, DIFFUSIVITY AND DERIVATION OF HYPERBOLIC EQUATIONS DESCRIBING THE DIFFUSION PHENOMENA Mental Experiment regarding 1D randm walk Cnsider a cntainer f gas in thermal
More informationECEN620: Network Theory Broadband Circuit Design Fall 2012
ECEN60: Netwrk Thery Bradband Circuit Design Fall 01 Lecture 16: VCO Phase Nise Sam Palerm Analg & Mixed-Signal Center Texas A&M University Agenda Phase Nise Definitin and Impact Ideal Oscillatr Phase
More informationThermodynamics and Equilibrium
Thermdynamics and Equilibrium Thermdynamics Thermdynamics is the study f the relatinship between heat and ther frms f energy in a chemical r physical prcess. We intrduced the thermdynamic prperty f enthalpy,
More informationENGI 4430 Parametric Vector Functions Page 2-01
ENGI 4430 Parametric Vectr Functins Page -01. Parametric Vectr Functins (cntinued) Any nn-zer vectr r can be decmpsed int its magnitude r and its directin: r rrˆ, where r r 0 Tangent Vectr: dx dy dz dr
More informationSections 15.1 to 15.12, 16.1 and 16.2 of the textbook (Robbins-Miller) cover the materials required for this topic.
Tpic : AC Fundamentals, Sinusidal Wavefrm, and Phasrs Sectins 5. t 5., 6. and 6. f the textbk (Rbbins-Miller) cver the materials required fr this tpic.. Wavefrms in electrical systems are current r vltage
More information4th Indian Institute of Astrophysics - PennState Astrostatistics School July, 2013 Vainu Bappu Observatory, Kavalur. Correlation and Regression
4th Indian Institute f Astrphysics - PennState Astrstatistics Schl July, 2013 Vainu Bappu Observatry, Kavalur Crrelatin and Regressin Rahul Ry Indian Statistical Institute, Delhi. Crrelatin Cnsider a tw
More informationChapter 3: Cluster Analysis
Chapter 3: Cluster Analysis } 3.1 Basic Cncepts f Clustering 3.1.1 Cluster Analysis 3.1. Clustering Categries } 3. Partitining Methds 3..1 The principle 3.. K-Means Methd 3..3 K-Medids Methd 3..4 CLARA
More informationECEN620: Network Theory Broadband Circuit Design Fall 2014
ECEN60: Netwrk Thery Bradband Circuit Design Fall 014 Lecture 11: VCO Phase Nise Sam Palerm Analg & Mixed-Signal Center Texas A&M University Annuncements & Agenda HW3 is due tday at 5PM Phase Nise Definitin
More informationChapter 3 Kinematics in Two Dimensions; Vectors
Chapter 3 Kinematics in Tw Dimensins; Vectrs Vectrs and Scalars Additin f Vectrs Graphical Methds (One and Tw- Dimensin) Multiplicatin f a Vectr b a Scalar Subtractin f Vectrs Graphical Methds Adding Vectrs
More informationMath 302 Learning Objectives
Multivariable Calculus (Part I) 13.1 Vectrs in Three-Dimensinal Space Math 302 Learning Objectives Plt pints in three-dimensinal space. Find the distance between tw pints in three-dimensinal space. Write
More information2.161 Signal Processing: Continuous and Discrete Fall 2008
MIT OpenCurseWare http://cw.mit.edu 2.161 Signal Prcessing: Cntinuus and Discrete Fall 2008 Fr infrmatin abut citing these materials r ur Terms f Use, visit: http://cw.mit.edu/terms. Massachusetts Institute
More informationIntroduction to Smith Charts
Intrductin t Smith Charts Dr. Russell P. Jedlicka Klipsch Schl f Electrical and Cmputer Engineering New Mexic State University as Cruces, NM 88003 September 2002 EE521 ecture 3 08/22/02 Smith Chart Summary
More informationSynchronous Motor V-Curves
Synchrnus Mtr V-Curves 1 Synchrnus Mtr V-Curves Intrductin Synchrnus mtrs are used in applicatins such as textile mills where cnstant speed peratin is critical. Mst small synchrnus mtrs cntain squirrel
More informationModelling of Clock Behaviour. Don Percival. Applied Physics Laboratory University of Washington Seattle, Washington, USA
Mdelling f Clck Behaviur Dn Percival Applied Physics Labratry University f Washingtn Seattle, Washingtn, USA verheads and paper fr talk available at http://faculty.washingtn.edu/dbp/talks.html 1 Overview
More informationNTP Clock Discipline Principles
NTP Clck Discipline Principles David L. Mills University f Delaware http://www.eecis.udel.edu/~mills mailt:mills@udel.edu Sir Jhn Tenniel; Alice s Adventures in Wnderland,Lewis Carrll 24-Aug-04 1 Traditinal
More informationLecture 2: Supervised vs. unsupervised learning, bias-variance tradeoff
Lecture 2: Supervised vs. unsupervised learning, bias-variance tradeff Reading: Chapter 2 STATS 202: Data mining and analysis September 27, 2017 1 / 20 Supervised vs. unsupervised learning In unsupervised
More informationChapter 4. Unsteady State Conduction
Chapter 4 Unsteady State Cnductin Chapter 5 Steady State Cnductin Chee 318 1 4-1 Intrductin ransient Cnductin Many heat transfer prblems are time dependent Changes in perating cnditins in a system cause
More informationPerformance Bounds for Detect and Avoid Signal Sensing
Perfrmance unds fr Detect and Avid Signal Sensing Sam Reisenfeld Real-ime Infrmatin etwrks, University f echnlgy, Sydney, radway, SW 007, Australia samr@uts.edu.au Abstract Detect and Avid (DAA) is a Cgnitive
More informationFunction notation & composite functions Factoring Dividing polynomials Remainder theorem & factor property
Functin ntatin & cmpsite functins Factring Dividing plynmials Remainder therem & factr prperty Can d s by gruping r by: Always lk fr a cmmn factr first 2 numbers that ADD t give yu middle term and MULTIPLY
More informationCOMP 551 Applied Machine Learning Lecture 5: Generative models for linear classification
COMP 551 Applied Machine Learning Lecture 5: Generative mdels fr linear classificatin Instructr: Herke van Hf (herke.vanhf@mail.mcgill.ca) Slides mstly by: Jelle Pineau Class web page: www.cs.mcgill.ca/~hvanh2/cmp551
More informationChapter 3 Digital Transmission Fundamentals
Chapter 3 Digital Transmissin Fundamentals Errr Detectin and Crrectin Errr Cntrl Digital transmissin systems intrduce errrs, BER ranges frm 10-3 fr wireless t 10-9 fr ptical fiber Applicatins require certain
More informationCOMP 551 Applied Machine Learning Lecture 9: Support Vector Machines (cont d)
COMP 551 Applied Machine Learning Lecture 9: Supprt Vectr Machines (cnt d) Instructr: Herke van Hf (herke.vanhf@mail.mcgill.ca) Slides mstly by: Class web page: www.cs.mcgill.ca/~hvanh2/cmp551 Unless therwise
More informationPhysics 2010 Motion with Constant Acceleration Experiment 1
. Physics 00 Mtin with Cnstant Acceleratin Experiment In this lab, we will study the mtin f a glider as it accelerates dwnhill n a tilted air track. The glider is supprted ver the air track by a cushin
More informationPreparation work for A2 Mathematics [2017]
Preparatin wrk fr A2 Mathematics [2017] The wrk studied in Y12 after the return frm study leave is frm the Cre 3 mdule f the A2 Mathematics curse. This wrk will nly be reviewed during Year 13, it will
More informationCOMP 551 Applied Machine Learning Lecture 4: Linear classification
COMP 551 Applied Machine Learning Lecture 4: Linear classificatin Instructr: Jelle Pineau (jpineau@cs.mcgill.ca) Class web page: www.cs.mcgill.ca/~jpineau/cmp551 Unless therwise nted, all material psted
More informationAdmissibility Conditions and Asymptotic Behavior of Strongly Regular Graphs
Admissibility Cnditins and Asympttic Behavir f Strngly Regular Graphs VASCO MOÇO MANO Department f Mathematics University f Prt Oprt PORTUGAL vascmcman@gmailcm LUÍS ANTÓNIO DE ALMEIDA VIEIRA Department
More informationROUNDING ERRORS IN BEAM-TRACKING CALCULATIONS
Particle Acceleratrs, 1986, Vl. 19, pp. 99-105 0031-2460/86/1904-0099/$15.00/0 1986 Grdn and Breach, Science Publishers, S.A. Printed in the United States f America ROUNDING ERRORS IN BEAM-TRACKING CALCULATIONS
More informationFloating Point Method for Solving Transportation. Problems with Additional Constraints
Internatinal Mathematical Frum, Vl. 6, 20, n. 40, 983-992 Flating Pint Methd fr Slving Transprtatin Prblems with Additinal Cnstraints P. Pandian and D. Anuradha Department f Mathematics, Schl f Advanced
More informationNUMBERS, MATHEMATICS AND EQUATIONS
AUSTRALIAN CURRICULUM PHYSICS GETTING STARTED WITH PHYSICS NUMBERS, MATHEMATICS AND EQUATIONS An integral part t the understanding f ur physical wrld is the use f mathematical mdels which can be used t
More informationLecture 2: Supervised vs. unsupervised learning, bias-variance tradeoff
Lecture 2: Supervised vs. unsupervised learning, bias-variance tradeff Reading: Chapter 2 STATS 202: Data mining and analysis September 27, 2017 1 / 20 Supervised vs. unsupervised learning In unsupervised
More informationFlipping Physics Lecture Notes: Simple Harmonic Motion Introduction via a Horizontal Mass-Spring System
Flipping Physics Lecture Ntes: Simple Harmnic Mtin Intrductin via a Hrizntal Mass-Spring System A Hrizntal Mass-Spring System is where a mass is attached t a spring, riented hrizntally, and then placed
More informationk-nearest Neighbor How to choose k Average of k points more reliable when: Large k: noise in attributes +o o noise in class labels
Mtivating Example Memry-Based Learning Instance-Based Learning K-earest eighbr Inductive Assumptin Similar inputs map t similar utputs If nt true => learning is impssible If true => learning reduces t
Source: Source Coding and Compression. Dr.-Ing. Heiko Schwarz (heiko.schwarz@hhi.fraunhofer.de).