both covariance and autocorrelation methods use two step solutions 1. computation of a matrix of correlation values

Size: px
Start display at page:

Download "both covariance and autocorrelation methods use two step solutions 1. computation of a matrix of correlation values"

Transcription

1 Dgtal Speech Processng Lecture 4 Lnear Predctve Codng (LPC)-Lattce Lattce Methods, Applcatons

2 Lattce Formulatons of LP both covarance and autocorrelaton methods use two step solutons. computaton of a matrx of correlaton values 2. effcent soluton of a set of lnear equatons another class of LP methods, called lattce methods, has evolved n whch the two steps are combned nto a recursve algorthm for determnng LP parameters begn wth Durbn algorthm--at at the stage the set of () coeffcents { α j, j = 2,,..., } are coeffcents of the order optmum LP th th 2

3 Lattce Formulatons of LP defne the system functon of the order nverse flter (predcton error flter) as () ( ) α () = k A z z nˆ k = k f the nput to ths flter s the nput segment ( ) ( ˆ () () s m = s n+ m) w( m), wth output e ( m) = e ( nˆ + m) () () α k k = e ( m) = sm ( ) sm ( k) where we have dropped subscrpt nˆ - the absolute locaton of the sgnal the z-transform gves () () () E ( z) = A ( z) S( z) = k αk z S ( z ) k = th nˆ 3

4 Lattce Formulatons of LP usng the steps of the Durbn recurson () (-) (-) () j j k - j ( α = α α, and α = k ) we can obtan a recurrence formula for A ( z ) of the form () ( ) ( ) ( ) = ( ) ) ( z ) A z A z k z A gvng for the error transform the expresson () () ( ) ( ) = ( ) ( ) ( ) ( ) = ( ) ( ) E ( z) A ( zsz ) ( ) kz A ( z ) Sz ( ) e m e m k b m 4

5 Lattce Formulatons of LP where we can nterpret the frst term as the z-transform of fthe forward predcton error for an ( ) order predctor, and the second term can be smlarly nterpreted based on defnng a backward predcton error () () ( ) ( ) = = ( ) ( ) = z B ( z) ke ( z) B ( z) z A ( z ) S( z) z A ( z ) S( z) k A ( z) S( z) wth nverse transform () () αk ( ) k = b ( m) = s( m ) s( m+ k ) = b ( m ) ke ( m) () st ( ) wth the nterpretaton that we are attemptng to predct sm ( ) from the samples of the nput that follow s( m ) => we are dong a backward predcton and b ( m) s called the backward predcton error sequence 5

6 Lattce Formulatons of LP same set of samples s used to forward predct s(m) and backward predct s(m-) 6

7 Lattce Formulatons of LP () () the predcton error transform and sequence E ( z), e ( m) can now be expressed n terms of forward and backward errors, namely () ( ) ( ) E ( z) = E ( z) k z B ( z) () ( ) ( ( ) = ( ) e m e m k b ) ( m ) smlarly we can derve an expresson for the backward error transform and sequence at sample m of the form () ( ) ( ) = B ( z) z B ( z) k E ( z) () ( ) ( ) b ( m ) = b ( m ) k e ( m ) 2 these two equatons defne the forward and backward predcton error for th an order predctor n terms of the correspondng predcton errors of an th ( ) order predctor, wth the remnder that a zeroth order predctor does no predcton, so ( 0) ( 0) e ( m ) = b ( m ) = s ( m ) 0 m L 3 ( p) Ez ( ) = E ( z) = Az ( )/ Sz ( ) 7

8 Lattce Formulatons of LP Assume we know k (from external computaton): () (). we compute e [ m] and b [ m] from sm [ ], 0 m Lusng Eqs. and 2 (2) (2) 2. we next compute e [ m] and b [ m] for 0 m L+ usng Eqs. and 2 3. extend soluton (lattce) to p sectons gvng ( p) ( p) e [ m] and b [ m] for 0 m L+ p ( p) 4. soluton s en [ ] = e [ n] at the output of the th p lattce secton 8

9 Lattce Formulatons of LP lattce network wth p sectons the output of whch s the forward predcton error dgtal network mplementaton of the predcton error flter wth transfer functon A(z) no drect correlatons no alphas k s computed from forward and backward error sgnals 9

10 Lattce Flter for A(z) ( 0 ) ( 0 ) e [ n] b [ n] s[ n] n L = = 0 e [ n] = e [ n] k b [ n ] = 2,,..., p, 0 n L + ( ) ( ) ( ) b [ n] = ke [ n] + b [ n ] = 2,,..., p, 0 n L + ( ) ( ) ( ) ( p) en [ ] = e [ n ], 0 n L + p Ez ( ) = AzSz ( ) ( ) 0

11 ( p e ) [ n] All-Pole Lattce Flter for H(z) ( p e ) [ n] e () [ n ] k k p p k e (0) [ n ] s [ n] k k p p k ( p b ) [ n] ( p) e n = e n z z ( p b ) [ n] b () [ n ] b (0) [ n] ( ) [ ] [ ] ( ) ( ) ( ) e [ n] = e [ n] + k b [ n ] = p, p,...,, 0 n L + ( ) ( ) ( ) [ ] b n = ke [ n] + b [ n ] = p, p,...,, 0 n L + ( 0) ( 0) s[ n] = e [ n] = b [ n], 0 n L Sz Az ( ) ( ) = Ez ( )

12 All-Pole Lattce Flter for H(z) () ( ). snce b [ ] = 0,, we can frst solve for e [0] for = p p e = e ( ) ( ),,...,, usng the relatonshp: [0] [0] (0) (0) ( ) 2. snce b [0] = e [0] we can then solve for b [0] for = 2,2,..., p usng the equ aton: b [0] = k e [0] () ( ) ( ) 3. we can now begn to solve for as: e [] = e [] + k b [0], = p, p,..., ( ) ( ) ( ) (0) (0) ( ) 4. we set b [] = e [] and we can then solve for b [] for =,2,..., p usng the equaton: b [] = k e [] + b [0], =, 2,..., p () ( ) ( ) 5. we terate for n= 2,3,..., N and end up wth s n e n b n (0) (0) [ ] = [ ] = b [ ] e [] 2

13 Lattce Formulatons of LP the lattce structure comes drectly out of the Durbn algorthm the k parameters are obtaned from the Durbn equatons the predctor coeffcents, α, do not appear explctly n the lattce structure L + k can relate the k parameters to the forward and backward errors va m= 0 ( ) ( ) e ( m) b ( m ) k = 4 2 / L + L + ( ) 2 ( ) 2 [ e ( m)] [ b ( m )] m = 0 m = 0 where k s a normalzed cross correlaton between the forward and backward predcton error, and s therefore called a partal correlaton or PARCOR coeffcent can compute predctor coeffcents re cursvely from the PARCOR coeffcents 3

14 Drect Computaton of k Parameters assume sn [ ] non-zero for 0 n L assume k chosen to mnmze total energy of the forward k (or backward) predcton errors we can then mnmze forward predcton error as : L + () () 2 L + forward = ( m= 0 m= 0 ( ) ( ) E e m) = e ( m) k ( ) b m E ( ) ( ) ( ) k () L + forward ( ) ( ) ( ) = 0 = 2 e m k b m b m k m= 0 L + ( ) ( ) e m b m forward m= 0 = L + 2 m= 0 ( ) ( ) ( b ) ( m ) 2 4

15 Drect Computaton of k Parameters we can also choose to mnmze the backward predcton error L + 2 L + () () ( ) ( ) backward m= 0 m= 0 E = b ( m) = ke ( m) + b ( m ) E () L + backward ( ) ( ) = 0= 2 ke m + b m k m=0 L + ( ) ( ) e ( m) b ( m ) backward m = 0 k = L + 2 ( e ) ( m) m= 0 ( ( ) ( ) e ) ( m) 2 5

16 Drect Computaton of k Parameters f we wndow and sum over all tme, then L + 2 L + 2 ( ) ( ) e ( m) = b ( m ) m= 0 m= 0 therefore k = k k = k = k PARCOR forward backward forward backward = L + m= 0 ( ) ( ) e m b m ( ) ( ) L + 2 L + ( ) ( ) 2 e ( m) b ( m ) m= 0 m= 0 2 / 6

17 Drect Computaton of k Parameters mnmze sum of forward and backward predcton errors over fxed nterval (covarance method) { 2 2 ( ) ( ) } () = L Burg () + () m= 0 E e m b m E k Burg 2 ( ) ( ) ( ) ( L ) e ( m) k ( ) b m ke ( m) b m= 0 m= = + + () L Burg ( ) ( ) ( ) k m= 0 = = 0= 2 e ( m ) k b ( m ) b ( m ) L ( ) ( ) ( ) 2 ke ( ) + ( ) m b m e ( m) m= 0 ( ) ( ) 2 L e ( m) b ( m ) m= 0 L 2 L 2 ( ) ( ) m= 0 Burg k e ( m ) + b ( m ) always m= 0 ( m ) 2 7

18 Comparson of Autocorrelaton and Burg Spectra sgnfcantly less smearng of formant peaks usng Burg method 8

19 Summary of Lattce Procedure steps nvolved n determnng the predctor coeffcents and the k parameters for the lattce method are as follows ( 0) ( 0). ntal condton, e ( m) = s( m) = b ( m) from Eq. *3 () 2. compute k = α from Eq. *4 () () 3. determne forward and backward predcton errors e ( m ), b ( m ) from Eqs. * and *2 4. set = 2 () = α () j = () () 5. determne k from Eq. *4 6. determne α j for j = 2,,..., from Durbn teraton 7. determne e ( m) and b ( m) from Eqs. * and *2 8. set = + 9. f s less than or equal to p, go to step 5 0. procedure s termnated predctor coeffcents obtaned drectly from speech samples => wthout calculaton of autocorrelaton functon method s guaranteed to yeld stable flters ( k <) wthout usng wndow 9

20 Summary of Lattce Procedure 20

21 Summary of Lattce Procedure (0) [n] () e [ n ] [ n] s ( (2) e ] e [ n ] ( ) e p [ n] e[n] [ ] (0) z k 2 k2 k k z () z k k b [n] n b [nn ] b [nn ] [ n ] ( 0) ( 0) (2) e ( m) = b ( m) = s( m) 3 ( ) b p p p k L + + m= 0 ( ) ( ) e ( m) b ( m ) = 4 2 / L + L + ( ) 2 ( ) 2 [ e ( m )] [ b ( m )] m= 0 m= 0 ( ) ( ) ( ) e ( m) = e ( m) k b ( m ) () ( ) ( ) b ( m) = b ( m ) k e ( m) 2 2

22 Predcton Error Sgnal. Speech Producton Model p k = sn ( ) = asn ( k) + Gun ( ) Sz ( ) Hz ( ) = = Uz ( ) 2. LPC Model: k p G k = az k k en ( ) = sn ( ) sn ( ) = sn ( ) α sn ( k) Ez ( ) Az ( ) = = Sz ( ) 3. LPC Error Model: p k = α z k k Sz ( ) = = Az ( ) Ez ( ) α z p k = p k = k k p k = sn ( ) = en ( ) + α sn ( k) k k Gu(n) s(n) e(n) H(z) A(z) /A(z) s(n) e(n) s(n) Perfect reconstructon even f a k not equal to α k 22

23 LPC Comparsons fle:ah, ss:3000, L:300, Lp:2 23

24 LPC Comparsons fle:test_6k, ss:3000, L:300, p:2 24

25 LPC Comparsons fle:test_6k, ss:000, L:400, p:2 25

26 Comparsons Between LP Methods the varous LP soluton technques can be compared n a number of ways, ncludng the followng: computatonal ssues numercal ssues stablty of soluton number of poles (order of predctor) wndow/secton length for analyss 26

27 LP Soluton Computatons assume L L 2 >>p; choose values of L =300, L 2 =300, L 3 =300, p=0 computaton for covarance method L p+p *,+ autocorrelaton t method L 2 p+p *,+ lattce method 5L 3 p 5000 *,+ 27

28 LP Soluton Comparsons stablty guaranteed for autocorrelaton method cannot be guaranteed for covarance method; as wndow sze gets larger, ths almost always makes the system stable guaranteed for lattce method choce of LP analyss parameters need 2 poles for each vocal tract resonance below F s /2 need 3-4 poles to represent source shape and radaton load use values of p

29 The Predcton Error Sgnal 29

30 LP Speech Analyss fle:s5, ss:000, frame sze (L):320, lpc order (p):4, cov method Top panel: speech sgnal Second panel: error sgnal Thrd panel: log magntude spectra of sgnal and LP model Fourth panel: log magntude spectrum of error sgnal 30

31 LP Speech Analyss fle:s5, ss:000, frame sze (L):320, lpc order (p):4, ac method Top panel: speech sgnal Second panel: error sgnal Thrd panel: log magntude spectra of sgnal and LP model Fourth panel: log magntude spectrum of error sgnal 3

32 LP Speech Analyss fle:s3, ss:4000, frame sze (L):60, lpc order (p):6, cov method Top panel: speech sgnal Second panel: error sgnal Thrd panel: log magntude spectra of sgnal and LP model Fourth panel: log magntude spectrum of error sgnal 32

33 LP Speech Analyss fle:s3, ss:4000, frame sze (L):60, lpc order (p):6, ac method Top panel: speech sgnal Second panel: error sgnal Thrd panel: log magntude spectra of sgnal and LP model Fourth panel: log magntude spectrum of error sgnal 33

34 Varaton of Predcton Error wth LP Order predcton error flattens at around p=3-4 4 normalzed predcton error for unvoced speech larger than for voced speech => model sn t qute as accurate 34

35 LP Soluton Comparsons choce of LP analyss parameters use small values of L for reduced computaton use l order of several ptch perods for relable predcton-especally when usng tapered wndow use l from samples at 0 khz for autocorrelaton for lattce and covarance methods, L as small as 2 p has been used (wthn ptch perods); however er f ptch pulse occurs wthn wndow, bad predcton results => use much larger values of L 35

36 Predcton Error Sgnal Behavor - the predcton error sgnal s computed as p k k = en ( ) = sn ( ) α sn ( k) = Gun ( ) - en ( ) should be large at the begnnng of each ptch perod (voced speech) => good sgnal for ptch detecton - can perform autocorrelaton on en ( ) and detect largest peak - error spectrum s approxmately flat-so effects of formants on ptch detecton are mnmzed 36

37 Normalzed Mean-Squared Error - for autocorrelaton t method V nˆ = L+ p m= 0 L m= 0 L m= 0 L e 2 nˆ 2 n ( m) sˆ ( m) - for covarance method V nˆ = m= 0 e 2 nˆ 2 n ( m) sˆ ( m) - the predcton error sequence (defnng α = ) s e ( m) = α s ( m k) nˆ nˆ p k = 0 k nˆ - gvng many forms for the normalzed error V V V nˆ nˆ = = p p = 0 j= 0 p = 0 p = φnˆ (, j) α α j φ (,) 00 nˆ φnˆ (, 0) α φ (,) 00 nˆ 2 = ( k ) 0 37

38 Expermental Evaluatons of LPC Parameters 38

39 Normalzed Mean-Squared Error V n versus p for synthetc vowel, ptch perod of 83 samples, L=60, ptch synchronous analyss covarance method-error error goes to zero at p=, the order of the synthess flter autocorrelaton method, V n 0. for p 7 snce error domnated by predcton at begnnng of nterval 39

40 Normalzed Mean-Squared Error normalzed error versus p for ptch asynchronous analyss, L=20, normalzed error falls to 0. near p= for both covarance and autocorrelaton methods 40

41 Normalzed Mean-Squared Error normalzed error versus L, p=2 for L < ptch perod (83 samples), covarance method gves smaller normalzed error than autocorrelaton method for values of L at or near multples of ptch perod, normalzed error jumps due to large predcton error n vcnty of ptch pulse when L > 2 * ptch perod, normalzed error same for both autocorrelaton and covarance methods 4

42 Normalzed Mean-Squared Error results for natural speech 42

43 Normalzed Mean-Squared Error varablty of normalzed error wth postonng of frame sample-by-sample LP analyss of 40 msec of vowel // sgnal energy n part a; normalzed error n part b for p=4 pole analyss usng 20 msec frame sze for covarance method; normalzed error n part c for autocorrelaton method usng HW; normalzed error n part d for autocorrelaton method usng RW average ptch perod of 84 samples => 2.5 ptch perods n 20 msec wndow substantal varatons n normalzed error for covarance method-especally peaked when ptch pulses enter wndow => longer wndows gve larger normalzed errors substantal hgh frequency varatons n normalzed error for autocorrelaton method wth some ptch modulatons 43

44 Propertes of the LPC Polynomal 44

45 Mnmum-Phase Property of A(z) A( ( z ) has all ts zeros nsde d the unt crcle 2 P roof : Assume that z ( z > ) s a zero (root) of A( z) o o ( ) ( ) ( ) A z = z o z A z The mnmum mean-squared error s π 2 ω 2 ω 2 ˆ = ˆ[ ] = j j n n ( ) nˆ( ) m= 2π π E e m A e S e dω π jω jω jω = ( ) 0 2 ˆ( ) ω > π ze o A e Sn e d π jω jω o = o ( / o) ze z z e Thus, A( z) could not be the optmum flter because we could replace z by ( / z ) and decrease the error. o o 45

46 PARCORs and Stablty () prove that k z for some j ( ) ( ) ( ) ( ) = = j j = A ( z) A ( z) k z A ( z ) ( z z ) It s easly shown that k s the coeffcent () () of z n A ( z),.e., α = k. Therefore, k p = z j = () j If, then ether all the roots must be on the k unt crcle or at least one of the unt crcle. j them must be outsde 46

47 PARCORs and Stablty f, then ether all the roots must be on the k unt crcle or at least one of them must be outsde () the unt crcle. Snce ths s true for all A ( z), = 2,,..., p, a necessary condton for the roots of ( p A ) ( z) to be nsde the unt crcle s: for the th k <, = 2,,...,, p -order optmum lnear predctor, () 2 ( ) 2 ( 0 ) ( ) ( j ) j = E = k E = k E > 0 ( p) so k < and t herefore A ( z ) has all ts roots nsde the unt crcle. 47

48 Root Locatons for Optmum LP Model H ( z) = G Az ( ) = p G = G = = α z p p = = Gz ( z z ) ( z z ) p 48

49 Pole-Zero Plot for Model Whch poles correspond to formant frequences? 49

50 Pole Locatons 50

51 Pole Locatons (F S =0,000 Hz) F = ( θ /80) ( F S / 2) 5

52 Estmatng Formant Frequences compute A(z) and factor t. fnd roots that are close to the unt crcle. compute equvalent analog frequences from the angles of the roots. plot formant frequences es as a functon of tme. 52

53 Spectrogram wth LPC Roots 53

54 Spectrogram wth LPC Roots 54

55 Comparson to ABS Methods error measure for ABS methods s log rato of power spectra,.e., π 2 ω 2 log j S ( ) n e E = ω ( ω ) 2 d j π He thus for ABS mnmzaton of E s equvalent to mnmzng mean squared error between two log spectra comparng ELPC and EABS we see the followng: both error measures related to rato of sgnal to model spectra both tend to perform unformly over whole frequency range both are sutable to selectve error mnmzaton over specfed frequency ranges error crteron for LP modelng places hgher weght on frequency regons jω 2 jω 2 where ee S n(e e ) < H ( e ), whereas easthe ABS Serror crteron places equal weght on both regons for unsmoothed sgnal spectra (as obtaned by STFT methods), LP error crteron yelds better spectral matches than ABS error crteron for smoothed sgnal spectra (as obtaned at output of flter banks), both error crtera wll perform well 55

56 Relaton of LP Analyss to Lossless Tube Models 56

57 Dscrete-Tme Model -I Make all sectons the same length wth delay τ =Δ x / c where = NΔx 57

58 Dscrete-Tme Model - II N-secton lossless tube model corresponds to dscrete-tme system when: cn FS = 2 where c s the velocty of sound, N s the number of tube sectons, F s the samplng frequency, and s S the total length of the vocal tract. The reflecton coeffcents { r, k N } are related to the areas of the lossless tubes by: r k = A A k+ + k+ Ak A k Can fnd transfer functon of dgtal system subject to constrants of the form r G =, r N = r = L ρc/ AN Z ρc/ A + Z N L L k 58

59 Dscrete-Tme Model - III 59

60 Dscrete-Tme Model - IV Gven the system functon: Vz ( ) 05.( + rg) ( + rk) z U L( z) k= = = U ( z) D( z) G N N / 2 A k+ Ak rk = reflecton coeffcent Ak+ + Ak ρ c / AN Z ρc / A + Z N L f rg = (.e., RG = ) and rn = rl = N L then Dz ( ) satstes the recurson D ( z) = 0 ( ) ( ) k k = k + k k ( ) = 2 D ( z) D ( z) r z D ( z ) k,,, N Dz ( ) = D ( z) N 60

61 e[n] All-Pole Lattce Flter for H(z) s[n] k p k p z k p k p z k k z A ( 0) ( z) = A ( z) = A ( z) k z A ( z ) = 2,,, p () ( ) ( ) ( p) Az ( ) = A ( z) S ( z ) = E ( z ) Az ( ) If r = k and N = p, then D( z) = A( z). 6

62 Tube Areas from PARCORS Relaton to areas: A A ( A / A ) = = = + + k r A+ + A ( A+ / A) + Solve for A + n terms of A A + k A = A > 0 k + = > + k A + k Log area ratos (good for quantzaton) g A + k = log = log A + k senstvty under 0 Mnmzes spectral unform quantzaton 62

63 Estmatng Tube Areas from Speech (Wakta). Sample speech [ sn [ ] = s( nt)] at samplng rate F = / T = ρ c /( 2 ). s 2. Remove effects of glottal source and radaton by pre-emphass emphass xn [ ] = sn [ ] sn [ ]. 3. Compute the PARCOR coeffcents on a short- tme bass: k, = 2,,..., p. 4. Assumng A = (arbtrary), compute k A+ = A, = 2,,..., p + + k H. Wakta, IEEE Trans. Audo and Electroacoustcs,October, 973. a 63

64 Estmaton of Tube Areas Analyss of syllable /IY/ /B/ /AA/ /IY/ sound n part (a) /B/ sound n parts (b) and (c) /AA/ sound n part (d) Usng Wakta method to estmate tube areas from the speech waveform as shown n lower plot. 64

65 Estmatons of Tube Areas Use of Wakta method to estmate tube areas for the fve vowels, /IY/, /EH/, /AA/, /AO/, /UW/ 65

66 Alternatve Representatons of the LP Parameters 66

67 LP Parameter Sets 67

68 PARCORs to Predcton Coeffcents assume that k, = 2,,, p are gven. Then we can skp the computaton of k n the Levnson recurson. for = 2,,, p () α = k f >, then for j = 2,,, α = α k α () ( ) ( ) j j j end end ( p ) α j = α j j = 2,,, p 68

69 Predcton Coeffcents to PARCORs assume that α j, j = 2,,, p are gven. Then we can work backwards through the Levnson Recurson. α k ( p) j p = α for j = 2,,, p = α j ( p) p for = p, p,, 2 for j = 2,,, end α end () () α α ( ) j + k j j = 2 k ( ) k = α 69

70 LP Parameter Transformatons roots of the predctor polynomal p p α k k k k = k = ( ) Az ( ) = z = zz where each root can be expressed as a z-plane or s-plane root,.e., z = z + jz ; s = σ + j Ω gvng z k kr k k k k k = st k e Ω = z = + mportant for formant estmaton ( z z ) k 2 2 k tan ; σ k log kr k T z kr 2T 70

71 LP Parameter Transformatons cepstrum of IR of overall LP system from predctor coeffcents n k = ˆ k hn ( ) = α ˆ n + hk ( ) αn k n n predctor coeffcents from cepstrum of IR where α n n ˆ k = hn ( ) ˆ hk ( ) αn k n n k = n G hnz p n= 0 k Hz ( ) = ( ) = αk z k = 7

72 LP Parameter Transformatons IR of all pole system p k = hn ( ) = α hn ( k) + Gδ( n) 0 n autocorrelaton of IR k R () = hnhn ( ) ( ) = R ( ) n= 0 p R () α R ( k ) = k k = p αk k = R ( 0) = R ( k) + G 2 72

73 LP Parameter Transformatons autocorrelaton of the predctor polynomal p Az ( ) = α z k = wth IR of the nverse flter k k p k k = an ( ) = δ ( n) αδ( n k) wth autocorrelaton R ( ) = akak ( ) ( + ) 0 p a p k = 0 73

74 LP Parameter Transformatons log area rato coeffcents from PARCOR coeffcents A+ k g = log = log A + k p wth nverse relaton g e k = p g + + e 74

75 Quantzaton of LP Parameters consder the magntude-squared of the model frequency response jω 2 He ( ) = = P ( ω, g ) j ω 2 A ( e ) where g s a parameter that affects P. spectral senstvty can be defned d as π S P( ω, g ) = lm log d ω g Δg 0 Δ g 2π P ( ω, g +Δg π ) whch measures senstvty to errors n the g parameters 75

76 Quantzaton of LP Parameters spectral senstvty for k parameters; low senstvty around 0; hgh senstvty around spectral senstvty for log area rato parameters, g low senstvty for vrtually entre range s seen 76

77 Lne Spectral Par Parameters Az ( ) = + α z + α z α z 2 p 2 p = all-zero predcton flter wth all zeros, zk, nsde the unt crcle Az ( ) = z Az ( ) = α z α z + α z + z ( p+ ) p+ p ( p+ ) p 2 = recprocal polynomal wth nverse zeros, / z consder the followng: Az ( ) ω ( ) = j Lz = allpass system Le ( ) =, all ω Az ( ) form the symmetrc polynomal Pz ( ) as: ( + ) ( ) = ( ) + p Pz Az Az ( ) = Az ( ) + z Az ( ) Pz ( ) has zeros for Lz ( ) = ; ( Az ( ) = Az ( )) { } j jω ω k arg L( e ) = ( k + / 2 ) 2π, k = 0,,..., p form the ant-symmetrc polynomal Qz ( ) as: Qz = Az Az = Az z Az Qz Lz = + Az = Az ( p+ ) ( ) ( ) ( ) ( ) ( ) ( ) has zeros for ( ) ;( ( ) ( )) j { ω k } 2π 0 arg Le ( ) = k, k=,,..., p z k 77

78 Lne Spectral Par Parameters zeros of Pz ( ) and Qz ( ) fall on unt crcle and are nterleaved wth { ω } each other set of called Lne Spectral Frequences (LSF) 2 k LSFs are n ascendng order stablty of Hz ( ) guaranteed by quantzng LSF parameters j ω j ω jω P ( e ) + Q ( e ) Ae ( ) = 2 jω Ae ( ) = 2 2 jω jω Pe ( ) + Qe ( ) 4 78

79 Lne Spectrum Par (LSP) Parameters propertes of LSP parameters. Pz ( ) corresponds to a lossless tube, open at the lps and open ( k p+ = ) at the glotts 2. Q ( z ) corresponds to a lossless tube, open at the lps and closed ( k = p+ ) Q p+ at the glotts 3. all the roots of Pz ( ) and Qz ( ) are on the unt crcle 4. f p s an even nteger, then P ( z ) has a root at z =+ and Q ( z ) has a root at z = 5. a necessary and suffcent condton for k <, = 2,,..., p s that the roots of Pz ( ) and Qz ( )alternate on the unt crcle 6. the LSP frequences get close together when roots of Az ( ) are close to the unt crcle 7. the roots of Pz ( ) are approxmately equal to the formant frequences 79

80 LSP Example p = 2 Pz ( ) roots o Qz ( ) roots x A ( z ) roots 80

81 Lne Spectrum Par (LSP) Parameters 8

82 LPC Synthess the basc synthess equaton s p sn ( ) = α sn ( k) + Gun ( ) k = k need to update parameters every 0 msec or so ptch synchronous mode works best 82

83 LPC Analyss-Synthess. Extract α k parameters properly 2. Quantze α k parameters properly so that there s lttle quantzaton error Small number of bts go nto codng the α k coeffcents 3. Represent e(n) va: Ptch pulses and nose LPC Codng Multple pulses per 0 msec nterval MPLPC Codng Codebook vectors CELP Almost all of the codng bts go nto codng of e(n) 83

84 LPC Vocoder header bt rates of 72 F S where F S =00, 67, and d33f for bt rates of 7200 bps, 4800 bps and 2400 bps 3600 bps 2400 bps VEV LPC 84

85 LPC Bascs A(z) H(z) s (m) e (m) s (m) n n n p k Ez ( ) Az ( ) = αkz = Pz ( ) = ; predcton error flter Sz ( ) k = p nˆ nˆ k nˆ k = e ( m) = s ( m ) α s ( m k );predcton error Hz ( ) = = = ; all pole model Az ( ) ( ) α p k Pz z k = k p [ ] 2 Enˆ = enˆ( m) = snˆ( m) αksnˆ( m k) m= m= k= 2 85

86 LPC Bascs-Speech Model e(n)=gu(n) + s(n) P(z) Hz ( ) jω He ( ) = G G S( z) = = = p ( ) ( ) α k P z U z z k = p k G αk e k = jωk 86

87 Summary the LP model has many nterestng and useful propertes that t follow from the structure t of the Levnson-Durbn algorthms the dfferent equvalent representatons have dfferent propertes under quantzaton polynomal coeffcents (bad) polynomal roots (okay) PARCOR coeffcents (okay) lossless tube areas (good) LSP root angles (good) almost all LPC representatons can be used wth a range of compresson schemes and are all good canddates for the technque of Vector Quantzaton 87

Digital Speech Processing Lecture 14. Linear Predictive Coding (LPC)-Lattice Methods, Applications

Digital Speech Processing Lecture 14. Linear Predictive Coding (LPC)-Lattice Methods, Applications Dgtal Seech Processng Lecture 14 Lnear Predctve Codng (LPC)-Lattce Methods, Alcatons 1 Predcton Error Sgnal 1. Seech Producton Model sn ( ) = asn ( k) + Gun ( ) 2. LPC Model: k = 1 Sz ( ) Hz ( ) = = Uz

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

Lossy Compression. Compromise accuracy of reconstruction for increased compression.

Lossy Compression. Compromise accuracy of reconstruction for increased compression. Lossy Compresson Compromse accuracy of reconstructon for ncreased compresson. The reconstructon s usually vsbly ndstngushable from the orgnal mage. Typcally, one can get up to 0:1 compresson wth almost

More information

Chapter 9. Linear Predictive Analysis of Speech Signals 语音信号的线性预测分析

Chapter 9. Linear Predictive Analysis of Speech Signals 语音信号的线性预测分析 Chapter 9 Linear Predictive Analysis of Speech Signals 语音信号的线性预测分析 1 LPC Methods LPC methods are the most widely used in speech coding, speech synthesis, speech recognition, speaker recognition and verification

More information

Digital Signal Processing

Digital Signal Processing Dgtal Sgnal Processng Dscrete-tme System Analyss Manar Mohasen Offce: F8 Emal: manar.subh@ut.ac.r School of IT Engneerng Revew of Precedent Class Contnuous Sgnal The value of the sgnal s avalable over

More information

EEE 241: Linear Systems

EEE 241: Linear Systems EEE : Lnear Systems Summary #: Backpropagaton BACKPROPAGATION The perceptron rule as well as the Wdrow Hoff learnng were desgned to tran sngle layer networks. They suffer from the same dsadvantage: they

More information

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal

Inner Product. Euclidean Space. Orthonormal Basis. Orthogonal Inner Product Defnton 1 () A Eucldean space s a fnte-dmensonal vector space over the reals R, wth an nner product,. Defnton 2 (Inner Product) An nner product, on a real vector space X s a symmetrc, blnear,

More information

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE

CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE CHAPTER 5 NUMERICAL EVALUATION OF DYNAMIC RESPONSE Analytcal soluton s usually not possble when exctaton vares arbtrarly wth tme or f the system s nonlnear. Such problems can be solved by numercal tmesteppng

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

More metrics on cartesian products

More metrics on cartesian products More metrcs on cartesan products If (X, d ) are metrc spaces for 1 n, then n Secton II4 of the lecture notes we defned three metrcs on X whose underlyng topologes are the product topology The purpose of

More information

Tracking with Kalman Filter

Tracking with Kalman Filter Trackng wth Kalman Flter Scott T. Acton Vrgna Image and Vdeo Analyss (VIVA), Charles L. Brown Department of Electrcal and Computer Engneerng Department of Bomedcal Engneerng Unversty of Vrgna, Charlottesvlle,

More information

Pulse Coded Modulation

Pulse Coded Modulation Pulse Coded Modulaton PCM (Pulse Coded Modulaton) s a voce codng technque defned by the ITU-T G.711 standard and t s used n dgtal telephony to encode the voce sgnal. The frst step n the analog to dgtal

More information

Chapter 13: Multiple Regression

Chapter 13: Multiple Regression Chapter 13: Multple Regresson 13.1 Developng the multple-regresson Model The general model can be descrbed as: It smplfes for two ndependent varables: The sample ft parameter b 0, b 1, and b are used to

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

Suppose that there s a measured wndow of data fff k () ; :::; ff k g of a sze w, measured dscretely wth varable dscretzaton step. It s convenent to pl

Suppose that there s a measured wndow of data fff k () ; :::; ff k g of a sze w, measured dscretely wth varable dscretzaton step. It s convenent to pl RECURSIVE SPLINE INTERPOLATION METHOD FOR REAL TIME ENGINE CONTROL APPLICATIONS A. Stotsky Volvo Car Corporaton Engne Desgn and Development Dept. 97542, HA1N, SE- 405 31 Gothenburg Sweden. Emal: astotsky@volvocars.com

More information

1 Derivation of Point-to-Plane Minimization

1 Derivation of Point-to-Plane Minimization 1 Dervaton of Pont-to-Plane Mnmzaton Consder the Chen-Medon (pont-to-plane) framework for ICP. Assume we have a collecton of ponts (p, q ) wth normals n. We want to determne the optmal rotaton and translaton

More information

Pop-Click Noise Detection Using Inter-Frame Correlation for Improved Portable Auditory Sensing
