Grading learning for blind source separation

Vol. 46 No. 1 SCIENCE IN CHINA (Series F) February 2003

Grading learning for blind source separation

ZHANG Xianda (1,2), ZHU Xiaolong (1) & BAO Zheng (1)

1. Key Laboratory for Radar Signal Processing, Xidian University, Xi'an, China;
2. Department of Automation, State Key Laboratory of Intelligent Technology and Systems, Tsinghua University, Beijing, China

Correspondence should be addressed to Zhang Xianda (email: zxd-dau@mail.tsinghua.edu.cn)

Received May 31, 2002

Abstract: By generalizing the learning rate parameter to a learning rate matrix, this paper proposes a grading learning algorithm for blind source separation. The whole learning process is divided into three stages: initial stage, capturing stage and tracking stage. In different stages, a different learning rate is used for each output component, determined by its dependency on the other output components. It is shown that the grading learning algorithm is equivariant and can keep the separating matrix from becoming singular. Simulations show that the proposed algorithm achieves faster convergence, better steady-state performance and higher numerical robustness, compared with the existing algorithms using fixed, time-descending and adaptive learning rates.

Keywords: blind source separation, independent component analysis, neural computation, adaptive learning.

Blind source separation (BSS) consists of recovering mutually statistically independent but otherwise unobserved source signals from their linear mixtures without knowing the mixing coefficients. BSS has drawn a lot of attention in signal processing and communications since it is a fundamental problem encountered in many practical applications. The BSS algorithms may be off-line or on-line. The fixed-point algorithm of Hyvarinen [1] is off-line and works most satisfactorily, but it is not available for real-time applications. Moreover, its performance deteriorates seriously as the number of sources increases [2]. The existing on-line BSS methods can be divided into three major categories: 1) entropy maximization (EM) [3]; 2) nonlinear principal component analysis (PCA) [4,5]; and 3) independent component analysis (ICA) [6-15]. It is shown (see, e.g. ref. [9]) that the ICA and the EM are equivalent. In this paper, we pay our attention to the ICA for BSS.
As a mainstream technique for BSS, the ICA has many efficient algorithms, such as the natural gradient algorithms of Amari et al. [9,14], the EASI algorithm of Cardoso and Laheld [8], the extended ICA algorithm [11], the flexible ICA algorithm [12], the iterative inversion ICA algorithm [13] and so on. Although the adaptive ICA algorithms have many different forms, most of them belong to the LMS-type algorithms, and thus one requires a careful choice of the learning rate to obtain better performance. When the learning rate or step size takes a constant value, there is a conflict between the convergence speed

and the steady-state performance in any LMS-type algorithm. A small learning rate is useful for steady-state performance, but it leads to slow convergence. Contrarily, the LMS algorithm may become unstable if the learning rate is too large. To overcome this problem, a simple way is to decrease the learning rate with time [10,14,15], but this would cause a new problem: if the learning rate becomes very small while some source signal is not yet separated from the others, then it may need a very long time for this signal to be separated, and the signal may even never be separated. Another alternative is to use adaptive learning rates, such as the auxiliary-variable based adaptive step learning algorithms [16-19] and the gradient adaptive step size algorithm [20]. To the best of our knowledge, however, none of the existing learning rate rules is signal-adaptive. In this paper, we propose a novel grading learning algorithm for BSS, in which the learning rates are signal-adaptive and determined automatically by the separateness of the outputs of the separation system. Using the grading learning, the convergence and tracking of the BSS algorithm can be controlled automatically.

1 Adaptive on-line algorithms for BSS

In neural networks, signal processing and statistics, one considers the standard linear data model

x_k = A s_k,  k = 1, 2, ...,  (1)

where x_k denotes the available sensor vector at time k, and s_k is the source vector whose components, called the source signals, are unknown, mutually statistically independent, and at most one of them is allowed to be Gaussian. The mixing matrix A is an unknown constant matrix with full rank.

Fig. 1. Blind source separation via a linear neural network.

As shown in fig. 1, we use a linear neural network for BSS. The BSS problem is to separate the original sources from the mixtures. For this purpose, one usually adjusts the separating matrix W such that

y_k = W_k x_k  (2)

has independent components, or the mutual information among the components of y is minimized. The mutual information can be measured by the Kullback-Leibler divergence between the joint probability density function p(y, W) and its factorized version p~(y, W) = prod_{i=1}^{n} p_i(y_i, W), namely

I(W) = D[p(y, W) || p~(y, W)] = integral p(y, W) log [ p(y, W) / p~(y, W) ] dy.  (3)

The mutual information is nonnegative, i.e. I(W) >= 0. By Comon [6], I(W) is a contrast function for the ICA, meaning

I(W) = 0 iff WA = LP,  (4)

where L is a nonsingular diagonal matrix and P is a permutation matrix. This implies that the mutual information is equal to zero if and only if y = Wx = LPs is a scaled and/or permuted version of the source signals.

The following are several typical ICA algorithms for BSS.

The natural gradient algorithms [9-12,14,15,21]:

W_{k+1} = W_k + eta_k [ I - phi(y_k) y_k^T ] W_k.  (5)

It is shown by Amari [15] that the natural gradient is an extension of the general gradient to the Riemannian space, while the general gradient is a special example of the natural gradient in the uniform Euclidean space.

The EASI algorithm [8]:

W_{k+1} = W_k + eta_k [ I - y_k y_k^T - phi(y_k) y_k^T + y_k phi^T(y_k) ] W_k.  (6)

The iterative inversion algorithm [13]:

W_{k+1} = W_k + eta_k [ I - phi(y_k) psi^T(y_k) ] W_k.  (7)

In algorithms (5)-(7), eta_k is the learning rate, phi(y_k) = [phi_1(y_1(k)), ..., phi_n(y_n(k))]^T and psi(y_k) = [psi_1(y_1(k)), ..., psi_n(y_n(k))]^T, where the elements of the column vectors phi(y_k) and psi(y_k) have several choices, such as phi_i(y_i(k)) = |y_i(k)|^2 y_i(k) and psi_i(y_i(k)) = sign(Re(y_i(k))) + j sign(Im(y_i(k))).

It is easily seen that the above three algorithms can be written in the following uniform form:

W_{k+1} = W_k + eta_k G(y_k) W_k.  (8)

In the BSS, any efficient algorithm should have two key properties: the equivariant property and the property of keeping the separating matrix from becoming singular. The equivariant property guarantees that the BSS algorithm can offer uniform performance, meaning its behavior is independent of the mixing process, while the nonsingular property is the premise that the BSS algorithm works steadily. A BSS algorithm is equivariant if the combined mixing-separating matrix C_k = W_k A satisfies [8]

C_{k+1} = C_k + eta_k F(C_k s_k) C_k,  (9)

where F(C_k s_k) is a matrix function of C_k s_k such that the mixing matrix A does not enter into the evolution of C_k except as an initial condition. It is shown [8-10,15] that the EASI algorithm and the natural gradient ones are equivariant and have the nonsingular property of W. As a comparison, the BSS algorithms based on the general gradient have neither of these two key properties.
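As a concrete illustration of the uniform update (8) with the natural gradient choice (5), the following Python sketch separates a toy two-source mixture. It is not the paper's code: the uniform sub-Gaussian sources, the random mixing matrix, the cubic nonlinearity phi(y) = y^3 and the constant learning rate are all illustrative assumptions.

```python
# A toy run of the uniform update (8) with the natural gradient choice (5):
#   W <- W + eta * (I - phi(y) y^T) W.
# Illustrative assumptions (not from the paper): two uniform sub-Gaussian
# sources, a random mixing matrix, phi(y) = y^3 and a constant learning rate.
import numpy as np

rng = np.random.default_rng(0)
n, T, eta = 2, 20000, 0.01

S = rng.uniform(-1.0, 1.0, size=(n, T))   # unknown independent sources s_k
A = rng.normal(size=(n, n))               # unknown full-rank mixing matrix
X = A @ S                                 # observed mixtures x_k = A s_k

W = 0.5 * np.eye(n)                       # separating matrix, W_0 = 0.5 I
phi = lambda y: y ** 3                    # a common choice for sub-Gaussian sources
for k in range(T):
    y = W @ X[:, k]
    W += eta * (np.eye(n) - np.outer(phi(y), y)) @ W

# By (4), success means C = W A is close to a scaled permutation matrix.
C = W @ A
```

Because the update is equivariant, the trajectory of C_k = W_k A does not depend on A except through the initial condition C_0 = 0.5 A.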

2 A grading learning algorithm for BSS

In this section, we develop a novel grading learning rule for adaptively selecting the learning step eta_k in the BSS algorithm (8).

2.1 Measure of signal dependence

As discussed in the previous section, the goal of the BSS is to determine W such that the outputs y of the separation system are mutually independent. It is well known that if y_i is independent of y_j, then for any measurable functions g_1 and g_2 one has [22]

E{g_1(y_i) g_2(y_j)} - E{g_1(y_i)} E{g_2(y_j)} = 0,  i != j,  (10)

which implies that g_1(y_i) and g_2(y_j) are uncorrelated with each other. Therefore, to determine whether y_i and y_j are mutually independent or not, we would have to verify the validity of eq. (10) for all measurable functions g_1 and g_2, which is not only impossible but also unnecessary in practical applications. As a matter of fact, it is sufficient to consider the second- and higher-order correlations between the output components y_i and y_j. In this paper, we define

sc_ij(k) = cov[y_i(k), y_j(k)] / sqrt( cov[y_i(k)] cov[y_j(k)] ),  (11)

hc_ij(k) = cov[phi(y_i(k)), y_j(k)] / sqrt( cov[phi(y_i(k))] cov[y_j(k)] ),  (12)

where cov[x(k), y(k)] = E{[x(k) - x_bar][y(k) - y_bar]}, cov[x(k)] = E{[x(k) - x_bar]^2} and x_bar = E{x(k)}. Clearly, sc_ij represents the second-order correlation coefficient between y_i and y_j. Since phi(.) is nonlinear, we refer to hc_ij as the normalized higher-order correlation coefficient between y_i and y_j. Note that hc_ij != hc_ji, since hc_ij is the cross-correlation between phi(y_i(k)) and y_j, while hc_ji is that between phi(y_j(k)) and y_i.

In the BSS, the separateness of a pair of source signals can be measured by their dependence. For any two signals, we have the following basic observations: signals y_i and y_j are of strong dependence if any of the second-order correlation coefficient sc_ij and the higher-order ones hc_ij and hc_ji is large; y_i and y_j are of weak dependence if all of sc_ij, hc_ij and hc_ji are smaller; y_i and y_j are almost independent if sc_ij, hc_ij and hc_ji are all small enough.

Definition 1. The maximum absolute value of sc_ij, hc_ij and hc_ji is defined as the measure of signal dependence between y_i and y_j, i.e.

D_ij(k) = max{ |sc_ij(k)|, |hc_ij(k)|, |hc_ji(k)| }.  (13)

Clearly, this definition coincides with the above three observations. That is to say, the signals y_i and y_j may be said to be strongly dependent if D_ij(k) is large (e.g. D_ij(k) close to 1), weakly dependent if it is smaller (e.g. D_ij(k) <= 0.25), and almost independent if D_ij(k) is sufficiently small

(e.g. 0 <= D_ij(k) <= 0.05). In particular, D_ij(k) = 0 for two independent signals y_i and y_j, and D_ij(k) = 1 for two coherent signals. Note that Definition 1 assures D_ij(k) = D_ji(k), which is an expected characteristic, since the dependence of y_i on y_j should be identical to that of y_j on y_i. Moreover, Definition 1 is available as well for two pre-whitened signals; in such a case, the second-order coefficient sc_ij is equal to zero, resulting in D_ij(k) = max{ |hc_ij(k)|, |hc_ji(k)| }.

In the BSS, in order to measure the dependence of a signal on the others and the separateness of all the source signals, we generalize Definition 1 as follows.

Definition 2. Define D_i(k) as the dependence measure of signal y_i(k) on all the other signals, namely,

D_i(k) = max_{j, j != i} { D_ij(k) } = max_{j, j != i} { |sc_ij(k)|, |hc_ij(k)|, |hc_ji(k)| }.  (14)

Definition 3. Let D(k) represent the dependence among the outputs of the separation system; then it describes the separateness of all the source signals, and can be defined as

D(k) = max_i { D_i(k) } = max_{i, j, i != j} { |sc_ij(k)|, |hc_ij(k)|, |hc_ji(k)| }.  (15)

Since the BSS is desired to be implemented on-line in practical applications, we need to derive the adaptive updates of sc_ij(k) and hc_ij(k). To this end, we assume that the output y is wide-sense stationary and ergodic, i.e. its statistical average can be substituted by its time average. It is easy to verify that for any stationary discrete signal x(k), the weighted sample mean at time k can be written as

x_bar(k) = (1/k) sum_{i=1}^{k} lambda^{k-i} x(i) = (1/k) [ lambda (k-1) x_bar(k-1) + x(k) ],

where 0 < lambda <= 1 is a forgetting factor. Letting R_ij(k) = cov[y_i(k), y_j(k)], P_ij(k) = cov[phi(y_i(k)), y_j(k)], and Q_i(k) = cov[phi(y_i(k))], we have the following recursive formulae:

y_bar_i(k) = (1/k) [ lambda (k-1) y_bar_i(k-1) + y_i(k) ],  (16)

delta_i(k) = y_bar_i(k) - y_bar_i(k-1),  (17)

R_ij(k) = (1/k) { lambda (k-1) [ R_ij(k-1) + delta_i(k) delta_j(k) ] + [ y_i(k) - y_bar_i(k) ][ y_j(k) - y_bar_j(k) ] },  (18)

phi_bar_i(k) = (1/k) [ lambda (k-1) phi_bar_i(k-1) + phi(y_i(k)) ],  (19)

delta_i^phi(k) = phi_bar_i(k) - phi_bar_i(k-1),  (20)

P_ij(k) = (1/k) { lambda (k-1) [ P_ij(k-1) + delta_i^phi(k) delta_j(k) ] + [ phi(y_i(k)) - phi_bar_i(k) ][ y_j(k) - y_bar_j(k) ] },  (21)

Q_i(k) = (1/k) { lambda (k-1) [ Q_i(k-1) + (delta_i^phi(k))^2 ] + [ phi(y_i(k)) - phi_bar_i(k) ]^2 }  (22)

for i, j = 1, ..., n. Once R_ij, P_ij and Q_i are recursively estimated, we can use (11) and (12) to compute sc_ij(k) and hc_ij(k) directly, and thus obtain D_ij(k), D_i(k) and D(k).

2.2 A grading learning algorithm for BSS

It is well known that for the LMS-type algorithms, a fixed learning rate or step size will inevitably lead to the contradiction between the convergence speed and the steady-state performance. To solve this problem, one may decrease the learning rate with time [10,14,15] or use an adaptive learning rate [16-18], but none of these is related directly to the dependence or separateness of the source signals, so the performance improvement is quite limited. Furthermore, a scalar learning rate implies that all the source signals use the same learning rate, which is clearly not optimal. This is because a signal already separated from the others should use a small learning rate for tracking, while a signal not yet separated from the others should use a large one to speed up its separation. Although a small learning rate is appropriate for the separated signals, it may be an under-learning rate for other source signals with strong dependence on each other, which slows down their separation. Contrarily, a large learning rate suitable for some source signals with strong dependence may be an over-learning rate for the already separated sources, which may shatter them. Based on these considerations, we propose a grading learning algorithm for BSS whose central idea is to use a learning rate matrix Lambda_k = { eta_ij(k) } instead of the original scalar eta_k, and thus the algorithm (8) becomes

W_{k+1} = W_k + [ Lambda_k o G(y_k) ] W_k,  (23)

where o represents the Hadamard product of two matrices, namely A o B = { a_ij b_ij }, and the elements of Lambda_k are determined by the values D_ij(k), D_i(k) and D(k).
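The measures (11)-(15) and one step of the update (23) can be sketched as follows. This is a hedged illustration, not the paper's implementation: it uses batch (block) covariance estimates in place of the recursive formulae (16)-(22), and phi(y) = y^3 is an illustrative choice of nonlinearity.

```python
# Block-based sketch of the dependence measures (11)-(15) and of one grading
# update step (23). Batch estimates replace the recursive formulae (16)-(22).
import numpy as np

def dependence_measures(Y, phi=lambda y: y ** 3):
    """Y: n x T block of separator outputs; returns (D_ij, D_i, D) of (13)-(15)."""
    T = Y.shape[1]
    Yc = Y - Y.mean(axis=1, keepdims=True)
    F = phi(Y)
    Fc = F - F.mean(axis=1, keepdims=True)
    var_y = (Yc ** 2).mean(axis=1)
    var_f = (Fc ** 2).mean(axis=1)
    sc = (Yc @ Yc.T) / T / np.sqrt(np.outer(var_y, var_y))   # eq. (11)
    hc = (Fc @ Yc.T) / T / np.sqrt(np.outer(var_f, var_y))   # eq. (12)
    Dij = np.maximum(np.abs(sc), np.maximum(np.abs(hc), np.abs(hc.T)))  # eq. (13)
    np.fill_diagonal(Dij, 0.0)                # only cross-dependence matters
    Di = Dij.max(axis=1)                      # eq. (14)
    return Dij, Di, Di.max()                  # eq. (15)

def grading_update(W, G, Lam):
    """One step of eq. (23): W <- W + (Lam o G) W, with o the Hadamard product."""
    return W + (Lam * G) @ W
```

When Lam is a constant matrix, grading_update reduces to the uniform-rate update (8), which is exactly how the initial stage below behaves.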
Resorting to D(k), we divide the whole separation process into the following three stages. Initial stage: 0.25 <= D(k) <= 1; capture stage: 0.05 <= D(k) <= 0.25; tracking stage: 0 <= D(k) <= 0.05. We may consider the initial stage as the pre-separation of the source signals, the second stage as coarse separation to capture each source signal, and the last stage as fine separation to improve the recovery quality of the source signals.

For the new BSS algorithm (23), we have the following result.

Theorem 1. The grading learning algorithm (23) is equivariant and has the property of keeping the separating matrix from becoming singular.

Proof. Let C_k = W_k A be the global mixing-separating system, and post-multiply both sides of (23) by A; we immediately obtain C_{k+1} = C_k + [ Lambda_k o G(y_k) ] C_k, which is independent of W and A, so the grading learning algorithm (23) is equivariant. On the other hand, let <X, Y> = trace(X^T Y) and |W| = det(W). Mimicking ref. [9], we have

d|W|/dt = < |W| W^{-T}, dW/dt > = < |W| W^{-T}, [ Lambda o G(y) ] W > = |W| trace[ Lambda o G(y) ] = |W| sum_{i=1}^{n} eta_ii g_ii.

Therefore, we get

|W_t| = |W_0| exp( integral_0^t sum_{i=1}^{n} eta_ii(tau) g_ii(tau) d tau ),

from which it follows that if |W_0| != 0, then |W_t| != 0, keeping W from becoming singular. This completes the proof of Theorem 1.

3 Comparisons with the existing algorithms

In this section, we consider the design of Lambda_k, and compare the grading learning algorithm (23) with several typical BSS algorithms using fixed or time-varying learning rates.

3.1 Initial stage

Since the outputs of the separation system are of strong dependence in the initial stage, we should use a high learning rate to speed up the separation. Clearly, it is appropriate to assign the same learning rate to all the source signals in this stage, thus we take the learning rate matrix Lambda_k as a constant matrix, i.e. eta_ij(k) = eta_k for any i and j, and the grading learning algorithm (23) is then simplified to (8). The scalar eta_k has many possible choices. The simplest choice is a constant learning rate eta_k = eta. An alternative is to use a time-descending function as the changing learning rate, such as the cooling scheme [15], in which eta_k decreases as 1/k, starting from an adequate initial value. Recently, Yang [10] suggested using the exponential attenuation function

eta_k = eta_0 for k <= K_0;  eta_k = eta_0 exp( -K_d (k - K_0) ) for k > K_0,  (24)

where eta_0, K_0 and K_d are constants. The time-descending learning rate has a drawback: if some source signals have not yet been separated from the others when eta_k has become very small, then these sources are much more difficult to extract. To overcome this limitation, a third choice is an adaptive learning rate. One example is the so-called gradient step size [18-20], calculated as

eta_{k+1} = eta_k - rho * dJ(W_k)/d eta,  (25)

in which J(.) is some cost or contrast function, and rho is a step size of the step size. As pointed

out by Douglas et al. [18], the shortcomings of this method are its poor robustness and the fact that the proper selection of rho is difficult. In fact, there is still a long way to go before the gradient step size scheme can be used in practice.

Exploiting the information on how far the estimated separating matrix is from its optimum, Murata et al. [17] suggested the following adaptive learning rate algorithm:

Gamma_{k+1} = (1 - delta) Gamma_k + delta G(y_k) W_k,  (26)

eta_{k+1} = eta_k + alpha eta_k [ beta ||Gamma_{k+1}|| - eta_k ],  (27)

where alpha, beta and 0 < delta < 1 are parameters to be determined, and ||Gamma|| denotes the Frobenius norm of Gamma. In practice, to assure the stability of the BSS algorithm, the learning rate should be confined to a finite range, i.e. eta_min <= eta_k <= eta_max. To the best of our knowledge, however, among the adaptive learning rate algorithms published previously, none depends directly on the separation state of the source signals. Since the task of this stage is to pre-separate the sources, we take the scalar eta_k as a linear function of D(k):

eta_k = eta_0 for k <= K_0;  eta_k = a + (eta_0 - a) D(k) for k > K_0 and D(k) >= 0.25,  (28)

where a, eta_0 and K_0 are constants. That eta_k = eta_0 is taken for k <= K_0 is based on the consideration that the recursive estimates of sc_ij(k) and hc_ij(k) have larger bias for a shorter set of samples. On the other hand, the source signals are strongly correlated with each other at the initial stage of the BSS, which demonstrates that a larger learning rate should be adopted.

3.2 Capture stage

The main task of BSS in this stage is to capture all the source signals. Due to the fact that one or more sources may be completely or partly separated from the others, it is no longer optimal in this stage to assign the same learning rate to all the source signals. To speed up the separation of the non-separated sources from the others, and to track the separated sources at the same time, different source signals should use different learning rates. To this end, we take all the elements of the i-th row of Lambda_k as the same value eta_i(k), i = 1, ..., n.
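For concreteness, the two initial-stage schedules of sec. 3.1, the exponential decay (24) and the signal-adaptive rule (28), can be sketched side by side. The constants are the nearly optimal values reported later in sec. 4, under the transcription assumption that "D" there denotes the sampling period Delta_t = 1.0e-4 s.

```python
# Sketch of the time-descending rate (24) versus the initial-stage rule (28).
# Constants follow sec. 4, assuming Delta_t = 1.0e-4 s (a transcription assumption).
import math

DT = 1.0e-4                      # sampling period Delta_t (assumed)
eta0, K0, Kd, a = 140 * DT, 500, 15 * DT, 40 * DT

def eta_decay(k):
    """Exponential attenuation (24): constant until K0, then decaying."""
    return eta0 if k <= K0 else eta0 * math.exp(-Kd * (k - K0))

def eta_initial(k, D):
    """Signal-adaptive initial-stage rate (28); D = D(k), with 0.25 <= D <= 1."""
    return eta0 if k <= K0 else a + (eta0 - a) * D
```

Unlike (24), the rate (28) stays large as long as the outputs remain strongly dependent (D(k) near 1), and only shrinks toward a as D(k) falls.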
Based on this special structure, the grading learning algorithm (23) becomes

W_{k+1} = W_k + D_k G(y_k) W_k,  (29)

where D_k = diag[ eta_1(k), ..., eta_n(k) ] is a diagonal matrix, whose elements are determined by

eta_i(k) = beta( D_i(k) ),  i = 1, ..., n,  (30)

in which beta(.) is a nonlinear function; see sec. 4 for its choice.

Recently, from the viewpoint of the Fisher information matrix, Yang [10] proposed a serial updating rule for adaptive BSS, given by

W_{k+1} = W_k + eta_k D_mu^{-1} [ I - Phi(y_k) y_k^T ] W_k,  (31)

where D_mu = diag[ mu_1, ..., mu_n ]. This rule is based on the following three constraints [10]:

C1) All the distributions of the source signals are symmetric.

C2) The triplets (mu_i, lambda_i, v_i) of Q = E{ vec(I - Phi(y) y^T) vec^T(I - Phi(y) y^T) } satisfy v_i = mu_i lambda_i, where vec(B) denotes the column vector obtained by cascading the columns of B from the left to the right.

C3) The mixing matrix A must be a square matrix.

It should be stressed that the learning algorithm (31) has a form similar to (29), but they have the following substantial differences: (1) D_mu^{-1} in (31) is a diagonal matrix only in form, and its elements have no relationship with the signal dependence, while the diagonal elements of D_k in (29) are determined by the signal dependence via D_i(k). (2) Algorithm (29) does not have the constraints C1)-C3).

3.3 Tracking stage

Once all the source signals are successfully captured, the remainder is to track them and improve the recovery quality as much as possible. From the viewpoint of the convergence of the neural network for BSS, the final objective of this stage is to make the separating matrix converge to the optimal one. Using the relationship between the equilibrium points and D_ij(k), we take

eta_ij(k) = beta( D_i(k) ) for i = j;  eta_ij(k) = beta( D_ij(k) ) for i != j,  (32)

where beta(.) is the same nonlinear function as that in (30).

We note that Cichocki et al. [16] proposed a learning algorithm for BSS, in which each synaptic weight has its own (local) learning rate, given by

dw_ij(t)/dt = eta_ij(t) [ w_ij(t) - phi(y_i(t)) sum_{p=1}^{n} w_pj(t) y_p(t) ],  (33)

g_ij(t) = w_ij(t) - phi(y_i(t)) sum_{p=1}^{n} w_pj(t) y_p(t),  (34)

tau_1 dv_ij(t)/dt = -v_ij(t) + g_ij(t),  (35)

tau_2 d eta_ij(t)/dt = -eta_ij(t) + gamma |v_ij(t)|,  (36)

in which tau_1, tau_2 and gamma are parameters to be carefully chosen. Denote Lambda_k = { eta_ij(k) } and G(y_k) = I - phi(y_k) y_k^T, respectively; then (33) can be written in matrix form as

W_{k+1} = W_k + Lambda_k o [ G(y_k) W_k ],  (37)

whose form is much similar to the grading learning algorithm (23), but they differ in at least two aspects. One is the determination of the learning rate matrix Lambda_k; the other is that algorithm (37) does not satisfy the equivariant property. Post-multiplying both sides of (37) by A yields C_{k+1} = C_k + { Lambda_k o [ G(y_k) W_k ] } A. Clearly, the global system C_k depends on A unless Lambda_k is a constant matrix or all the rows of Lambda_k have the same values. In general, algorithm (37) is not equivariant, while the equivariant property is the most essential property for a BSS algorithm.

4 Simulations and discussions

In order to verify the effectiveness of the grading learning algorithm (23) for adaptive BSS, we use it to separate several source signals from their mixtures. The unknown source signals are as follows:

S1) Sign function signal: sign(cos(2 pi 155 t));
S2) High-frequency sinusoid signal: sin(2 pi 800 t);
S3) Low-frequency sinusoid signal: sin(2 pi 90 t);
S4) Phase-modulated signal: sin(2 pi 300 t + 6 cos(2 pi 60 t));
S5) Amplitude-modulated signal: sin(2 pi 9 t) sin(2 pi 300 t);
S6) Random signal: uniformly distributed noise in [-1, 1];
S7) Random signal: the square root of the Rayleigh distributed signal with parameter sigma = 1.

In the simulations, the mixing matrix A is fixed in each run, but its elements are randomly generated subject to the uniform distribution in [-1, 1] or the normal distribution with zero mean and variance 0.5. For simplicity, we denote these as A~U(-1, 1) and A~N(0, 0.5), respectively. The mixed source signals are sampled at 10 kHz. As the performance index of the BSS algorithm, we use the cross-talking error [9]

E_ct = sum_{i=1}^{n} ( sum_{l=1}^{n} |c_il| / max_p |c_ip| - 1 ) + sum_{l=1}^{n} ( sum_{i=1}^{n} |c_il| / max_p |c_pl| - 1 ),  (38)

where C = { c_il } = W A is the combined mixing-separating matrix.

In the simulations, we use the nonlinear function phi(y) = [ phi_1(y_1), ..., phi_n(y_n) ]^T suggested by Yang and Amari [9], i.e. in the grading rule (23) we take G(y_k) = I - phi(y_k) y_k^T, and

phi_i(y_i) = alpha(kappa_3i, kappa_4i) y_i^2 + beta(kappa_3i, kappa_4i) y_i^3,  (39)

in which kappa_3i = E{ y_i^3(k) } and kappa_4i = E{ y_i^4(k) } - 3 are the skewness and the kurtosis of y_i, respectively; and the functions alpha(kappa_3, kappa_4) and beta(kappa_3, kappa_4) are given by

alpha(kappa_3, kappa_4) = -(1/2) kappa_3 + (9/4) kappa_3 kappa_4,  (40)

beta(kappa_3, kappa_4) = -(1/6) kappa_4 + (3/2) (kappa_3)^2 + (3/4) (kappa_4)^2.  (41)

The skewness and the kurtosis are updated using the following adaptive formulae:

kappa_3i(k+1) = kappa_3i(k) - mu [ kappa_3i(k) - y_i^3(k) ],  (42)

kappa_4i(k+1) = kappa_4i(k) - mu [ kappa_4i(k) - y_i^4(k) + 3 ],  (43)

where Delta_t = 1.0e-4 s is the sampling period, and mu is a constant, taken as 60 Delta_t.

For comparison, we execute the natural gradient algorithm with several different learning rates, whose corresponding nearly optimal parameters for fast convergence and good steady-state performance are as follows:

(1) The fixed learning rate, called the Yang-Amari (Y-A) algorithm [9]: eta = 65 Delta_t.
(2) The time-descending learning rate (24), called the Yang algorithm [10]: eta_0 = 140 Delta_t, K_0 = 500 and K_d = 15 Delta_t.
(3) The adaptive learning rate (26), (27), called the MMZA algorithm [17]: eta_max = 140 Delta_t, eta_min = 5 Delta_t, delta = 5 Delta_t, alpha = 0.02 and beta = 150 / max_p ||Gamma_p||.
(4) The signal-adaptive learning rate of this paper, called the grading learning algorithm: eta_0 = 140 Delta_t, K_0 = 500 and a = 40 Delta_t are taken in (28), together with the forgetting factor lambda in (16)-(22).

The nonlinear function beta(.) in (30) and (32) must be a monotonically nondecreasing, bounded and nonnegative function. It may take different forms, such as the exponential function, the sigmoid function, the piecewise linear function and so on. A better choice of beta(.) should provide good capture and tracking capabilities, and thus we take it as a convex function in the capture stage and a concave function in the tracking stage, resulting in a sigmoid-like function:

beta(x) = c_1 x for 0 <= x <= 0.05;  beta(x) = c_2 (x - 0.05)^0.7 for 0.05 <= x <= 0.2;  beta(x) = 0.016 for 0.2 <= x,  (44)

where c_1 and c_2 are constants.

To check and compare the performance of the above four algorithms under different conditions, we simulated the following four cases: (i) the source signals consist of the five deterministic signals S1-S5 and the random noise S6, with A~U(-1, 1); (ii) the six source signals S1-S6, but A~N(0, 0.5); (iii) the six source signals S1-S6 plus the random noise S7, with A~U(-1, 1); (iv) the seven source signals S1-S7, but A~N(0, 0.5).
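The performance index (38) is straightforward to compute; the following minimal sketch (illustrative, not the authors' code) returns zero exactly when C is a scaled permutation matrix.

```python
# Cross-talking error of eq. (38): sums, over rows and over columns, the mass
# outside the dominant element of the combined matrix C = W A.
import numpy as np

def cross_talking_error(C):
    C = np.abs(np.asarray(C, dtype=float))
    row_term = (C / C.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    col_term = (C / C.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    return row_term.sum() + col_term.sum()
```

Each row contributes zero only when a single element dominates it completely, and likewise for each column, so E_ct = 0 characterizes C = LP as in (4).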

The separating matrix takes W_0 = 0.5 I as the initial value. Figs. 2 and 3 show respectively the means and the standard deviations (SDs) of the performance index E_ct over 400 independent runs (200 runs for A~U(-1, 1) and another 200 runs for A~N(0, 0.5)) for the source signals S1-S6, while figs. 4 and 5 show those for the source signals S1-S7.

Fig. 2. Average performance indices, n = 6. 1, the Y-A algorithm; 2, the Yang algorithm; 3, the MMZA algorithm; 4, this paper's algorithm.
Fig. 3. SDs of the performance index, n = 6. 1, the Y-A algorithm; 2, the Yang algorithm; 3, the MMZA algorithm; 4, this paper's algorithm.
Fig. 4. Average performance indices, n = 7. 1, the Y-A algorithm; 2, the Yang algorithm; 3, the MMZA algorithm; 4, this paper's algorithm.
Fig. 5. SDs of the performance index, n = 7. 1, the Y-A algorithm; 2, the Yang algorithm; 3, the MMZA algorithm; 4, this paper's algorithm.

Table 1 lists the failing separation numbers and the corresponding percentages of the four algorithms in the above four cases. By failing separation, we mean that at least one source signal could not be separated from the others by the 4000th sample. From table 1 and figs. 2-5, we can see that the grading learning algorithm (23) has the fastest convergence speed (the fastest descent of E_ct), the best steady-state performance (the least average performance index E_ct after convergence) and the best numerical robustness (the least SD of E_ct), as compared with the other three algorithms.

Table 1  Failing separation numbers (200 independent runs per case)

Source number, channel | Yang-Amari | Yang     | MMZA     | Grading learning
n = 6, A~U(-1, 1)      | 3 (1.5%)   | 7 (3.5%) | 5 (2.5%) | 1 (0.5%)
n = 6, A~N(0, 0.5)     | 5 (2.5%)   | 9 (4.5%) | 7 (3.5%) | 1 (0.5%)
n = 7, A~U(-1, 1)      | 15 (7.5%)  | 40 (20%) | 20 (10%) | 3 (1.5%)
n = 7, A~N(0, 0.5)     | 15 (7.5%)  | 38 (19%) | 22 (11%) | 3 (1.5%)

5 Conclusions

This paper defined the measure of signal dependence and the separateness of the source signals for the BSS system, and proposed a grading learning algorithm for adaptive BSS, in which the whole learning process is divided into three stages: initial stage, capturing stage and tracking stage. The original scalar learning rate is generalized to a learning rate matrix whose elements are automatically determined by the measure of signal dependence, yielding three signal-adaptive learning rates. We compared the three signal-adaptive learning rate selections in the grading learning algorithm with the previously published fixed and time-varying learning rate algorithms. It was shown that the grading learning algorithm has both the equivariance and the property of keeping the separating matrix from becoming singular. Simulation results showed that the new algorithm performs better than the other existing algorithms.

Acknowledgements  This work was supported by the National Natural Science Foundation of China.

References

1. Hyvarinen, A., Fast and robust fixed-point algorithms for independent component analysis, IEEE Trans. Neural Networks, 1999, 10(3).
2. Giannakopoulos, V., Comparison of adaptive independent component analysis algorithms, available at fi/~xgiannak/.
3. Bell, A. J., Sejnowski, T. J., An information-maximization approach to blind separation and blind deconvolution, Neural Computation, 1995, 7.
4. Karhunen, J., Joutsensalo, J., Representation and separation of signals using nonlinear PCA type learning, Neural Networks, 1994, 7.
5. Karhunen, J., Pajunen, P., Oja, E., The nonlinear PCA criterion in blind source separation: Relations with other approaches, Neurocomputing, 1998, 22.
6. Comon, P., Independent component analysis, a new concept? Signal Processing, 1994, 36.
7. Pham, D. T., Blind separation of instantaneous mixtures of sources via an independent component analysis, IEEE Trans. Signal Processing, 1996, 44.
8. Cardoso, J. F., Laheld, B., Equivariant adaptive source separation, IEEE Trans. Signal Processing, 1996, 44.
9. Yang, H. H., Amari, S., Adaptive on-line learning algorithms for blind separation: maximum entropy and minimum mutual information, Neural Computation, 1997, 9.
10. Yang, H. H., Serial updating rule for blind separation derived from the method of scoring, IEEE Trans. Signal Processing, 1999, 47.
11. Lee, T. W., Girolami, M., Sejnowski, T. J., Independent component analysis using an extended infomax algorithm for mixed sub-Gaussian and super-Gaussian sources, Neural Computation, 1999, 11.
12. Choi, S., Cichocki, A., Amari, S., Flexible independent component analysis, Journal of VLSI Signal Processing, 2000, 26.
13. Cruces, S., Cichocki, A., Castedo, L., An iterative inversion approach to blind source separation, IEEE Trans. Neural Networks, 2000, 11.
14. Amari, S., Cichocki, A., Yang, H. H., A new learning algorithm for blind signal separation, Advances in NIPS, 1996, 8.
15. Amari, S., Natural gradient works efficiently in learning, Neural Computation, 1998, 10.
16. Cichocki, A., Amari, S., Adachi, M. et al., Self-adaptive neural networks for blind separation of sources, Proc. 1996 International Symp. on Circuits and Systems, vol. 2, New York: IEEE Press, May 1996.
17. Murata, N., Muller, K. R., Ziehe, A. et al., Adaptive on-line learning in changing environments, Advances in NIPS 9, Cambridge, MA: MIT Press, 1997.
18. Douglas, S. C., Cichocki, A., Adaptive step size techniques for decorrelation and blind source separation, Proc. 32nd Asilomar Conf. on Signals, Systems and Computers, vol. 2, Pacific Grove, CA, New York: IEEE Press, Nov. 1998.
19. Jacobs, R. A., Increased rates of convergence through learning rate adaptation, Neural Networks, 1988, 1.
20. Mathews, V. J., Xie, Z., A stochastic gradient adaptive filter with gradient adaptive step size, IEEE Trans. Signal Processing, 1993, 41.
21. Zhang, X. D., Bao, Z., Communication Signal Processing (in Chinese), Beijing: National Defense Industry Press, 2000.
22. Papoulis, A., Probability, Random Variables, and Stochastic Processes, 3rd ed., New York: McGraw-Hill, 1991.


More information

Comparison of Dual to Ratio-Cum-Product Estimators of Population Mean

Comparison of Dual to Ratio-Cum-Product Estimators of Population Mean Research Joural of Mathematcal ad Statstcal Sceces ISS 30 6047 Vol. 1(), 5-1, ovember (013) Res. J. Mathematcal ad Statstcal Sc. Comparso of Dual to Rato-Cum-Product Estmators of Populato Mea Abstract

More information

UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS

UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS UNIVERSITY OF OSLO DEPARTMENT OF ECONOMICS Postpoed exam: ECON430 Statstcs Date of exam: Jauary 0, 0 Tme for exam: 09:00 a.m. :00 oo The problem set covers 5 pages Resources allowed: All wrtte ad prted

More information

Multiple Choice Test. Chapter Adequacy of Models for Regression

Multiple Choice Test. Chapter Adequacy of Models for Regression Multple Choce Test Chapter 06.0 Adequac of Models for Regresso. For a lear regresso model to be cosdered adequate, the percetage of scaled resduals that eed to be the rage [-,] s greater tha or equal to

More information

THE ROYAL STATISTICAL SOCIETY 2016 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE MODULE 5

THE ROYAL STATISTICAL SOCIETY 2016 EXAMINATIONS SOLUTIONS HIGHER CERTIFICATE MODULE 5 THE ROYAL STATISTICAL SOCIETY 06 EAMINATIONS SOLUTIONS HIGHER CERTIFICATE MODULE 5 The Socety s provdg these solutos to assst cadtes preparg for the examatos 07. The solutos are teded as learg ads ad should

More information

Bayes Estimator for Exponential Distribution with Extension of Jeffery Prior Information

Bayes Estimator for Exponential Distribution with Extension of Jeffery Prior Information Malaysa Joural of Mathematcal Sceces (): 97- (9) Bayes Estmator for Expoetal Dstrbuto wth Exteso of Jeffery Pror Iformato Hadeel Salm Al-Kutub ad Noor Akma Ibrahm Isttute for Mathematcal Research, Uverst

More information

The Mathematical Appendix

The Mathematical Appendix The Mathematcal Appedx Defto A: If ( Λ, Ω, where ( λ λ λ whch the probablty dstrbutos,,..., Defto A. uppose that ( Λ,,..., s a expermet type, the σ-algebra o λ λ λ are defed s deoted by ( (,,...,, σ Ω.

More information

Part 4b Asymptotic Results for MRR2 using PRESS. Recall that the PRESS statistic is a special type of cross validation procedure (see Allen (1971))

Part 4b Asymptotic Results for MRR2 using PRESS. Recall that the PRESS statistic is a special type of cross validation procedure (see Allen (1971)) art 4b Asymptotc Results for MRR usg RESS Recall that the RESS statstc s a specal type of cross valdato procedure (see Alle (97)) partcular to the regresso problem ad volves fdg Y $,, the estmate at the

More information

Third handout: On the Gini Index

Third handout: On the Gini Index Thrd hadout: O the dex Corrado, a tala statstca, proposed (, 9, 96) to measure absolute equalt va the mea dfferece whch s defed as ( / ) where refers to the total umber of dvduals socet. Assume that. The

More information

ρ < 1 be five real numbers. The

ρ < 1 be five real numbers. The Lecture o BST 63: Statstcal Theory I Ku Zhag, /0/006 Revew for the prevous lecture Deftos: covarace, correlato Examples: How to calculate covarace ad correlato Theorems: propertes of correlato ad covarace

More information

Special Instructions / Useful Data

Special Instructions / Useful Data JAM 6 Set of all real umbers P A..d. B, p Posso Specal Istructos / Useful Data x,, :,,, x x Probablty of a evet A Idepedetly ad detcally dstrbuted Bomal dstrbuto wth parameters ad p Posso dstrbuto wth

More information

Chapter 4 Multiple Random Variables

Chapter 4 Multiple Random Variables Revew for the prevous lecture: Theorems ad Examples: How to obta the pmf (pdf) of U = g (, Y) ad V = g (, Y) Chapter 4 Multple Radom Varables Chapter 44 Herarchcal Models ad Mxture Dstrbutos Examples:

More information

1 Mixed Quantum State. 2 Density Matrix. CS Density Matrices, von Neumann Entropy 3/7/07 Spring 2007 Lecture 13. ψ = α x x. ρ = p i ψ i ψ i.

1 Mixed Quantum State. 2 Density Matrix. CS Density Matrices, von Neumann Entropy 3/7/07 Spring 2007 Lecture 13. ψ = α x x. ρ = p i ψ i ψ i. CS 94- Desty Matrces, vo Neuma Etropy 3/7/07 Sprg 007 Lecture 3 I ths lecture, we wll dscuss the bascs of quatum formato theory I partcular, we wll dscuss mxed quatum states, desty matrces, vo Neuma etropy

More information

A tighter lower bound on the circuit size of the hardest Boolean functions

A tighter lower bound on the circuit size of the hardest Boolean functions Electroc Colloquum o Computatoal Complexty, Report No. 86 2011) A tghter lower boud o the crcut sze of the hardest Boolea fuctos Masak Yamamoto Abstract I [IPL2005], Fradse ad Mlterse mproved bouds o the

More information

Median as a Weighted Arithmetic Mean of All Sample Observations

Median as a Weighted Arithmetic Mean of All Sample Observations Meda as a Weghted Arthmetc Mea of All Sample Observatos SK Mshra Dept. of Ecoomcs NEHU, Shllog (Ida). Itroducto: Iumerably may textbooks Statstcs explctly meto that oe of the weakesses (or propertes) of

More information

Block-Based Compact Thermal Modeling of Semiconductor Integrated Circuits

Block-Based Compact Thermal Modeling of Semiconductor Integrated Circuits Block-Based Compact hermal Modelg of Semcoductor Itegrated Crcuts Master s hess Defese Caddate: Jg Ba Commttee Members: Dr. Mg-Cheg Cheg Dr. Daqg Hou Dr. Robert Schllg July 27, 2009 Outle Itroducto Backgroud

More information

X ε ) = 0, or equivalently, lim

X ε ) = 0, or equivalently, lim Revew for the prevous lecture Cocepts: order statstcs Theorems: Dstrbutos of order statstcs Examples: How to get the dstrbuto of order statstcs Chapter 5 Propertes of a Radom Sample Secto 55 Covergece

More information

Chapter 14 Logistic Regression Models

Chapter 14 Logistic Regression Models Chapter 4 Logstc Regresso Models I the lear regresso model X β + ε, there are two types of varables explaatory varables X, X,, X k ad study varable y These varables ca be measured o a cotuous scale as

More information

best estimate (mean) for X uncertainty or error in the measurement (systematic, random or statistical) best

best estimate (mean) for X uncertainty or error in the measurement (systematic, random or statistical) best Error Aalyss Preamble Wheever a measuremet s made, the result followg from that measuremet s always subject to ucertaty The ucertaty ca be reduced by makg several measuremets of the same quatty or by mprovg

More information

Lecture 3. Sampling, sampling distributions, and parameter estimation

Lecture 3. Sampling, sampling distributions, and parameter estimation Lecture 3 Samplg, samplg dstrbutos, ad parameter estmato Samplg Defto Populato s defed as the collecto of all the possble observatos of terest. The collecto of observatos we take from the populato s called

More information

Analysis of Variance with Weibull Data

Analysis of Variance with Weibull Data Aalyss of Varace wth Webull Data Lahaa Watthaacheewaul Abstract I statstcal data aalyss by aalyss of varace, the usual basc assumptos are that the model s addtve ad the errors are radomly, depedetly, ad

More information

Cubic Nonpolynomial Spline Approach to the Solution of a Second Order Two-Point Boundary Value Problem

Cubic Nonpolynomial Spline Approach to the Solution of a Second Order Two-Point Boundary Value Problem Joural of Amerca Scece ;6( Cubc Nopolyomal Sple Approach to the Soluto of a Secod Order Two-Pot Boudary Value Problem W.K. Zahra, F.A. Abd El-Salam, A.A. El-Sabbagh ad Z.A. ZAk * Departmet of Egeerg athematcs

More information

TESTS BASED ON MAXIMUM LIKELIHOOD

TESTS BASED ON MAXIMUM LIKELIHOOD ESE 5 Toy E. Smth. The Basc Example. TESTS BASED ON MAXIMUM LIKELIHOOD To llustrate the propertes of maxmum lkelhood estmates ad tests, we cosder the smplest possble case of estmatg the mea of the ormal

More information

Feature Selection: Part 2. 1 Greedy Algorithms (continued from the last lecture)

Feature Selection: Part 2. 1 Greedy Algorithms (continued from the last lecture) CSE 546: Mache Learg Lecture 6 Feature Selecto: Part 2 Istructor: Sham Kakade Greedy Algorthms (cotued from the last lecture) There are varety of greedy algorthms ad umerous amg covetos for these algorthms.

More information

Lecture 2 - What are component and system reliability and how it can be improved?

Lecture 2 - What are component and system reliability and how it can be improved? Lecture 2 - What are compoet ad system relablty ad how t ca be mproved? Relablty s a measure of the qualty of the product over the log ru. The cocept of relablty s a exteded tme perod over whch the expected

More information

Application of Calibration Approach for Regression Coefficient Estimation under Two-stage Sampling Design

Application of Calibration Approach for Regression Coefficient Estimation under Two-stage Sampling Design Authors: Pradp Basak, Kaustav Adtya, Hukum Chadra ad U.C. Sud Applcato of Calbrato Approach for Regresso Coeffcet Estmato uder Two-stage Samplg Desg Pradp Basak, Kaustav Adtya, Hukum Chadra ad U.C. Sud

More information

Unsupervised Learning and Other Neural Networks

Unsupervised Learning and Other Neural Networks CSE 53 Soft Computg NOT PART OF THE FINAL Usupervsed Learg ad Other Neural Networs Itroducto Mture Destes ad Idetfablty ML Estmates Applcato to Normal Mtures Other Neural Networs Itroducto Prevously, all

More information

Continuous Distributions

Continuous Distributions 7//3 Cotuous Dstrbutos Radom Varables of the Cotuous Type Desty Curve Percet Desty fucto, f (x) A smooth curve that ft the dstrbuto 3 4 5 6 7 8 9 Test scores Desty Curve Percet Probablty Desty Fucto, f

More information

Class 13,14 June 17, 19, 2015

Class 13,14 June 17, 19, 2015 Class 3,4 Jue 7, 9, 05 Pla for Class3,4:. Samplg dstrbuto of sample mea. The Cetral Lmt Theorem (CLT). Cofdece terval for ukow mea.. Samplg Dstrbuto for Sample mea. Methods used are based o CLT ( Cetral

More information

Solving Constrained Flow-Shop Scheduling. Problems with Three Machines

Solving Constrained Flow-Shop Scheduling. Problems with Three Machines It J Cotemp Math Sceces, Vol 5, 2010, o 19, 921-929 Solvg Costraed Flow-Shop Schedulg Problems wth Three Maches P Pada ad P Rajedra Departmet of Mathematcs, School of Advaced Sceces, VIT Uversty, Vellore-632

More information

Parameter, Statistic and Random Samples

Parameter, Statistic and Random Samples Parameter, Statstc ad Radom Samples A parameter s a umber that descrbes the populato. It s a fxed umber, but practce we do ot kow ts value. A statstc s a fucto of the sample data,.e., t s a quatty whose

More information

Analysis of System Performance IN2072 Chapter 5 Analysis of Non Markov Systems

Analysis of System Performance IN2072 Chapter 5 Analysis of Non Markov Systems Char for Network Archtectures ad Servces Prof. Carle Departmet of Computer Scece U Müche Aalyss of System Performace IN2072 Chapter 5 Aalyss of No Markov Systems Dr. Alexader Kle Prof. Dr.-Ig. Georg Carle

More information

Principal Components. Analysis. Basic Intuition. A Method of Self Organized Learning

Principal Components. Analysis. Basic Intuition. A Method of Self Organized Learning Prcpal Compoets Aalss A Method of Self Orgazed Learg Prcpal Compoets Aalss Stadard techque for data reducto statstcal patter matchg ad sgal processg Usupervsed learg: lear from examples wthout a teacher

More information

Discrete Mathematics and Probability Theory Fall 2016 Seshia and Walrand DIS 10b

Discrete Mathematics and Probability Theory Fall 2016 Seshia and Walrand DIS 10b CS 70 Dscrete Mathematcs ad Probablty Theory Fall 206 Sesha ad Walrad DIS 0b. Wll I Get My Package? Seaky delvery guy of some compay s out delverg packages to customers. Not oly does he had a radom package

More information

LECTURE - 4 SIMPLE RANDOM SAMPLING DR. SHALABH DEPARTMENT OF MATHEMATICS AND STATISTICS INDIAN INSTITUTE OF TECHNOLOGY KANPUR

LECTURE - 4 SIMPLE RANDOM SAMPLING DR. SHALABH DEPARTMENT OF MATHEMATICS AND STATISTICS INDIAN INSTITUTE OF TECHNOLOGY KANPUR amplg Theory MODULE II LECTURE - 4 IMPLE RADOM AMPLIG DR. HALABH DEPARTMET OF MATHEMATIC AD TATITIC IDIA ITITUTE OF TECHOLOGY KAPUR Estmato of populato mea ad populato varace Oe of the ma objectves after

More information

Simple Linear Regression

Simple Linear Regression Statstcal Methods I (EST 75) Page 139 Smple Lear Regresso Smple regresso applcatos are used to ft a model descrbg a lear relatoshp betwee two varables. The aspects of least squares regresso ad correlato

More information

Lecture Notes Types of economic variables

Lecture Notes Types of economic variables Lecture Notes 3 1. Types of ecoomc varables () Cotuous varable takes o a cotuum the sample space, such as all pots o a le or all real umbers Example: GDP, Polluto cocetrato, etc. () Dscrete varables fte

More information

STATISTICAL PROPERTIES OF LEAST SQUARES ESTIMATORS. x, where. = y - ˆ " 1

STATISTICAL PROPERTIES OF LEAST SQUARES ESTIMATORS. x, where. = y - ˆ  1 STATISTICAL PROPERTIES OF LEAST SQUARES ESTIMATORS Recall Assumpto E(Y x) η 0 + η x (lear codtoal mea fucto) Data (x, y ), (x 2, y 2 ),, (x, y ) Least squares estmator ˆ E (Y x) ˆ " 0 + ˆ " x, where ˆ

More information

ECONOMETRIC THEORY. MODULE VIII Lecture - 26 Heteroskedasticity

ECONOMETRIC THEORY. MODULE VIII Lecture - 26 Heteroskedasticity ECONOMETRIC THEORY MODULE VIII Lecture - 6 Heteroskedastcty Dr. Shalabh Departmet of Mathematcs ad Statstcs Ida Isttute of Techology Kapur . Breusch Paga test Ths test ca be appled whe the replcated data

More information

A Robust Total Least Mean Square Algorithm For Nonlinear Adaptive Filter

A Robust Total Least Mean Square Algorithm For Nonlinear Adaptive Filter A Robust otal east Mea Square Algorthm For Nolear Adaptve Flter Ruxua We School of Electroc ad Iformato Egeerg X'a Jaotog Uversty X'a 70049, P.R. Cha rxwe@chare.com Chogzhao Ha, azhe u School of Electroc

More information

Outline. Point Pattern Analysis Part I. Revisit IRP/CSR

Outline. Point Pattern Analysis Part I. Revisit IRP/CSR Pot Patter Aalyss Part I Outle Revst IRP/CSR, frst- ad secod order effects What s pot patter aalyss (PPA)? Desty-based pot patter measures Dstace-based pot patter measures Revst IRP/CSR Equal probablty:

More information

A New Family of Transformations for Lifetime Data

A New Family of Transformations for Lifetime Data Proceedgs of the World Cogress o Egeerg 4 Vol I, WCE 4, July - 4, 4, Lodo, U.K. A New Famly of Trasformatos for Lfetme Data Lakhaa Watthaacheewakul Abstract A famly of trasformatos s the oe of several

More information

hp calculators HP 30S Statistics Averages and Standard Deviations Average and Standard Deviation Practice Finding Averages and Standard Deviations

hp calculators HP 30S Statistics Averages and Standard Deviations Average and Standard Deviation Practice Finding Averages and Standard Deviations HP 30S Statstcs Averages ad Stadard Devatos Average ad Stadard Devato Practce Fdg Averages ad Stadard Devatos HP 30S Statstcs Averages ad Stadard Devatos Average ad stadard devato The HP 30S provdes several

More information

The number of observed cases The number of parameters. ith case of the dichotomous dependent variable. the ith case of the jth parameter

The number of observed cases The number of parameters. ith case of the dichotomous dependent variable. the ith case of the jth parameter LOGISTIC REGRESSION Notato Model Logstc regresso regresses a dchotomous depedet varable o a set of depedet varables. Several methods are mplemeted for selectg the depedet varables. The followg otato s

More information

CODING & MODULATION Prof. Ing. Anton Čižmár, PhD.

CODING & MODULATION Prof. Ing. Anton Čižmár, PhD. CODING & MODULATION Prof. Ig. Ato Čžmár, PhD. also from Dgtal Commucatos 4th Ed., J. G. Proaks, McGraw-Hll It. Ed. 00 CONTENT. PROBABILITY. STOCHASTIC PROCESSES Probablty ad Stochastc Processes The theory

More information

A Remark on the Uniform Convergence of Some Sequences of Functions

A Remark on the Uniform Convergence of Some Sequences of Functions Advaces Pure Mathematcs 05 5 57-533 Publshed Ole July 05 ScRes. http://www.scrp.org/joural/apm http://dx.do.org/0.436/apm.05.59048 A Remark o the Uform Covergece of Some Sequeces of Fuctos Guy Degla Isttut

More information

This lecture and the next. Why Sorting? Sorting Algorithms so far. Why Sorting? (2) Selection Sort. Heap Sort. Heapsort

This lecture and the next. Why Sorting? Sorting Algorithms so far. Why Sorting? (2) Selection Sort. Heap Sort. Heapsort Ths lecture ad the ext Heapsort Heap data structure ad prorty queue ADT Qucksort a popular algorthm, very fast o average Why Sortg? Whe doubt, sort oe of the prcples of algorthm desg. Sortg used as a subroute

More information

Ordinary Least Squares Regression. Simple Regression. Algebra and Assumptions.

Ordinary Least Squares Regression. Simple Regression. Algebra and Assumptions. Ordary Least Squares egresso. Smple egresso. Algebra ad Assumptos. I ths part of the course we are gog to study a techque for aalysg the lear relatoshp betwee two varables Y ad X. We have pars of observatos

More information

Non-uniform Turán-type problems

Non-uniform Turán-type problems Joural of Combatoral Theory, Seres A 111 2005 106 110 wwwelsevercomlocatecta No-uform Turá-type problems DhruvMubay 1, Y Zhao 2 Departmet of Mathematcs, Statstcs, ad Computer Scece, Uversty of Illos at

More information

3. Basic Concepts: Consequences and Properties

3. Basic Concepts: Consequences and Properties : 3. Basc Cocepts: Cosequeces ad Propertes Markku Jutt Overvew More advaced cosequeces ad propertes of the basc cocepts troduced the prevous lecture are derved. Source The materal s maly based o Sectos.6.8

More information

KLT Tracker. Alignment. 1. Detect Harris corners in the first frame. 2. For each Harris corner compute motion between consecutive frames

KLT Tracker. Alignment. 1. Detect Harris corners in the first frame. 2. For each Harris corner compute motion between consecutive frames KLT Tracker Tracker. Detect Harrs corers the frst frame 2. For each Harrs corer compute moto betwee cosecutve frames (Algmet). 3. Lk moto vectors successve frames to get a track 4. Itroduce ew Harrs pots

More information

Rademacher Complexity. Examples

Rademacher Complexity. Examples Algorthmc Foudatos of Learg Lecture 3 Rademacher Complexty. Examples Lecturer: Patrck Rebesch Verso: October 16th 018 3.1 Itroducto I the last lecture we troduced the oto of Rademacher complexty ad showed

More information

MEASURES OF DISPERSION

MEASURES OF DISPERSION MEASURES OF DISPERSION Measure of Cetral Tedecy: Measures of Cetral Tedecy ad Dsperso ) Mathematcal Average: a) Arthmetc mea (A.M.) b) Geometrc mea (G.M.) c) Harmoc mea (H.M.) ) Averages of Posto: a) Meda

More information

Bootstrap Method for Testing of Equality of Several Coefficients of Variation

Bootstrap Method for Testing of Equality of Several Coefficients of Variation Cloud Publcatos Iteratoal Joural of Advaced Mathematcs ad Statstcs Volume, pp. -6, Artcle ID Sc- Research Artcle Ope Access Bootstrap Method for Testg of Equalty of Several Coeffcets of Varato Dr. Navee

More information

Chapter 10 Two Stage Sampling (Subsampling)

Chapter 10 Two Stage Sampling (Subsampling) Chapter 0 To tage amplg (usamplg) I cluster samplg, all the elemets the selected clusters are surveyed oreover, the effcecy cluster samplg depeds o sze of the cluster As the sze creases, the effcecy decreases

More information

THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA

THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA THE ROYAL STATISTICAL SOCIETY 3 EXAMINATIONS SOLUTIONS GRADUATE DIPLOMA PAPER I STATISTICAL THEORY & METHODS The Socety provdes these solutos to assst caddates preparg for the examatos future years ad

More information

X X X E[ ] E X E X. is the ()m n where the ( i,)th. j element is the mean of the ( i,)th., then

X X X E[ ] E X E X. is the ()m n where the ( i,)th. j element is the mean of the ( i,)th., then Secto 5 Vectors of Radom Varables Whe workg wth several radom varables,,..., to arrage them vector form x, t s ofte coveet We ca the make use of matrx algebra to help us orgaze ad mapulate large umbers

More information

Kernel-based Methods and Support Vector Machines

Kernel-based Methods and Support Vector Machines Kerel-based Methods ad Support Vector Maches Larr Holder CptS 570 Mache Learg School of Electrcal Egeerg ad Computer Scece Washgto State Uverst Refereces Muller et al. A Itroducto to Kerel-Based Learg

More information

Chapter 4 (Part 1): Non-Parametric Classification (Sections ) Pattern Classification 4.3) Announcements

Chapter 4 (Part 1): Non-Parametric Classification (Sections ) Pattern Classification 4.3) Announcements Aoucemets No-Parametrc Desty Estmato Techques HW assged Most of ths lecture was o the blacboard. These sldes cover the same materal as preseted DHS Bometrcs CSE 90-a Lecture 7 CSE90a Fall 06 CSE90a Fall

More information

A Study of the Reproducibility of Measurements with HUR Leg Extension/Curl Research Line

A Study of the Reproducibility of Measurements with HUR Leg Extension/Curl Research Line HUR Techcal Report 000--9 verso.05 / Frak Borg (borgbros@ett.f) A Study of the Reproducblty of Measuremets wth HUR Leg Eteso/Curl Research Le A mportat property of measuremets s that the results should

More information

ESS Line Fitting

ESS Line Fitting ESS 5 014 17. Le Fttg A very commo problem data aalyss s lookg for relatoshpetwee dfferet parameters ad fttg les or surfaces to data. The smplest example s fttg a straght le ad we wll dscuss that here

More information

Chapter 4 Multiple Random Variables

Chapter 4 Multiple Random Variables Revew o BST 63: Statstcal Theory I Ku Zhag, /0/008 Revew for Chapter 4-5 Notes: Although all deftos ad theorems troduced our lectures ad ths ote are mportat ad you should be famlar wth, but I put those

More information

A Combination of Adaptive and Line Intercept Sampling Applicable in Agricultural and Environmental Studies

A Combination of Adaptive and Line Intercept Sampling Applicable in Agricultural and Environmental Studies ISSN 1684-8403 Joural of Statstcs Volume 15, 008, pp. 44-53 Abstract A Combato of Adaptve ad Le Itercept Samplg Applcable Agrcultural ad Evrometal Studes Azmer Kha 1 A adaptve procedure s descrbed for

More information

On Modified Interval Symmetric Single-Step Procedure ISS2-5D for the Simultaneous Inclusion of Polynomial Zeros

On Modified Interval Symmetric Single-Step Procedure ISS2-5D for the Simultaneous Inclusion of Polynomial Zeros It. Joural of Math. Aalyss, Vol. 7, 2013, o. 20, 983-988 HIKARI Ltd, www.m-hkar.com O Modfed Iterval Symmetrc Sgle-Step Procedure ISS2-5D for the Smultaeous Icluso of Polyomal Zeros 1 Nora Jamalud, 1 Masor

More information

Chapter 9 Jordan Block Matrices

Chapter 9 Jordan Block Matrices Chapter 9 Jorda Block atrces I ths chapter we wll solve the followg problem. Gve a lear operator T fd a bass R of F such that the matrx R (T) s as smple as possble. f course smple s a matter of taste.

More information

Lecture Notes 2. The ability to manipulate matrices is critical in economics.

Lecture Notes 2. The ability to manipulate matrices is critical in economics. Lecture Notes. Revew of Matrces he ablt to mapulate matrces s crtcal ecoomcs.. Matr a rectagular arra of umbers, parameters, or varables placed rows ad colums. Matrces are assocated wth lear equatos. lemets

More information

CS286.2 Lecture 4: Dinur s Proof of the PCP Theorem

CS286.2 Lecture 4: Dinur s Proof of the PCP Theorem CS86. Lecture 4: Dur s Proof of the PCP Theorem Scrbe: Thom Bohdaowcz Prevously, we have prove a weak verso of the PCP theorem: NP PCP 1,1/ (r = poly, q = O(1)). Wth ths result we have the desred costat

More information

STRONG CONSISTENCY FOR SIMPLE LINEAR EV MODEL WITH v/ -MIXING

STRONG CONSISTENCY FOR SIMPLE LINEAR EV MODEL WITH v/ -MIXING Joural of tatstcs: Advaces Theory ad Alcatos Volume 5, Number, 6, Pages 3- Avalable at htt://scetfcadvaces.co. DOI: htt://d.do.org/.864/jsata_7678 TRONG CONITENCY FOR IMPLE LINEAR EV MODEL WITH v/ -MIXING

More information

Research Article A New Iterative Method for Common Fixed Points of a Finite Family of Nonexpansive Mappings

Research Article A New Iterative Method for Common Fixed Points of a Finite Family of Nonexpansive Mappings Hdaw Publshg Corporato Iteratoal Joural of Mathematcs ad Mathematcal Sceces Volume 009, Artcle ID 391839, 9 pages do:10.1155/009/391839 Research Artcle A New Iteratve Method for Commo Fxed Pots of a Fte

More information

CHAPTER 4 RADICAL EXPRESSIONS

CHAPTER 4 RADICAL EXPRESSIONS 6 CHAPTER RADICAL EXPRESSIONS. The th Root of a Real Number A real umber a s called the th root of a real umber b f Thus, for example: s a square root of sce. s also a square root of sce ( ). s a cube

More information

1 Onto functions and bijections Applications to Counting

1 Onto functions and bijections Applications to Counting 1 Oto fuctos ad bectos Applcatos to Coutg Now we move o to a ew topc. Defto 1.1 (Surecto. A fucto f : A B s sad to be surectve or oto f for each b B there s some a A so that f(a B. What are examples of

More information

ANALYSIS ON THE NATURE OF THE BASIC EQUATIONS IN SYNERGETIC INTER-REPRESENTATION NETWORK

ANALYSIS ON THE NATURE OF THE BASIC EQUATIONS IN SYNERGETIC INTER-REPRESENTATION NETWORK Far East Joural of Appled Mathematcs Volume, Number, 2008, Pages Ths paper s avalable ole at http://www.pphm.com 2008 Pushpa Publshg House ANALYSIS ON THE NATURE OF THE ASI EQUATIONS IN SYNERGETI INTER-REPRESENTATION

More information

Dimensionality Reduction and Learning

Dimensionality Reduction and Learning CMSC 35900 (Sprg 009) Large Scale Learg Lecture: 3 Dmesoalty Reducto ad Learg Istructors: Sham Kakade ad Greg Shakharovch L Supervsed Methods ad Dmesoalty Reducto The theme of these two lectures s that

More information

Bounds on the expected entropy and KL-divergence of sampled multinomial distributions. Brandon C. Roy

Bounds on the expected entropy and KL-divergence of sampled multinomial distributions. Brandon C. Roy Bouds o the expected etropy ad KL-dvergece of sampled multomal dstrbutos Brado C. Roy bcroy@meda.mt.edu Orgal: May 18, 2011 Revsed: Jue 6, 2011 Abstract Iformato theoretc quattes calculated from a sampled

More information

Chapter 2 - Free Vibration of Multi-Degree-of-Freedom Systems - II

Chapter 2 - Free Vibration of Multi-Degree-of-Freedom Systems - II CEE49b Chapter - Free Vbrato of Mult-Degree-of-Freedom Systems - II We ca obta a approxmate soluto to the fudametal atural frequecy through a approxmate formula developed usg eergy prcples by Lord Raylegh

More information

13. Parametric and Non-Parametric Uncertainties, Radial Basis Functions and Neural Network Approximations

13. Parametric and Non-Parametric Uncertainties, Radial Basis Functions and Neural Network Approximations Lecture 7 3. Parametrc ad No-Parametrc Ucertates, Radal Bass Fuctos ad Neural Network Approxmatos he parameter estmato algorthms descrbed prevous sectos were based o the assumpto that the system ucertates

More information

MULTIDIMENSIONAL HETEROGENEOUS VARIABLE PREDICTION BASED ON EXPERTS STATEMENTS. Gennadiy Lbov, Maxim Gerasimov

MULTIDIMENSIONAL HETEROGENEOUS VARIABLE PREDICTION BASED ON EXPERTS STATEMENTS. Gennadiy Lbov, Maxim Gerasimov Iteratoal Boo Seres "Iformato Scece ad Computg" 97 MULTIIMNSIONAL HTROGNOUS VARIABL PRICTION BAS ON PRTS STATMNTS Geady Lbov Maxm Gerasmov Abstract: I the wors [ ] we proposed a approach of formg a cosesus

More information

Lecture 1 Review of Fundamental Statistical Concepts

Lecture 1 Review of Fundamental Statistical Concepts Lecture Revew of Fudametal Statstcal Cocepts Measures of Cetral Tedecy ad Dsperso A word about otato for ths class: Idvduals a populato are desgated, where the dex rages from to N, ad N s the total umber

More information

å 1 13 Practice Final Examination Solutions - = CS109 Dec 5, 2018

å 1 13 Practice Final Examination Solutions - = CS109 Dec 5, 2018 Chrs Pech Fal Practce CS09 Dec 5, 08 Practce Fal Examato Solutos. Aswer: 4/5 8/7. There are multle ways to obta ths aswer; here are two: The frst commo method s to sum over all ossbltes for the rak of

More information

Generating Multivariate Nonnormal Distribution Random Numbers Based on Copula Function

Generating Multivariate Nonnormal Distribution Random Numbers Based on Copula Function 7659, Eglad, UK Joural of Iformato ad Computg Scece Vol. 2, No. 3, 2007, pp. 9-96 Geeratg Multvarate Noormal Dstrbuto Radom Numbers Based o Copula Fucto Xaopg Hu +, Jam He ad Hogsheg Ly School of Ecoomcs

More information

UNIT 2 SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS

UNIT 2 SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS Numercal Computg -I UNIT SOLUTION OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS Structure Page Nos..0 Itroducto 6. Objectves 7. Ital Approxmato to a Root 7. Bsecto Method 8.. Error Aalyss 9.4 Regula Fals Method

More information

Extreme Value Theory: An Introduction

Extreme Value Theory: An Introduction (correcto d Extreme Value Theory: A Itroducto by Laures de Haa ad Aa Ferrera Wth ths webpage the authors ted to form the readers of errors or mstakes foud the book after publcato. We also gve extesos for

More information

Assignment 5/MATH 247/Winter Due: Friday, February 19 in class (!) (answers will be posted right after class)

Assignment 5/MATH 247/Winter Due: Friday, February 19 in class (!) (answers will be posted right after class) Assgmet 5/MATH 7/Wter 00 Due: Frday, February 9 class (!) (aswers wll be posted rght after class) As usual, there are peces of text, before the questos [], [], themselves. Recall: For the quadratc form

More information