Probabilistic image processing and Bayesian network


Computational Intelligence Seminar (8 November, 2005, Waseda University, Tokyo, Japan)

Kazuyuki Tanaka
Graduate School of Information Sciences, Tohoku University, Sendai, Japan
E-mail: kazu@smapp.is.tohoku.ac.jp, Webpage: http://www.smapp.is.tohoku.ac.jp/~kazu/

Abstract

The basic frameworks and practical schemes of Bayesian networks and belief propagation for probabilistic image processing are reviewed. Probabilistic image processing is formulated by means of Bayesian statistics and Markov random fields, and the resulting system can be regarded as a Bayesian network. In general, Bayesian networks entail serious computational complexity because the probabilistic models include a large number of random variables as nodes or pixels. Recently, many researchers working at the interface of the mathematical sciences and computer science have become interested in belief propagation, which is one of the powerful approximate methods for probabilistic inference. In the present tutorial lecture note, we briefly explain the formulation of probabilistic image processing and the theoretical structure of belief propagation. As an example of an application of probabilistic image processing, we review noise reduction by means of the Gaussian graphical model as a Markov random field.

1 Introduction

Probabilistic information processing based on the statistical sciences and statistical mechanics is applicable to computer science. Conversely, many problems in probabilistic information processing have created research ideas in statistical science and statistical mechanics. Many schemes in probabilistic information processing are based on Bayesian statistics and are reduced to probabilistic models. These probabilistic models can be represented by graphs consisting of nodes and directed line segments and are sometimes called graphical models. A system which consists of such a graphical model is called a Bayesian network.

Probabilistic image processing based on the Bayes formula is one of the useful applications of Bayesian networks. By using the Bayes formula and assuming an a priori probability for the original image, one constructs probabilistic image processing in the form of an a posteriori probability for the original image when the observed image is given [1]. Image processing by means of probabilistic models and Bayesian statistics is usually based on Markov random fields, in which the state of a pixel depends only on the states of its neighbouring pixels. Many kinds of Markov random field models applicable to practical image processing, such as image restoration, segmentation, edge detection, image compression and motion detection, have been proposed [2, 3, 4]. Compared with conventional techniques in image processing, probabilistic image processing is expected to give good performance even for large noise and to yield systems that are robust for various kinds of data. For most of the familiar Markov random fields, as well as Bayesian networks, it is hard to evaluate exactly the criteria by which the estimated image is obtained in the statistical framework, except in some special cases [5].

In artificial intelligence, belief propagation was proposed in the middle of the 1980s [6] as an efficient algorithm of statistical inference, which aims at implementing probability-based artificial intelligence capable of handling uncertainty. Belief propagation is proved to give exact results in the special case in which the underlying probabilistic model has no loops in its graphical representation. Although belief propagation is not guaranteed to give exact results for general graphical models with loops, it works reasonably well in various cases and, in some specific cases, it turns out to work excellently. In response to such empirically demonstrated good performance, a number of researchers in the mathematical sciences have attempted to achieve a theoretical understanding of the performance and the framework of the algorithm [7, 8]. It is worthy of notice that belief propagation has a link with statistical mechanics [9, 10, 11]: it is equivalent to the Bethe approximation and the cluster variation method, which are among the advanced mean-field methods in statistical mechanics. Moreover, some interpretations of belief propagation have also been given from the information-geometrical point of view [12, 13]. Belief propagation has been applied to many practical problems in computer vision [14, 15, 16, 17]. Theoretical study of the application of belief propagation to image processing has been carried out from the statistical-mechanical point of view [18, 19].

In this paper, the basic frameworks of the Bayesian network and belief propagation, together with their applications to probabilistic image processing, are reviewed. As examples of probabilistic image processing, we introduce noise reduction and segmentation, and the segmentation can be extended to motion detection. Section 2 describes the formulation of probabilistic image processing as a Bayesian network. In Section 3, we explain belief propagation, which reduces to the iterative numerical solution of simultaneous fixed-point equations with respect to messages. In Section 4, we explain probabilistic image restoration and noise reduction using the Gaussian graphical model. Section 5 provides concluding remarks.

2 Formulation of image processing by Bayesian network

We consider an image on a square lattice $\Omega \equiv \{1, 2, \ldots, L\}$. The states at pixel $i$ ($i \in \Omega$) in the original image and the observed image are regarded as random variables denoted by $A_i$ and $D_i$, respectively. In noise reduction, the observed image corresponds to a degraded image and the original image corresponds to the natural image before it is degraded by noise. In segmentation, a given natural image is regarded as the observed image and a segmented image with a few kinds of regions is regarded as the original image. The random fields of states in the original image and the observed image are represented by $\boldsymbol{A} = (A_1, A_2, \ldots, A_L)$ and $\boldsymbol{D} = (D_1, D_2, \ldots, D_L)$. The actual original image and observed image are denoted by $\boldsymbol{a} = (a_1, a_2, \ldots, a_L)$ and $\boldsymbol{d} = (d_1, d_2, \ldots, d_L)$, respectively.

The probability that the original image is $\boldsymbol{a}$, $P(\boldsymbol{A}=\boldsymbol{a} \mid \alpha)$, is called the a priori probability of the image. Here, $\alpha$ is not a set of random variables but a set of hyperparameters, which specify the function representing the a priori probability. By the Bayes formula, the a posteriori probability $P(\boldsymbol{A}=\boldsymbol{a} \mid \boldsymbol{D}=\boldsymbol{d})$, that the original image is $\boldsymbol{a}$ when the given observed image is $\boldsymbol{d}$, is expressed as

$$P(\boldsymbol{A}=\boldsymbol{a} \mid \boldsymbol{D}=\boldsymbol{d}, \alpha, \beta) = \frac{P(\boldsymbol{D}=\boldsymbol{d} \mid \boldsymbol{A}=\boldsymbol{a}, \beta)\, P(\boldsymbol{A}=\boldsymbol{a} \mid \alpha)}{\sum_{\boldsymbol{z}} P(\boldsymbol{D}=\boldsymbol{d} \mid \boldsymbol{A}=\boldsymbol{z}, \beta)\, P(\boldsymbol{A}=\boldsymbol{z} \mid \alpha)}, \qquad (1)$$

where the summation $\sum_{\boldsymbol{z}} \equiv \sum_{z_1}\sum_{z_2}\cdots\sum_{z_L}$ is taken over all possible configurations of images $\boldsymbol{z} = (z_1, z_2, \ldots, z_L)$. If each $z_i$ takes any real number in the interval $(-\infty, +\infty)$, the summation $\sum_{\boldsymbol{z}}[\cdots]$ should be replaced by the integral $\int [\cdots]\,\mathrm{d}\boldsymbol{z} \equiv \int_{-\infty}^{+\infty}\!\!\cdots\int_{-\infty}^{+\infty} [\cdots]\,\mathrm{d}z_1 \mathrm{d}z_2 \cdots \mathrm{d}z_L$, and the probability should be regarded as a probability density function. The probability $P(\boldsymbol{D}=\boldsymbol{d} \mid \boldsymbol{A}=\boldsymbol{a}, \beta)$ is the conditional probability that the observed image is $\boldsymbol{d}$ when the original image is $\boldsymbol{a}$; it describes the generating process of the data. Here $\beta$ is a set of hyperparameters in the conditional probability.

In Bayesian statistics, the estimate $\hat{a}_i$ of the state at each pixel $i$ in the original image is determined so as to maximize the posterior marginal probability

$$P(A_i = a_i \mid \boldsymbol{D}=\boldsymbol{d}, \alpha, \beta) = \sum_{\boldsymbol{z}} \delta_{a_i, z_i} P(\boldsymbol{A}=\boldsymbol{z} \mid \boldsymbol{D}=\boldsymbol{d}, \alpha, \beta). \qquad (2)$$

In the present framework, we have to determine not only the estimates $(\hat{a}_1, \hat{a}_2, \ldots, \hat{a}_L)$ but also the hyperparameters $\alpha$ and $\beta$. We apply maximum likelihood estimation to the estimation of the hyperparameters from an observed image, which corresponds to the data in the statistical sciences, as follows:

$$(\hat{\alpha}, \hat{\beta}) = \arg\max_{(\alpha, \beta)} P(\boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta), \qquad (3)$$

$$P(\boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta) = \sum_{\boldsymbol{z}} P(\boldsymbol{A}=\boldsymbol{z}, \boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta) = \sum_{\boldsymbol{z}} P(\boldsymbol{D}=\boldsymbol{d} \mid \boldsymbol{A}=\boldsymbol{z}, \beta)\, P(\boldsymbol{A}=\boldsymbol{z} \mid \alpha). \qquad (4)$$

In this framework, the probability $P(\boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta)$ is given by marginalizing the joint probability $P(\boldsymbol{A}=\boldsymbol{a}, \boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta)$ over all possible images $\boldsymbol{a}$, and it can be regarded as a likelihood for $\alpha$ and $\beta$ when the observed image $\boldsymbol{d}$ is given. In statistics, $P(\boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta)$ is referred to as the evidence or the marginal likelihood [21, 22]. In order to maximize the marginal likelihood, the expectation-maximization (EM) algorithm is often employed. In the EM algorithm, we introduce a Q-function defined by

$$Q(\alpha, \beta \mid \hat{\alpha}, \hat{\beta}, \boldsymbol{d}) \equiv \sum_{\boldsymbol{z}} P(\boldsymbol{A}=\boldsymbol{z} \mid \boldsymbol{D}=\boldsymbol{d}, \hat{\alpha}, \hat{\beta}) \ln P(\boldsymbol{A}=\boldsymbol{z}, \boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta). \qquad (5)$$

The extremum conditions of the marginal likelihood,

$$\frac{\partial}{\partial \alpha} P(\boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta) = 0, \qquad \frac{\partial}{\partial \beta} P(\boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta) = 0, \qquad (6)$$

are equivalent to the equalities

$$\Big[\frac{\partial}{\partial \alpha} Q(\alpha, \beta \mid \hat{\alpha}, \hat{\beta}, \boldsymbol{d})\Big]_{\alpha=\hat{\alpha},\,\beta=\hat{\beta}} = 0, \qquad \Big[\frac{\partial}{\partial \beta} Q(\alpha, \beta \mid \hat{\alpha}, \hat{\beta}, \boldsymbol{d})\Big]_{\alpha=\hat{\alpha},\,\beta=\hat{\beta}} = 0, \qquad (7)$$

respectively. As an algorithm to calculate the set of estimates $(\hat{\alpha}, \hat{\beta})$ which satisfies the extremum conditions (6), we iterate the following E- and M-steps until convergence:

E-Step: $Q(\alpha, \beta \mid \alpha(t), \beta(t), \boldsymbol{d}) \equiv \sum_{\boldsymbol{z}} P(\boldsymbol{A}=\boldsymbol{z} \mid \boldsymbol{D}=\boldsymbol{d}, \alpha(t), \beta(t)) \ln P(\boldsymbol{A}=\boldsymbol{z}, \boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta)$.

M-Step: $(\alpha(t+1), \beta(t+1)) \equiv \arg\max_{(\alpha, \beta)} Q(\alpha, \beta \mid \alpha(t), \beta(t), \boldsymbol{d})$.

Though the above scheme can give us a solution of the extremum conditions (6), we have to remark that it does not necessarily provide the global maximum of the marginal likelihood $P(\boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta)$.

We denote the set of links between pairs of pixels by $N$ and assign a function $\Phi_{ij}(a_i, a_j)$ to each link $\{i, j\}$ belonging to the set $N$. The a posteriori probability $P(\boldsymbol{A}=\boldsymbol{a} \mid \boldsymbol{D}=\boldsymbol{d}, \alpha, \beta)$ and the a priori probability $P(\boldsymbol{A}=\boldsymbol{a} \mid \alpha)$ in Bayesian image analysis are often expressed in terms of the following function $P(\boldsymbol{a})$:

$$P(\boldsymbol{a}) = \frac{\prod_{\{i,j\} \in N} \Phi_{ij}(a_i, a_j)}{\sum_{\boldsymbol{z}} \prod_{\{i,j\} \in N} \Phi_{ij}(z_i, z_j)}. \qquad (8)$$

In this case, the random field $\boldsymbol{A}$ is a set of random variables in which the state of each pixel $i$ depends only on the configuration of the pixels linked to it, $N_i \equiv \{ j \mid \{i,j\} \in N \}$, and it is called a Markov random field [1].

3 Belief propagation

Except in some special cases, it is hard to carry out the marginalizations in Eqs. (2) and (4) exactly, and we have to employ an approximate algorithm. In the present section, we use belief propagation as one such approximate algorithm to calculate approximate values of statistical quantities numerically. For a probability distribution given in the form of Eq. (8), belief propagation assumes the following approximate forms for the one-body and two-body marginal probabilities:

$$P_i(a_i) \equiv \sum_{\boldsymbol{z}} \delta_{z_i, a_i} P(\boldsymbol{z}) \simeq \frac{\prod_{k \in N_i} M_{k \to i}(a_i)}{\sum_{z_i} \prod_{k \in N_i} M_{k \to i}(z_i)}, \qquad (9)$$

$$P_{ij}(a_i, a_j) \equiv \sum_{\boldsymbol{z}} \delta_{z_i, a_i} \delta_{z_j, a_j} P(\boldsymbol{z}) \simeq \frac{\Big(\prod_{k \in N_i \setminus \{j\}} M_{k \to i}(a_i)\Big)\, \Phi_{ij}(a_i, a_j)\, \Big(\prod_{l \in N_j \setminus \{i\}} M_{l \to j}(a_j)\Big)}{\sum_{z_i} \sum_{z_j} \Big(\prod_{k \in N_i \setminus \{j\}} M_{k \to i}(z_i)\Big)\, \Phi_{ij}(z_i, z_j)\, \Big(\prod_{l \in N_j \setminus \{i\}} M_{l \to j}(z_j)\Big)}. \qquad (10)$$

Here the notation $N_i \equiv \{ j \mid \{i,j\} \in N \}$ represents the set of all nearest-neighbour pixels of pixel $i$. The quantity $M_{j \to i}(a_i)$ in Eqs. (9) and (10) is called a message propagated from pixel $j$ to pixel $i$. Both quantities $M_{j \to i}(a_i)$ and $M_{i \to j}(a_j)$ are assigned to each link and are determined so as to satisfy the following simultaneous fixed-point equations:

$$M_{j \to i}(a_i) = \frac{\sum_{z_j} \Phi_{ij}(a_i, z_j) \prod_{k \in N_j \setminus \{i\}} M_{k \to j}(z_j)}{\sum_{z_i} \sum_{z_j} \Phi_{ij}(z_i, z_j) \prod_{k \in N_j \setminus \{i\}} M_{k \to j}(z_j)} \qquad (i \in \Omega,\ j \in N_i). \qquad (11)$$
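To make Eqs. (9)-(11) concrete, here is a minimal sketch that iterates the message update (11) for a small binary model of the form (8) and then evaluates the belief of Eq. (9). The lattice size, the particular potential $\Phi_{ij}$, and all variable names are illustrative assumptions of ours, not part of the original formulation.

```python
# Minimal loopy belief propagation for a model of the form (8):
# M[(j, i)] stands for the message M_{j->i}; belief(i) evaluates Eq. (9).
import numpy as np

L = 4                                    # 4x4 toy lattice
K = 2                                    # binary state at each pixel
rng = np.random.default_rng(0)

def neighbours(i):
    y, x = divmod(i, L)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= y + dy < L and 0 <= x + dx < L:
            yield (y + dy) * L + (x + dx)

# One positive potential Phi_ij per link: a smoothness term plus random
# per-pixel biases (any positive Phi_ij is admissible in Eq. (8)).
h = rng.normal(0.0, 0.5, size=(L * L, K))
Phi = {(i, j): np.exp(0.8 * np.eye(K) + h[i][:, None] / 4 + h[j][None, :] / 4)
       for i in range(L * L) for j in neighbours(i) if i < j}

def link(i, j):                          # Phi_ij as a (z_i, z_j) matrix
    return Phi[(i, j)] if i < j else Phi[(j, i)].T

M = {(j, i): np.full(K, 1.0 / K)         # messages, initially uniform
     for i in range(L * L) for j in neighbours(i)}

for sweep in range(200):                 # iterate Eq. (11) to a fixed point
    diff = 0.0
    for (j, i) in list(M):
        prod = np.ones(K)
        for k in neighbours(j):          # product over k in N_j \ {i}
            if k != i:
                prod *= M[(k, j)]
        new = link(j, i).T @ prod        # sum over z_j of Phi(z_j, a_i) * prod
        new /= new.sum()                 # normalize as in Eq. (11)
        diff += np.abs(new - M[(j, i)]).sum()
        M[(j, i)] = new
    if diff < 1e-10:
        break

def belief(i):                           # one-body marginal, Eq. (9)
    b = np.prod([M[(k, i)] for k in neighbours(i)], axis=0)
    return b / b.sum()

print("approximate P_0(a_0):", belief(0))
```

On a loop-free graph this iteration converges to the exact marginal probabilities; on the square lattice it yields the Bethe approximation discussed below.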

Figure 1: Graphical representations of the belief propagation: (a) Eq. (9); (b) Eq. (10); (c) Eq. (11).

The graphical representations of Eqs. (9)-(11) are shown in Figure 1. The simultaneous fixed-point equations (11) are derived by substituting Eqs. (9) and (10) into the reducibility conditions

$$P_i(a_i) = \sum_{z_j} P_{ij}(a_i, z_j) \qquad (i \in \Omega,\ j \in N_i). \qquad (12)$$

We can obtain numerical solutions of the simultaneous fixed-point equations (11) by using the iteration method. By substituting the numerical solutions into Eqs. (9) and (10), we obtain the marginal probabilities $P_i(a_i)$ and $P_{ij}(a_i, a_j)$.

The approximate expressions of the marginal probabilities in Eqs. (9) and (10) can also be obtained by means of the Bethe approximation in statistical mechanics. In the Bethe approximation, we introduce an approximate free energy for the probabilistic model in Eq. (8), defined by

$$\mathcal{F}_{\mathrm{Bethe}}[\{\rho_i, \rho_{ij} \mid i \in \Omega,\ \{i,j\} \in N\}] \equiv \sum_{i \in \Omega} \mathcal{F}_i[\rho_i] + \sum_{\{i,j\} \in N} \big( \mathcal{F}_{ij}[\rho_{ij}] - \mathcal{F}_i[\rho_i] - \mathcal{F}_j[\rho_j] \big), \qquad (13)$$

where

$$\mathcal{F}_i[\rho_i] \equiv \sum_{z_i} \rho_i(z_i) \ln \rho_i(z_i), \qquad \mathcal{F}_{ij}[\rho_{ij}] \equiv \sum_{z_i} \sum_{z_j} \rho_{ij}(z_i, z_j) \ln \Big( \frac{\rho_{ij}(z_i, z_j)}{\Phi_{ij}(z_i, z_j)} \Big). \qquad (14)$$

Here $\mathcal{F}_{\mathrm{Bethe}}[\{\rho_i, \rho_{ij} \mid i \in \Omega,\ \{i,j\} \in N\}]$ is often referred to as the Bethe free energy, and it is regarded as a good approximation to the true free energy of the probabilistic model in Eq. (8), defined in statistical mechanics by $\mathcal{F} \equiv -\ln \big( \sum_{\boldsymbol{z}} \prod_{\{i,j\} \in N} \Phi_{ij}(z_i, z_j) \big)$. The simultaneous fixed-point equations (9)-(11) are equivalent to the extremum conditions of $\mathcal{F}_{\mathrm{Bethe}}$ with respect to the marginal probability distributions $\{\rho_i, \rho_{ij} \mid i \in \Omega,\ \{i,j\} \in N\}$ under the normalization conditions, $\sum_{z_i} \rho_i(z_i) = 1$ ($i \in \Omega$) and $\sum_{z_i}\sum_{z_j} \rho_{ij}(z_i, z_j) = 1$ ($\{i,j\} \in N$), and the reducibility conditions (12). However, it is known that the Bethe free energy does not provide any bound for the true free energy $\mathcal{F}$, while the mean-field free energy is a bound for the true free energy [10]. Furthermore, in some cases the solution of the simultaneous fixed-point equations (9)-(11) corresponds not to a local minimum but to a saddle point of the Bethe free energy [20].
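The relation between Eqs. (13)-(14) and the true free energy can be checked numerically. The following sketch evaluates $\mathcal{F}_{\mathrm{Bethe}}$ at the exact marginals of a small chain model (a loop-free graph, so the two free energies must coincide); the model size and potentials are our own test choices.

```python
# Evaluate the Bethe free energy of Eqs. (13)-(14) at the exact marginals
# of a 4-node chain; on a loop-free graph it equals the true F = -ln Z.
import itertools

import numpy as np

K, n = 2, 4
links = [(0, 1), (1, 2), (2, 3)]                   # a loop-free chain
rng = np.random.default_rng(1)
Phi = {e: rng.uniform(0.5, 2.0, size=(K, K)) for e in links}

def weight(z):                                     # unnormalized weight, Eq. (8)
    w = 1.0
    for (i, j) in links:
        w *= Phi[(i, j)][z[i], z[j]]
    return w

states = list(itertools.product(range(K), repeat=n))
Z = sum(weight(z) for z in states)

rho1 = np.zeros((n, K))                            # exact marginals rho_i
rho2 = {e: np.zeros((K, K)) for e in links}        # exact marginals rho_ij
for z in states:
    p = weight(z) / Z
    for i in range(n):
        rho1[i, z[i]] += p
    for (i, j) in links:
        rho2[(i, j)][z[i], z[j]] += p

F1 = [(rho1[i] * np.log(rho1[i])).sum() for i in range(n)]           # Eq. (14)
F2 = {e: (rho2[e] * np.log(rho2[e] / Phi[e])).sum() for e in links}  # Eq. (14)
F_bethe = F1 and sum(F1) + sum(F2[(i, j)] - F1[i] - F1[j]            # Eq. (13)
                              for (i, j) in links)

print("true F  :", -np.log(Z))
print("Bethe F :", F_bethe)                        # identical on a tree
```

On a graph with loops, such as the square lattice, the two values generally differ; that difference is precisely the error of the Bethe approximation.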

4 Noise Reduction by Gaussian Graphical Model

As an example of the applications of probabilistic image processing, we show noise reduction from a degraded image which is corrupted by additive white Gaussian noise with mean 0 and variance $\sigma^2$. In this case, the random fields of the original image and of the degraded image are denoted by $\boldsymbol{A}$ and $\boldsymbol{D}$, respectively. The degradation process is expressed in terms of the following conditional probability:

$$P(\boldsymbol{D}=\boldsymbol{d} \mid \boldsymbol{A}=\boldsymbol{a}, \beta = \sigma^{-2}) = \prod_{i \in \Omega} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\Big( -\frac{1}{2\sigma^2}(a_i - d_i)^2 \Big), \qquad (15)$$

and the a priori probability is

$$P(\boldsymbol{A}=\boldsymbol{a} \mid \alpha) = \frac{\prod_{\{i,j\} \in N} \exp\big( -\frac{1}{2}\alpha (a_i - a_j)^2 \big)}{\int \prod_{\{i,j\} \in N} \exp\big( -\frac{1}{2}\alpha (z_i - z_j)^2 \big)\, \mathrm{d}\boldsymbol{z}} \qquad (\alpha > 0). \qquad (16)$$

By substituting Eqs. (15) and (16) into the Bayes formula, the a posteriori probability is expressed as follows:

$$P(\boldsymbol{A}=\boldsymbol{a} \mid \boldsymbol{D}=\boldsymbol{d}, \alpha, \beta = \sigma^{-2}) = \frac{\Big( \prod_{i \in \Omega} \exp\big( -\frac{1}{2\sigma^2}(a_i - d_i)^2 \big) \Big) \Big( \prod_{\{i,j\} \in N} \exp\big( -\frac{1}{2}\alpha (a_i - a_j)^2 \big) \Big)}{\int \Big( \prod_{i \in \Omega} \exp\big( -\frac{1}{2\sigma^2}(z_i - d_i)^2 \big) \Big) \Big( \prod_{\{i,j\} \in N} \exp\big( -\frac{1}{2}\alpha (z_i - z_j)^2 \big) \Big)\, \mathrm{d}\boldsymbol{z}}. \qquad (17)$$

By setting

$$\Phi_{ij}(a_i, a_j) \equiv \exp\Big( -\frac{1}{8\sigma^2}(a_i - d_i)^2 - \frac{1}{8\sigma^2}(a_j - d_j)^2 - \frac{1}{2}\alpha (a_i - a_j)^2 \Big), \qquad (18)$$

in which the single-pixel term of Eq. (17) is split equally over the four links containing each pixel of the square lattice, Eq. (17) can be reduced to the form of Eq. (8):

$$P(\boldsymbol{A}=\boldsymbol{a} \mid \boldsymbol{D}=\boldsymbol{d}, \alpha, \beta = \sigma^{-2}) = \frac{\prod_{\{i,j\} \in N} \Phi_{ij}(a_i, a_j)}{\int \prod_{\{i,j\} \in N} \Phi_{ij}(z_i, z_j)\, \mathrm{d}\boldsymbol{z}}. \qquad (19)$$

By applying belief propagation to these expressions of the probabilities, we can achieve the maximization of the marginal likelihood $P(\boldsymbol{D}=\boldsymbol{d} \mid \alpha, \beta = \sigma^{-2}) \equiv \int P(\boldsymbol{D}=\boldsymbol{d} \mid \boldsymbol{A}=\boldsymbol{z}, \beta = \sigma^{-2})\, P(\boldsymbol{A}=\boldsymbol{z} \mid \alpha)\, \mathrm{d}\boldsymbol{z}$ with respect to the hyperparameters $(\alpha, \sigma)$, and we can calculate the posterior marginal probability $P(A_i = a_i \mid \boldsymbol{D}=\boldsymbol{d}, \alpha, \beta = \sigma^{-2})$ [19].

Now we assume that each message $M_{j \to i}(\xi)$ can be approximately expressed in the Gaussian form

$$M_{j \to i}(\xi) \simeq \sqrt{\frac{\lambda_{j \to i}}{2\pi}} \exp\Big( -\frac{\lambda_{j \to i}}{2} (\xi - \mu_{j \to i})^2 \Big). \qquad (20)$$

The simultaneous fixed-point equations (11) can then be reduced to the following fixed-point equations for $\{\lambda_{i \to j}, \mu_{i \to j} \mid i \in \Omega,\ j \in N_i\}$:

$$\frac{1}{\lambda_{i \to j}} = \frac{1}{\alpha} + \frac{1}{\beta + \sum_{k \in N_i \setminus \{j\}} \lambda_{k \to i}}, \qquad \mu_{i \to j} = \frac{\beta d_i + \sum_{k \in N_i \setminus \{j\}} \mu_{k \to i}\, \lambda_{k \to i}}{\beta + \sum_{k \in N_i \setminus \{j\}} \lambda_{k \to i}} \qquad (i \in \Omega,\ j \in N_i). \qquad (21)$$

If we substitute Eq. (20) into Eqs. (9) and (10), the one-body and two-body marginal probability densities $\rho_i(a_i)$ and $\rho_{ij}(a_i, a_j)$ are approximately obtained.
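Since the a posteriori probability (17) is a multi-dimensional Gaussian distribution, its exact mean can also be obtained by one linear solve with the precision matrix $\beta I + \alpha L$, where $L$ is the graph Laplacian of the lattice; up to boundary conditions, this corresponds to the exact calculation via the multi-dimensional Gaussian integral formula mentioned later in this section. The following sketch illustrates it on a small lattice; the grid size, test image and hyperparameter values are our own choices.

```python
# Exact posterior mean of Eq. (17): solve (beta I + alpha Lap) a_hat = beta d,
# with beta = 1/sigma^2 and Lap the lattice graph Laplacian.
import numpy as np

side, alpha, sigma = 8, 0.5, 0.3
N = side * side
rng = np.random.default_rng(2)

a_true = np.outer(np.linspace(0.0, 1.0, side), np.ones(side)).ravel()
d = a_true + rng.normal(0.0, sigma, size=N)     # degradation of Eq. (15)

Lap = np.zeros((N, N))                          # lattice graph Laplacian
for i in range(N):
    y, x = divmod(i, side)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= y + dy < side and 0 <= x + dx < side:
            Lap[i, i] += 1.0
            Lap[i, (y + dy) * side + (x + dx)] -= 1.0

beta = 1.0 / sigma**2
a_hat = np.linalg.solve(beta * np.eye(N) + alpha * Lap, beta * d)

print("MSE of degraded image:", np.mean((d - a_true) ** 2))
print("MSE of restored image:", np.mean((a_hat - a_true) ** 2))
```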

The simultaneous fixed-point equations (21) are solved by the following iterative algorithm:

Algorithm LBP[$\boldsymbol{d}, \alpha, \beta$]

Step 1: Set $r \leftarrow 0$ and set initial values $\{a_{i \to j}(0), b_{i \to j}(0)\}$.

Step 2: Update $r \leftarrow r + 1$ and

$$a_{i \to j}(r) \leftarrow \Big( \frac{1}{\alpha} + \frac{1}{\beta + \sum_{k \in N_i \setminus \{j\}} a_{k \to i}(r-1)} \Big)^{-1} \qquad (i \in \Omega,\ j \in N_i), \qquad (22)$$

$$b_{i \to j}(r) \leftarrow \frac{\beta d_i + \sum_{k \in N_i \setminus \{j\}} b_{k \to i}(r-1)\, a_{k \to i}(r-1)}{\beta + \sum_{k \in N_i \setminus \{j\}} a_{k \to i}(r-1)} \qquad (i \in \Omega,\ j \in N_i). \qquad (23)$$

Step 3: Update $R \leftarrow r$, $\lambda_{i \to j} \leftarrow a_{i \to j}(r)$ and $\mu_{i \to j} \leftarrow b_{i \to j}(r)$ ($i \in \Omega,\ j \in N_i$). Go to Step 4 if, for a prespecified $\varepsilon$,

$$\sum_{i \in \Omega} \sum_{j \in N_i} \Big( \big| a_{i \to j}(r) - a_{i \to j}(r-1) \big| + \big| b_{i \to j}(r) - b_{i \to j}(r-1) \big| \Big) < \varepsilon, \qquad (24)$$

and go to Step 2 otherwise.

Step 4: Substitute $\{\lambda_{i \to j}, \mu_{i \to j} \mid i \in \Omega,\ j \in N_i\}$ into Eq. (20), and calculate $\rho_i(\xi \mid \boldsymbol{d}, \alpha, \beta)$ and $\rho_{ij}(\xi, \xi' \mid \boldsymbol{d}, \alpha, \beta)$ by using Eqs. (9) and (10). Stop after $\rho_i(\xi \mid \boldsymbol{d}, \alpha, \beta)$ and $\rho_{ij}(\xi, \xi' \mid \boldsymbol{d}, \alpha, \beta)$ are set as the outputs of the present algorithm.

It is usually adequate to set $\varepsilon = 10^{-6}$. In the denominators of Eqs. (22) and (23), the summations $\sum_{k \in N_i \setminus \{j\}} a_{k \to i}(r-1)$ and $\sum_{k \in N_i \setminus \{j\}} b_{k \to i}(r-1)\, a_{k \to i}(r-1)$ can be evaluated in $O(1)$ time per pair of pixels $i$ and $j$, because the number of elements in the set $N_i \setminus \{j\}$ is equal to 3 per pair of pixels. Hence the iterative algorithm for solving the simultaneous fixed-point equations (21) requires a total of $O(|\Omega|)$ computations per update.
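Below is a minimal sketch of Algorithm LBP[$\boldsymbol{d}, \alpha, \beta$]: the dictionaries `a` and `b` hold the message precisions and means updated by Eqs. (22)-(24). The grid size and parameter values are illustrative, and the final posterior-mean line reads the belief in the standard node-based Gaussian convention, which is our own addition rather than a formula from the text.

```python
# Sketch of Algorithm LBP[d, alpha, beta]: a[(i, j)] and b[(i, j)] are the
# message precisions lambda_{i->j} and means mu_{i->j} of Eqs. (22)-(23),
# iterated until the convergence test (24) is satisfied.
import numpy as np

def neighbours(i, side):
    y, x = divmod(i, side)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= y + dy < side and 0 <= x + dx < side:
            yield (y + dy) * side + (x + dx)

def lbp(d, side, alpha, beta, eps=1e-6, max_sweeps=10000):
    keys = [(i, j) for i in range(side * side) for j in neighbours(i, side)]
    a = {k: alpha for k in keys}                  # lambda_{i->j}, arbitrary init
    b = {k: d[k[0]] for k in keys}                # mu_{i->j}, arbitrary init
    for _ in range(max_sweeps):
        diff = 0.0
        for (i, j) in keys:
            s = beta + sum(a[(k, i)] for k in neighbours(i, side) if k != j)
            m = beta * d[i] + sum(b[(k, i)] * a[(k, i)]
                                  for k in neighbours(i, side) if k != j)
            a_new = 1.0 / (1.0 / alpha + 1.0 / s)          # Eq. (22)
            b_new = m / s                                   # Eq. (23)
            diff += abs(a_new - a[(i, j)]) + abs(b_new - b[(i, j)])
            a[(i, j)], b[(i, j)] = a_new, b_new
        if diff < eps:                                      # Eq. (24)
            break
    # approximate posterior mean at each pixel, read from the converged
    # messages in the usual node-based Gaussian convention (our assumption):
    post_mean = np.array([
        (beta * d[i] + sum(b[(k, i)] * a[(k, i)] for k in neighbours(i, side)))
        / (beta + sum(a[(k, i)] for k in neighbours(i, side)))
        for i in range(side * side)])
    return a, b, post_mean

side, sigma = 8, 0.3
rng = np.random.default_rng(3)
a_true = np.outer(np.linspace(0.0, 1.0, side), np.ones(side)).ravel()
d = a_true + rng.normal(0.0, sigma, size=side * side)
_, _, restored = lbp(d, side, alpha=0.5, beta=1.0 / sigma**2)
print("MSE of restored image:", np.mean((restored - a_true) ** 2))
```

Consistent with the remark in Section 5 below, the posterior means obtained from a converged Gaussian loopy belief propagation agree with the exact posterior means, while the variances are in general approximate [7].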

For fixed values of $\alpha$ and $\sigma$, the extremum conditions of $Q(\alpha', \sigma' \mid \alpha, \sigma, \boldsymbol{d})$ with respect to $\sigma'$ and $\alpha'$ are reduced to the following equations:

$$\sum_{i \in \Omega} \int (\xi - d_i)^2\, \rho_i(\xi \mid \boldsymbol{d}, \alpha, \sigma^{-2})\, \mathrm{d}\xi = |\Omega|\, \sigma^2, \qquad (25)$$

$$\sum_{\{i,j\} \in N} \iint (\xi - \xi')^2\, \rho_{ij}(\xi, \xi' \mid \boldsymbol{d}, \alpha, \sigma^{-2})\, \mathrm{d}\xi\, \mathrm{d}\xi' = \sum_{\{i,j\} \in N} \iint (\xi - \xi')^2\, \rho_{ij}(\xi, \xi' \mid \boldsymbol{d}, \alpha, 0)\, \mathrm{d}\xi\, \mathrm{d}\xi', \qquad (26)$$

respectively. The marginal probability density functions $\rho_i(\xi \mid \boldsymbol{d}, \alpha, \sigma^{-2})$, $\rho_{ij}(\xi, \xi' \mid \boldsymbol{d}, \alpha, \sigma^{-2})$ and $\rho_{ij}(\xi, \xi' \mid \boldsymbol{d}, \alpha, 0)$ are approximately obtained as outputs of the loopy belief propagation algorithms LBP[$\boldsymbol{d}, \alpha, \sigma^{-2}$] and LBP[$\boldsymbol{d}, \alpha, 0$]. Therefore, we can realize the EM algorithm for the maximization of the marginal likelihood in Eq. (3) by using the loopy belief propagation as follows:

EM Algorithm in Loopy Belief Propagation

Step 1: Set $(\alpha(0), \sigma(0))$ and $t \leftarrow 0$.

Step 2: Run the algorithm LBP[$\boldsymbol{d}, \alpha(t), \sigma(t)^{-2}$] and update $\sigma(t)$ and $A$ as follows:

$$\sigma(t+1) \leftarrow \sqrt{ \frac{1}{|\Omega|} \sum_{i \in \Omega} \int (\xi - d_i)^2\, \rho_i(\xi \mid \boldsymbol{d}, \alpha(t), \sigma(t)^{-2})\, \mathrm{d}\xi }, \qquad (27)$$

$$A \leftarrow \sum_{\{i,j\} \in N} \iint (\xi - \xi')^2\, \rho_{ij}(\xi, \xi' \mid \boldsymbol{d}, \alpha(t), \sigma(t)^{-2})\, \mathrm{d}\xi\, \mathrm{d}\xi'. \qquad (28)$$

Step 3: Run the algorithm LBP[$\boldsymbol{d}, \alpha, 0$] for various positive values of $\alpha$, and set $\alpha(t+1)$ to the value of $\alpha$ which satisfies the equation

$$\sum_{\{i,j\} \in N} \iint (\xi - \xi')^2\, \rho_{ij}(\xi, \xi' \mid \boldsymbol{d}, \alpha, 0)\, \mathrm{d}\xi\, \mathrm{d}\xi' = A; \qquad (29)$$

then update $t \leftarrow t + 1$.

Step 4: Update

$$(\hat{\alpha}, \hat{\sigma}) \leftarrow (\alpha(t), \sigma(t)). \qquad (30)$$

Stop if the values of $\hat{\alpha}$ and $\hat{\sigma}$ converge, and return to Step 2 otherwise.
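The structure of the EM iteration can be sketched compactly. For brevity, the sketch below evaluates the expectations in Eqs. (27)-(29) exactly from the Gaussian posterior (the exact-calculation variant whose convergence is compared with the loopy belief propagation in Figure 3) instead of from the outputs of LBP[$\boldsymbol{d}, \alpha, \beta$]; moreover, for the prior (16) the left-hand side of Eq. (29) equals $(|\Omega|-1)/\alpha$ under the usual zero-mode convention, which lets the search over $\alpha$ be replaced by a closed form. Both simplifications are ours.

```python
# Sketch of the EM iteration of Eqs. (27)-(29), with expectations taken
# exactly from the Gaussian posterior, and Step 3 replaced by the closed
# form alpha = (N - 1) / A valid for the prior (16) (our shortcuts).
import numpy as np

def laplacian(side):
    N = side * side
    Lap = np.zeros((N, N))
    for i in range(N):
        y, x = divmod(i, side)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            if 0 <= y + dy < side and 0 <= x + dx < side:
                Lap[i, i] += 1.0
                Lap[i, (y + dy) * side + (x + dx)] -= 1.0
    return Lap

def em(d, side, alpha=1.0, sigma=1.0, iters=30):
    N = side * side
    Lap = laplacian(side)
    links = [(i, j) for i in range(N) for j in range(N)
             if i < j and Lap[i, j] != 0.0]
    for _ in range(iters):
        beta = 1.0 / sigma**2
        C = np.linalg.inv(beta * np.eye(N) + alpha * Lap)   # posterior cov.
        m = beta * (C @ d)                                   # posterior mean
        # Eq. (27): integral of (xi - d_i)^2 rho_i is C_ii + (m_i - d_i)^2
        sigma = np.sqrt(np.mean(np.diag(C) + (m - d) ** 2))
        # Eq. (28): A = sum over links of E[(xi - xi')^2] under the posterior
        A = sum(C[i, i] + C[j, j] - 2.0 * C[i, j] + (m[i] - m[j]) ** 2
                for (i, j) in links)
        # Eq. (29): the same sum under the prior (16) is (N - 1) / alpha
        alpha = (N - 1) / A
    return alpha, sigma

side, sigma_true = 8, 0.3
rng = np.random.default_rng(4)
a_true = np.outer(np.linspace(0.0, 1.0, side), np.ones(side)).ravel()
d = a_true + rng.normal(0.0, sigma_true, size=side * side)
alpha_hat, sigma_hat = em(d, side)
print("alpha_hat =", alpha_hat, "sigma_hat =", sigma_hat)
```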

Because both probabilities in Eqs. (16) and (17) are multi-dimensional Gaussian distributions, we can calculate the relevant statistical quantities and the marginal likelihood exactly by means of the multi-dimensional Gaussian integral formula and the discrete Fourier transformation [22]. Hence we can check the accuracy of the belief propagation for probabilistic image processing based on Bayesian statistics and maximum likelihood estimation.

We show in Figure 2 and Table 1 the results of numerical experiments on noise reduction from a degraded image generated by additive white Gaussian noise with mean 0. The estimates of $\alpha$ and $\sigma$ given in Table 1 are obtained by using the EM algorithm. The convergence process of $(\alpha(t), \sigma(t))$ in the EM algorithm is shown in Figure 3. The open and solid circles correspond to the results of the EM algorithm with the loopy belief propagation and with the exact calculation, respectively.

Figure 2: Noise reduction by using the Gaussian graphical model and the belief propagation. (a) Original image. (b) Degraded image produced by additive white Gaussian noise with mean 0. (c) Restored image by the loopy belief propagation. (d) Restored image by the generalized belief propagation. (e) Restored image by the exact calculation by means of the multi-dimensional Gaussian integral formula and the discrete Fourier transformation.

Table 1: Values of the estimates $(\hat{\alpha}, \hat{\sigma})$, the mean-square error between the original image $\boldsymbol{a}$ and the restored image $\hat{\boldsymbol{a}}$, $\mathrm{MSE}(\hat{\boldsymbol{a}}, \boldsymbol{a}) \equiv \frac{1}{L}\, \| \hat{\boldsymbol{a}} - \boldsymbol{a} \|^2$, and the improvement of the signal-to-noise ratio, $\Delta\mathrm{SNR} \equiv 10 \log_{10} \big( \| \boldsymbol{a} - \boldsymbol{d} \|^2 / \| \boldsymbol{a} - \hat{\boldsymbol{a}} \|^2 \big)$ (dB), in the noise reduction by means of the Gaussian graphical model given in Figure 2.

              $\hat{\alpha}$   $\hat{\sigma}$   $\mathrm{MSE}(\hat{\boldsymbol{a}}, \boldsymbol{a})$   $\Delta\mathrm{SNR}$ (dB)
Figure 2(c)
Figure 2(d)
Figure 2(e)

Figure 3: Convergence of $(\alpha(t), \sigma(t))$ (with $\sigma(t) \equiv 1/\sqrt{\beta(t)}$) in the EM algorithm. The open circles and the solid circles correspond to the results of the EM algorithm with the loopy belief propagation and with the exact calculation, respectively.

The loopy belief propagation can be extended to the generalized belief propagation by means of the cluster variation method, which is one of the familiar advanced mean-field methods in statistical mechanics [11]. The estimates of the hyperparameters $\alpha$ and $\sigma$ and the restored image obtained by replacing the loopy belief propagation with the generalized belief propagation in the scheme of the present section are also shown in Table 1 and Figure 2. The results are closer to the exact results than those of the loopy belief propagation. The details are given in Ref. [27].

5 Concluding Remarks

In the present tutorial lecture note, we have briefly explained the formulation of probabilistic image processing based on Markov random fields, Bayesian networks and the loopy belief propagation. As an example of the applications of probabilistic image processing, we have reviewed noise reduction by means of the Gaussian graphical model as a Markov random field. The accuracy of the loopy belief propagation has been investigated for the Gaussian graphical model. As one of the important results of this investigation, it is well known that the loopy belief propagation gives the exact results for the averages in the Gaussian graphical model, though the variances and covariances may be approximate [7]. This fact is valid not only for the loopy belief propagation but also for the generalized belief propagation applied to the Gaussian graphical model [27].

References

[1] D. Geman, Random Fields and Inverse Problems in Imaging, Lecture Notes in Mathematics, no. 1427, Springer-Verlag, 1990.
[2] R. Chellappa and A. Jain (eds), Markov Random Fields: Theory and Applications, Academic Press, New York, 1993.
[3] S. Z. Li, Markov Random Field Modeling in Computer Vision, Springer-Verlag, Tokyo, 1995.
[4] A. S. Willsky, Multiresolution Markov models for signal and image processing, Proceedings of the IEEE, vol. 90, no. 8, 2002.
[5] D. M. Chickering, D. Heckerman and C. Meek, Large-sample learning of Bayesian networks is NP-hard, Journal of Machine Learning Research, vol. 5, 2004.
[6] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, 1988.
[7] Y. Weiss and W. T. Freeman, Correctness of belief propagation in Gaussian graphical models of arbitrary topology, Neural Computation, vol. 13, no. 10, 2001.
[8] F. R. Kschischang, B. J. Frey and H.-A. Loeliger, Factor graphs and the sum-product algorithm, IEEE Transactions on Information Theory, vol. 47, no. 2, February 2001.
[9] Y. Kabashima and D. Saad, Belief propagation vs. TAP for decoding corrupted messages, Europhysics Letters, vol. 44, no. 5, 1998.
[10] M. Opper and D. Saad (eds), Advanced Mean Field Methods: Theory and Practice, MIT Press, 2001.
[11] J. S. Yedidia, W. T. Freeman and Y. Weiss, Constructing free-energy approximations and generalized belief propagation algorithms, IEEE Transactions on Information Theory, vol. 51, no. 7, 2005.
[12] S. Ikeda, T. Tanaka and S. Amari, Stochastic reasoning, free energy, and information geometry, Neural Computation, vol. 16, no. 9, 2004.
[13] S. Ikeda, T. Tanaka and S. Amari, Information geometry of turbo and low-density parity-check codes, IEEE Transactions on Information Theory, vol. 50, no. 6, 2004.
[14] W. T. Freeman, E. C. Pasztor and O. T. Carmichael, Learning low-level vision, International Journal of Computer Vision, vol. 40, no. 1, pp. 25-47, 2000.
[15] W. T. Freeman, T. R. Jones and E. C. Pasztor, Example-based super-resolution, IEEE Computer Graphics and Applications, vol. 22, no. 2, pp. 56-65, 2002.
[16] J. Sun, N.-N. Zheng and H.-Y. Shum, Stereo matching using belief propagation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 7, 2003.
[17] B. J. Frey and N. Jojic, A comparison of algorithms for inference and learning in probabilistic graphical models, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 9, 2005.
[18] K. Tanaka, Statistical-mechanical approach to image processing (Topical Review), Journal of Physics A: Mathematical and General, vol. 35, no. 37, pp. R81-R150, 2002.
[19] K. Tanaka, H. Shouno, M. Okada and D. M. Titterington, Accuracy of the Bethe approximation for hyperparameter estimation in probabilistic image processing, Journal of Physics A: Mathematical and General, vol. 37, no. 36, 2004.
[20] T. Heskes, On the uniqueness of loopy belief propagation fixed points, Neural Computation, vol. 16, 2004.
[21] D. J. MacKay, Bayesian interpolation, Neural Computation, vol. 4, 1992.
[22] K. Tanaka and J. Inoue, Maximum likelihood hyperparameter estimation for solvable Markov random field model in image restoration, IEICE Transactions on Information and Systems, vol. E85-D, no. 3, March 2002.
[23] K. Tanaka, Statistical-mechanical iterative algorithm by means of cluster variation method in compound Gauss-Markov random field model, Transactions of the Japanese Society for Artificial Intelligence, vol. 16, no. 2, 2001.
[24] K. Tanaka, J. Inoue and D. M. Titterington, Probabilistic image processing by means of the Bethe approximation for the Q-Ising model, Journal of Physics A: Mathematical and General, vol. 36, no. 43, 2003.
[25] H. Nishimori, Statistical Physics of Spin Glasses and Information Processing: An Introduction, Oxford University Press, Oxford, 2001.
[26] J. Inoue and K. Tanaka, Dynamics of the maximum likelihood hyper-parameter estimation in image restoration: gradient descent versus expectation and maximization algorithm, Physical Review E, vol. 65, no. 1, pp. 1-11, 2002.
[27] K. Tanaka, Generalized belief propagation formula in probabilistic information processing based on the Gaussian graphical model, IEICE Transactions (D-II), vol. J88-D-II, 2005, in press.
