This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the author's institution and sharing with colleagues. Other uses, including reproduction and distribution, or selling or licensing copies, or posting to personal, institutional or third party websites are prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit:

Journal of Multivariate Analysis

Extreme inaccuracies in Gaussian Bayesian networks

Miguel A. Gómez-Villegas a,*, Paloma Maín a, Rosario Susi b

a Departamento de Estadística e Investigación Operativa, Universidad Complutense de Madrid, 28040 Madrid, Spain
b Departamento de Estadística e Investigación Operativa III, Universidad Complutense de Madrid, 28040 Madrid, Spain

Received 9 March 2007. Available online 19 February 2008.

Abstract

To evaluate the impact of model inaccuracies over the network's output, after the evidence propagation, in a Gaussian Bayesian network, a sensitivity measure is introduced. This sensitivity measure is the Kullback-Leibler divergence and yields different expressions depending on the type of parameter to be perturbed, i.e. on the inaccurate parameter. In this work, the behavior of this sensitivity measure is studied when model inaccuracies are extreme, i.e. when extreme perturbations of the parameters can exist. Moreover, the sensitivity measure is evaluated for extreme situations of dependence between the main variables of the network, and its behavior with extreme inaccuracies. This analysis is performed to find the effect of extreme uncertainty about the initial parameters of the model in a Gaussian Bayesian network and about extreme values of evidence. These ideas and procedures are illustrated with an example. © 2008 Elsevier Inc. All rights reserved.

AMS 2000 subject classifications: 62F15; 62F35

Keywords: Gaussian Bayesian network; Sensitivity analysis; Kullback-Leibler divergence

Introduction

A Bayesian network is a probabilistic graphical model where a directed acyclic graph (DAG) represents a set of variables with its dependence structure. The nodes are random variables and the edges give dependencies between the variables. A set of conditional distributions of each variable, given its parents, completes the joint description of the variables.

* Corresponding author. E-mail address: ma.gv@mat.ucm.es (M.A. Gómez-Villegas).

Bayesian networks have been studied by some authors, like Pearl [1], Lauritzen [2], Heckerman [3] and Jensen [4], among others. Depending on the type of variables in the problem, discrete, Gaussian and mixed Bayesian networks can be described. When a Bayesian network is considered, it is necessary to assign different values to the probabilities of the network. In this step there can be some uncertainty, and then some inaccurate parameters can exist. Therefore, it is necessary to develop a sensitivity analysis to study how sensitive the network's output is to initial inaccurate parameters.

In the last few years, some methods of sensitivity analysis for Bayesian networks have been developed. For example, for discrete Bayesian networks, Laskey [5] presents a sensitivity analysis based on computing the partial derivative of a posterior marginal probability with respect to a given parameter, Coupé and van der Gaag [6] developed an efficient sensitivity analysis based on inference algorithms, and Chan and Darwiche [7] introduced a sensitivity analysis based on a distance measure. For Gaussian Bayesian networks, a sensitivity analysis based on symbolic propagation was developed by Castillo and Kjærulff [8], and, on the basis of the Kullback-Leibler divergence, Gómez-Villegas, Maín and Susi [9] proposed a sensitivity measure for performing the sensitivity analysis.

In this paper, we work with the sensitivity measure presented by Gómez-Villegas, Maín and Susi [9] to study its behavior for extreme inaccuracies or perturbations of the parameters that describe the Gaussian Bayesian network. Moreover, we describe the sensitivity measure and its behavior for extreme inaccuracies when the dependence between the variable of interest $X_i$ and the evidential variable $X_e$ is extreme, that is, when the squared correlation coefficient between $X_i$ and $X_e$ is given by $\rho^2 = 0$ or $\rho^2 = 1$, considering different types of relative positioning of these variables in the DAG.

This paper is organized as follows.
In Section 1, definitions of Bayesian networks and Gaussian Bayesian networks are introduced and the process of evidence propagation in Gaussian Bayesian networks is reviewed. Also, we present the working example. In Section 2, the sensitivity measure is defined and, depending on the parameter to be perturbed, different expressions of the sensitivity measure are obtained, making it possible to detect the parameter that perturbs the network's output the most. In Section 3, we study the behavior of the sensitivity measure for extreme perturbations of the initial parameters. In Section 4, we evaluate the sensitivity measure considering extreme situations of dependence between $X_i$ and $X_e$, given by the linear correlation coefficient $\rho$, and evaluate those cases with extreme perturbations of the parameters. In Section 5, the sensitivity analysis with the working example is performed for some extreme parameter perturbations, considering different positions of the variable of interest $X_i$ and the evidential variable $X_e$. Finally, in Section 6 some conclusions are presented.

1. Gaussian Bayesian networks and evidence propagation

A Bayesian network and a Gaussian Bayesian network are defined in this section, and the process of evidence propagation, one of the most important results in Bayesian networks, is presented. Moreover, these concepts are illustrated with an example.

Definition 1. A Bayesian network is a pair $(G, P)$ where $G$ is a DAG, nodes being random variables $X = \{X_1, \ldots, X_n\}$ and edges probabilistic dependencies between variables, and $P = \{p(x_1 \mid \mathrm{pa}(x_1)), \ldots, p(x_n \mid \mathrm{pa}(x_n))\}$ is a set of conditional probability densities, one for each variable, with $\mathrm{pa}(x_i)$ the set of parents of node $X_i$ in $G$.

The joint probability density of $X$ is

$$p(x) = \prod_{i=1}^{n} p(x_i \mid \mathrm{pa}(x_i)). \qquad (1)$$

Definition 2. A Gaussian Bayesian network is a Bayesian network over $X = \{X_1, \ldots, X_n\}$ where the joint probability distribution is a multivariate normal distribution $N(\mu, \Sigma)$, with density given by

$$f(x) = (2\pi)^{-n/2}\,|\Sigma|^{-1/2} \exp\left\{-\tfrac{1}{2}(x-\mu)'\Sigma^{-1}(x-\mu)\right\}$$

where $\mu$ is the $n$-dimensional mean vector, $\Sigma$ the positive definite covariance matrix of dimension $n \times n$ and $|\Sigma|$ the determinant of $\Sigma$.

In a Gaussian Bayesian network, the conditional density associated with $X_i$, for $i = 1, \ldots, n$, in Eq. (1) is the univariate normal distribution

$$f(x_i \mid \mathrm{pa}(x_i)) \sim N\left(\mu_i + \sum_{j=1}^{i-1}\beta_{ij}(x_j - \mu_j),\; \nu_i\right)$$

where $\beta_{ij}$ is the regression coefficient of $X_j$ in the regression of $X_i$ on the parents of $X_i$, with $\mathrm{pa}(x_i) \subseteq \{X_1, \ldots, X_{i-1}\}$, and $\nu_i = \Sigma_{ii} - \Sigma_{i\,\mathrm{Pa}(x_i)}\Sigma_{\mathrm{Pa}(x_i)}^{-1}\Sigma_{\mathrm{Pa}(x_i)\,i}$ is the conditional variance of $X_i$ given its parents in the DAG.

It is usual to work with a variable of interest $X_i$, so the network's output is the information about this variable of interest after the evidence propagation, i.e., the posterior marginal density of $X_i$. The evidence propagation is one of the main results associated with Bayesian networks and is the process of updating the probability distribution of the variables of the network by introducing the available information about the state of one or more variables, known as evidence variables. Different algorithms have been proposed for propagating the evidence in Bayesian networks. To perform the evidence propagation in a Gaussian Bayesian network, an incremental method is presented, updating one evidential variable at a time [10]. This method is based on computing the conditional probability density of a multivariate normal distribution given the evidential variable $X_e$. Then, considering the partition $X = (Y, E)$, with $Y$ the set of non-evidential variables, $X_i \in Y$ being the variable of interest and $E$ the evidential variable, the conditional probability distribution of $Y$, given the evidence $E = \{X_e = e\}$, is a multivariate normal distribution with parameters

$$\mu_{Y \mid E=e} = \mu_Y + \Sigma_{YE}\Sigma_{EE}^{-1}(e - \mu_E)$$
$$\Sigma_{Y \mid E=e} = \Sigma_{YY} - \Sigma_{YE}\Sigma_{EE}^{-1}\Sigma_{EY}.$$
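The propagation formulas above can be sketched in a few lines of code. This is an illustrative implementation only; the variable names and the toy mean vector and covariance matrix below are made up and are not the paper's example.

```python
import numpy as np

def propagate_evidence(mu, Sigma, e_idx, e_value):
    """Condition N(mu, Sigma) on one evidential variable X_{e_idx} = e_value,
    returning the mean vector and covariance matrix of the remaining variables."""
    n = len(mu)
    y_idx = [i for i in range(n) if i != e_idx]
    mu_Y, mu_E = mu[y_idx], mu[e_idx]
    Sigma_YY = Sigma[np.ix_(y_idx, y_idx)]
    Sigma_YE = Sigma[y_idx, e_idx]          # covariances with the evidence
    Sigma_EE = Sigma[e_idx, e_idx]          # scalar: evidential variance
    # mu_{Y|E=e} = mu_Y + Sigma_YE Sigma_EE^{-1} (e - mu_E)
    mu_cond = mu_Y + Sigma_YE * (e_value - mu_E) / Sigma_EE
    # Sigma_{Y|E=e} = Sigma_YY - Sigma_YE Sigma_EE^{-1} Sigma_EY
    Sigma_cond = Sigma_YY - np.outer(Sigma_YE, Sigma_YE) / Sigma_EE
    return mu_cond, Sigma_cond

# Illustrative (hypothetical) three-variable network:
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.5]])
mu_c, Sigma_c = propagate_evidence(mu, Sigma, e_idx=1, e_value=4.0)
```

With a single evidential variable, $\Sigma_{EE}$ is a scalar, so the matrix inverse reduces to a division, as in the univariate expressions that follow.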
Working with a variable of interest $X_i \in Y$, after the evidence propagation we obtain

$$X_i \mid E=e \sim N(\mu_{Y \mid E=e},\, \sigma_{Y \mid E=e}) \equiv N\left(\mu_i + \frac{\sigma_{ie}}{\sigma_{ee}}(e - \mu_e),\; \sigma_{ii} - \frac{\sigma_{ie}^2}{\sigma_{ee}}\right)$$

with, as parameters before the evidence propagation, $\mu_i$ and $\mu_e$ the means of $X_i$ and $X_e$, $\sigma_{ii}$ and $\sigma_{ee}$ the variances of $X_i$ and $X_e$, and $\sigma_{ie}$ the covariance of $X_i$ and $X_e$.

To illustrate these concepts of Gaussian Bayesian networks and evidence propagation, the following example is introduced.

Example 3. The interest of the problem is in how a machine works. This machine is made up of five elements, the variables of the problem, connected as in the DAG presented in Fig. 1. The element of interest in the machine is the last one, $X_5$.

Fig. 1. DAG of the Gaussian Bayesian network.

Let the time for which each element is working be normally distributed; then $X = \{X_1, X_2, X_3, X_4, X_5\}$ has a multivariate normal distribution $X \sim N(\mu, \Sigma)$, with mean vector $\mu$ and covariance matrix $\Sigma$ given by the conditional parameters described by the DAG. To obtain $\Sigma$, the algorithm presented by Shachter and Kenley [11] is used. Considering the evidence $E = \{X_2 = 4\}$, after evidence propagation the distribution of the variable of interest is $X_5 \mid X_2 = 4 \sim N(6, 4)$, and the joint distribution of the non-evidential variables is a multivariate normal with parameters $\mu_{Y \mid X_2=4}$ and $\Sigma_{Y \mid X_2=4}$.

2. Sensitivity analysis in Gaussian Bayesian networks

The sensitivity analysis proposed by Gómez-Villegas, Maín and Susi [9] consists in comparing the network output of two different models: the original model $N(\mu, \Sigma)$ and the perturbed model obtained after adding a perturbation $\delta \in \mathbb{R}$ to one inaccurate parameter of the model. With an evidential variable $X_e$ whose value is known, $E = \{X_e = e\}$, the evidence propagation is performed in the original model and in the perturbed model, obtaining the network's output as the marginal densities of interest $f(x_i \mid e)$, for the original model, and $f^{\delta}(x_i \mid e)$, for the perturbed model. Finally, to evaluate the effect of adding a perturbation, the sensitivity measure is used. This measure is the Kullback-Leibler divergence [12], used to compare these conditional densities of the variable of interest after the evidence propagation. The Kullback-Leibler divergence for density functions $f$ and $f'$ over the same domain is defined as

$$KL(f, f') = \int_{-\infty}^{+\infty} f(w)\,\ln\frac{f(w)}{f'(w)}\,\mathrm{d}w.$$
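Since every posterior of interest in this paper is univariate normal, the divergence above always reduces to the well-known closed form for two normals. A minimal sketch (the function name and the check values are illustrative, not from the paper):

```python
import math

def kl_normal(mu1, v1, mu2, v2):
    """Kullback-Leibler divergence KL(N(mu1, v1) || N(mu2, v2)),
    with v1, v2 variances (not standard deviations)."""
    return 0.5 * (math.log(v2 / v1) + v1 / v2 - 1.0 + (mu1 - mu2) ** 2 / v2)

# Identical distributions have zero divergence; a pure mean shift delta
# with a common variance v gives delta**2 / (2 * v).
assert kl_normal(0.0, 1.0, 0.0, 1.0) == 0.0
```

The three terms (log-variance ratio, variance ratio, squared mean shift) are exactly the pieces that appear in the propositions below.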

This measure has been used from the beginning of statistical inference by Jeffreys, Fisher and Lindley. It takes into account the whole behavior of the distributions to be compared. Furthermore, it is very useful when there is no idea about which properties of the variable of interest will be used. The sensitivity measure is defined as follows.

Definition 4. Let $(G, P)$ be a Gaussian Bayesian network $N(\mu, \Sigma)$. Let $f(x_i \mid e)$ be the marginal density of interest after evidence propagation and $f^{\delta}(x_i \mid e)$ the same density when the perturbation $\delta$ is added to one parameter of the initial model. Then, the sensitivity measure is defined by

$$S_{p_j}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \int_{-\infty}^{+\infty} f(x_i \mid e)\,\ln\frac{f(x_i \mid e)}{f^{\delta}(x_i \mid e)}\,\mathrm{d}x_i \qquad (3)$$

where the subscript $p_j$ is the inaccurate parameter and $\delta$ the proposed perturbation, the new value of the parameter being $p_j^{\delta} = p_j + \delta$.

For small values of the sensitivity measure it is possible to conclude that the Gaussian Bayesian network is robust against the kind of perturbation proposed. Considering the parameters of the mean vector and the parameters of the covariance matrix separately, and having $\rho^2 \in (0, 1)$, the following results are obtained. These results are generally true for conditional distributions in the case of a joint multivariate normal distribution; however, their full meaning is reached for Gaussian Bayesian networks. In fact, the conditional distribution, given some evidence, is the output in this kind of representation. Since the main point is to carry out a sensitivity analysis, the effect of perturbations on the conditional distributions has to be studied.

2.1. Mean vector inaccuracy

Depending on the element of $\mu$ to be perturbed, the perturbation can affect the mean of the variable of interest $X_i \in Y$, the mean of the evidential variable $X_e \in E$, or the mean of any other variable $X_j \in Y$ with $j \neq i$.

Proposition 5.
Considering the perturbation $\delta \in \mathbb{R}$ added to any element of the mean vector $\mu$, and having $\rho^2 \in (0, 1)$, the sensitivity measure (3) is as follows:

(i) When the perturbation is added to the mean of $X_i$, then $\mu_i^{\delta} = \mu_i + \delta$; the density of the variable of interest after the evidence propagation is $X_i \mid E=e, \delta \sim N(\mu_{Y \mid E=e} + \delta,\, \sigma_{Y \mid E=e})$, and

$$S_{\mu_i}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \frac{\delta^2}{2\,\sigma_{Y \mid E=e}}.$$

(ii) If the perturbation is added to the mean of the evidential variable, $\mu_e^{\delta} = \mu_e + \delta$; the posterior density of the variable of interest after the evidence propagation is $X_i \mid E=e, \delta \sim N\left(\mu_{Y \mid E=e} - \frac{\sigma_{ie}}{\sigma_{ee}}\delta,\; \sigma_{Y \mid E=e}\right)$, and

$$S_{\mu_e}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \frac{\left(\frac{\sigma_{ie}}{\sigma_{ee}}\delta\right)^2}{2\,\sigma_{Y \mid E=e}}.$$
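Both cases of Proposition 5 stated so far are pure mean shifts of a normal posterior with unchanged variance, so they can be checked against the generic normal-normal Kullback-Leibler divergence. A sketch with made-up parameter values (none of these numbers come from the paper's example):

```python
import math

def kl_normal(mu1, v1, mu2, v2):
    """KL(N(mu1, v1) || N(mu2, v2)) with v1, v2 variances."""
    return 0.5 * (math.log(v2 / v1) + v1 / v2 - 1.0 + (mu1 - mu2) ** 2 / v2)

v = 4.0                        # sigma_{Y|E=e}: posterior (conditional) variance
delta = 0.8                    # perturbation of the mean

# Case (i): perturbing mu_i shifts the posterior mean by +delta.
s_mu_i = delta ** 2 / (2 * v)
assert abs(s_mu_i - kl_normal(5.0, v, 5.0 + delta, v)) < 1e-12

# Case (ii): perturbing mu_e shifts the posterior mean by -(sigma_ie/sigma_ee)*delta.
s_ie, s_ee = 1.2, 3.0          # illustrative covariance and evidential variance
s_mu_e = (s_ie / s_ee * delta) ** 2 / (2 * v)
assert abs(s_mu_e - kl_normal(5.0, v, 5.0 - s_ie / s_ee * delta, v)) < 1e-12
```

The evidential-mean case is simply the direct case scaled by the regression coefficient $\sigma_{ie}/\sigma_{ee}$, which is why weakly correlated evidence perturbs the output less.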

(iii) The perturbation $\delta$ added to the mean of any other non-evidential variable, different from the variable of interest, has no influence over $X_i$; then $f^{\delta}(x_i \mid e) = f(x_i \mid e)$ and the sensitivity measure is zero.

When the evidence about $X_e$ is inaccurate, with $e^{\delta} = e + \delta$, the sensitivity measure obtained is similar to $S_{\mu_e}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big)$. Therefore, this case is covered by the study of an inaccurate mean of the evidential variable.

2.2. Covariance matrix inaccuracy

When the covariance matrix is perturbed, the structure of the network can change. These changes appear in the precision matrix of the perturbed network, i.e., in the inverse covariance matrix considering the perturbation $\delta$. To guarantee the normality of the network, $\Sigma^{\delta}$ and $\Sigma_{Y \mid E=e, \delta}$ must be positive definite matrices; this restriction, imposed in the following proposition, yields different constraints on the perturbation $\delta$.

Proposition 6. Adding the perturbation $\delta \in \mathbb{R}$ to the covariance matrix $\Sigma$, with $\rho^2 \in (0, 1)$, the sensitivity measure (3) is as follows:

(i) If the perturbation is added to the variance of the variable of interest, then $\sigma_{ii}^{\delta} = \sigma_{ii} + \delta$ with $\delta > -\sigma_{ii} + \frac{\sigma_{ie}^2}{\sigma_{ee}}$, and after the evidence propagation $X_i \mid E=e, \delta \sim N(\mu_{Y \mid E=e},\, \sigma_{Y \mid E=e, \delta})$, where $\sigma_{Y \mid E=e, \delta} = \sigma_{Y \mid E=e} + \delta$ and

$$S_{\sigma_{ii}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \frac{1}{2}\left[\ln\left(1 + \frac{\delta}{\sigma_{Y \mid E=e}}\right) - \frac{\delta}{\sigma_{Y \mid E=e, \delta}}\right].$$

(ii) When the perturbation is added to the variance of the evidential variable, with $\sigma_{ee}^{\delta} = \sigma_{ee} + \delta$ and $\delta > -\sigma_{ee}\left(1 - \max_{X_j \in Y}\rho_{je}^2\right)$, with $\rho_{je}$ the corresponding correlation coefficient, the posterior density of interest is $X_i \mid E=e, \delta \sim N(\mu_{Y \mid E=e, \delta},\, \sigma_{Y \mid E=e, \delta})$, with $\mu_{Y \mid E=e, \delta} = \mu_i + \frac{\sigma_{ie}}{\sigma_{ee}+\delta}(e - \mu_e)$ and $\sigma_{Y \mid E=e, \delta} = \sigma_{ii} - \frac{\sigma_{ie}^2}{\sigma_{ee}+\delta}$, and the sensitivity measure is given by

$$S_{\sigma_{ee}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \frac{1}{2}\left[\ln\frac{\sigma_{Y \mid E=e, \delta}}{\sigma_{Y \mid E=e}} + \frac{\sigma_{Y \mid E=e}}{\sigma_{Y \mid E=e, \delta}} - 1 + \frac{1}{\sigma_{Y \mid E=e, \delta}}\left(\frac{\sigma_{ie}\,\delta\,(e-\mu_e)}{\sigma_{ee}(\sigma_{ee}+\delta)}\right)^2\right].$$

(iii) The perturbation $\delta$ added to the variance of any other non-evidential variable $X_j \in Y$ with $j \neq i$, with $\sigma_{jj}^{\delta} = \sigma_{jj} + \delta$, has no influence over $X_i$; then $f^{\delta}(x_i \mid e) = f(x_i \mid e)$ and the sensitivity measure is zero.
(iv) When the covariance of the variable of interest $X_i$ and the evidential variable $X_e$ is perturbed, $\sigma_{ie}^{\delta} = \sigma_{ie} + \delta = \sigma_{ei}^{\delta}$ and, considering the restriction on $\delta$ given by $-\sigma_{ie} - \sqrt{\sigma_{ii}\sigma_{ee}} < \delta < -\sigma_{ie} + \sqrt{\sigma_{ii}\sigma_{ee}}$, then $X_i \mid E=e, \delta \sim N(\mu_{Y \mid E=e, \delta},\, \sigma_{Y \mid E=e, \delta})$, with $\mu_{Y \mid E=e, \delta} = \mu_i + \frac{\sigma_{ie}+\delta}{\sigma_{ee}}(e - \mu_e)$ and $\sigma_{Y \mid E=e, \delta} = \sigma_{ii} - \frac{(\sigma_{ie}+\delta)^2}{\sigma_{ee}}$. The sensitivity measure

obtained is

$$S_{\sigma_{ie}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \frac{1}{2}\left[\ln\left(1 - \frac{\delta^2 + 2\sigma_{ie}\delta}{\sigma_{ee}\,\sigma_{Y \mid E=e}}\right) + \frac{\sigma_{Y \mid E=e} + \frac{\delta^2(e-\mu_e)^2}{\sigma_{ee}^2}}{\sigma_{Y \mid E=e, \delta}} - 1\right].$$

(v) The perturbation $\delta$ added to any other covariance, i.e., of $X_i$ and any other non-evidential variable $X_j$, or of the evidential variable $X_e$ and $X_j \in Y$ with $j \neq i$, has no influence over the variable of interest; then $f^{\delta}(x_i \mid e) = f(x_i \mid e)$ and the sensitivity measure is zero.

The proof, and more details about the proposed sensitivity analysis with an example, can be seen in Gómez-Villegas, Maín and Susi [9].

3. Extreme behavior of the sensitivity measure

To know the effect of extreme uncertainty about the initial parameters of the network, we study the behavior of the sensitivity measure for extreme perturbations through the limit of the sensitivity measure. In this case, the squared correlation coefficient of $X_i$ and $X_e$ considered is $\rho^2 \in (0, 1)$.

Proposition 7. When the perturbation added to the mean vector is extreme and $\rho^2 \in (0, 1)$, the sensitivity measure is extreme too, and:

(i) (a) $\lim_{\delta \to \pm\infty} S_{\mu_i}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = +\infty$; (b) $\lim_{\delta \to 0} S_{\mu_i}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = 0$.

(ii) (a) $\lim_{\delta \to \pm\infty} S_{\mu_e}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = +\infty$; (b) $\lim_{\delta \to 0} S_{\mu_e}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = 0$.

Proof. It follows directly. Note that the limit for an extreme value of the evidence $e$ corresponds to case (ii).

Proposition 8. When the extreme perturbation is added to the elements of the covariance matrix and $\rho^2 \in (0, 1)$, the sensitivity measure behaves as follows:

(i) (a) $\lim_{\delta \to +\infty} S_{\sigma_{ii}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = +\infty$, but $S_{\sigma_{ii}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = o(\delta)$; (b) $\lim_{\delta \to M} S_{\sigma_{ii}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = +\infty$ with $M = -\sigma_{ii} + \frac{\sigma_{ie}^2}{\sigma_{ee}} = -\sigma_{ii}(1 - \rho^2)$; (c) $\lim_{\delta \to 0} S_{\sigma_{ii}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = 0$.

(ii) (a) $\lim_{\delta \to +\infty} S_{\sigma_{ee}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \frac{1}{2}\left[-\ln(1-\rho^2) - \rho^2\left(1 - \frac{(e-\mu_e)^2}{\sigma_{ee}}\right)\right]$;
(b) $\lim_{\delta \to M_{ee}} S_{\sigma_{ee}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \frac{1}{2}\left[\ln\frac{M_{ee}^* - \rho^2}{M_{ee}^*(1-\rho^2)} + \frac{\rho^2(1-M_{ee}^*)}{M_{ee}^* - \rho^2}\left(1 + \frac{(e-\mu_e)^2(1-M_{ee}^*)}{\sigma_{ee}\,M_{ee}^*}\right)\right]$,
where $M_{ee} = -\sigma_{ee}(1 - M_{ee}^*)$ and $M_{ee}^* = \max_{X_j \in Y}\rho_{je}^2$; if $M_{ee}^* = \rho^2$, then $\lim_{\delta \to M_{ee}} S_{\sigma_{ee}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = +\infty$;
(c) $\lim_{\delta \to 0} S_{\sigma_{ee}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = 0$.

(iii) (a) $\lim_{\delta \to M_1} S_{\sigma_{ie}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = +\infty$ with $M_1 = -\sigma_{ie} - \sqrt{\sigma_{ii}\sigma_{ee}}$;

(b) $\lim_{\delta \to M_2} S_{\sigma_{ie}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = +\infty$ with $M_2 = -\sigma_{ie} + \sqrt{\sigma_{ii}\sigma_{ee}}$; (c) $\lim_{\delta \to 0} S_{\sigma_{ie}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = 0$.

Proof. (i) (a) and (c) follow directly.

(i)(b) When $\sigma_{ii}^{\delta} = \sigma_{ii} + \delta$, the new variance of $X_i$ is $\sigma_{Y \mid E=e, \delta} = \sigma_{Y \mid E=e} + \delta$. Considering $\sigma_{Y \mid E=e, \delta} > 0$, then $\delta > -\sigma_{Y \mid E=e}$. Defining $M = -\sigma_{Y \mid E=e}$ and setting $x = \sigma_{Y \mid E=e} + \delta$, we have

$$\lim_{\delta \to M} S_{\sigma_{ii}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \lim_{x \to 0^+} \frac{1}{2}\left[\ln x - \ln \sigma_{Y \mid E=e} - \frac{x - \sigma_{Y \mid E=e}}{x}\right] = +\infty.$$

(ii)(a) Since $\sigma_{Y \mid E=e, \delta} \to \sigma_{ii}$ as $\delta \to +\infty$,

$$\lim_{\delta \to +\infty} S_{\sigma_{ee}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \frac{1}{2}\left[\ln\frac{\sigma_{ii}}{\sigma_{Y \mid E=e}} + \frac{\sigma_{Y \mid E=e}}{\sigma_{ii}} - 1 + \frac{\sigma_{ie}^2(e-\mu_e)^2}{\sigma_{ee}^2\,\sigma_{ii}}\right],$$

and, with $\sigma_{Y \mid E=e} = \sigma_{ii}(1-\rho^2)$ and $\rho^2 = \frac{\sigma_{ie}^2}{\sigma_{ii}\sigma_{ee}}$, it follows that

$$\lim_{\delta \to +\infty} S_{\sigma_{ee}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \frac{1}{2}\left[-\ln(1-\rho^2) - \rho^2\left(1 - \frac{(e-\mu_e)^2}{\sigma_{ee}}\right)\right].$$

(ii)(b) When $\sigma_{ee}^{\delta} = \sigma_{ee} + \delta$, the new conditional variance of any non-evidential variable $X_j$ is $\sigma_{jj, Y \mid E=e, \delta} = \sigma_{jj} - \frac{\sigma_{je}^2}{\sigma_{ee}+\delta}$. If it is imposed that $\sigma_{jj, Y \mid E=e, \delta} > 0$ for all $X_j \in Y$, then $\delta$ must satisfy $\delta > -\sigma_{ee}\left(1 - \max_{X_j \in Y}\rho_{je}^2\right)$. Defining $M_{ee}^* = \max_{X_j \in Y}\rho_{je}^2$ and $M_{ee} = -\sigma_{ee}(1 - M_{ee}^*)$, so that $\sigma_{ee} + \delta \to \sigma_{ee}M_{ee}^*$ as $\delta \to M_{ee}$, substitution in $S_{\sigma_{ee}}$ gives

$$\lim_{\delta \to M_{ee}} S_{\sigma_{ee}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \frac{1}{2}\left[\ln\frac{M_{ee}^* - \rho^2}{M_{ee}^*(1-\rho^2)} + \frac{\rho^2(1-M_{ee}^*)}{M_{ee}^* - \rho^2}\left(1 + \frac{(e-\mu_e)^2(1-M_{ee}^*)}{\sigma_{ee}\,M_{ee}^*}\right)\right].$$

If $M_{ee}^* = \rho^2 \neq 0$, then $\sigma_{Y \mid E=e, \delta} = \sigma_{ii} - \frac{\sigma_{ie}^2}{\sigma_{ee}M_{ee}^*} \to 0$, and it follows that

$$\lim_{\delta \to M_{ee}} S_{\sigma_{ee}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = +\infty$$

because, for $K, M > 0$,

$$\lim_{x \to 0^+}\left[\ln\frac{x}{M} + \frac{K}{x}\right] = \lim_{x \to 0^+}\frac{1}{x}\left[x\ln\frac{x}{M} + K\right] = +\infty,$$

since $x\ln(x/M) \to 0$ as $x \to 0^+$.

(ii)(c) It follows directly for the case $M_{ee}^* = 1$.

(iii)(a) For $\sigma_{ie}^{\delta} = \sigma_{ie} + \delta$, the new conditional variance is $\sigma_{Y \mid E=e, \delta} = \sigma_{ii} - \frac{(\sigma_{ie}+\delta)^2}{\sigma_{ee}}$; then, if $\sigma_{Y \mid E=e, \delta} > 0$ is imposed, $\delta$ must satisfy $-\sigma_{ie} - \sqrt{\sigma_{ii}\sigma_{ee}} < \delta < -\sigma_{ie} + \sqrt{\sigma_{ii}\sigma_{ee}}$. First, defining $M_1 = -\sigma_{ie} - \sqrt{\sigma_{ii}\sigma_{ee}}$, it is possible to calculate $\lim_{\delta \to M_1} S_{\sigma_{ie}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big)$. But $\delta \to M_1$ is equivalent to $\frac{\delta^2 + 2\sigma_{ie}\delta}{\sigma_{ee}} \to \sigma_{Y \mid E=e}$, i.e., to $\sigma_{Y \mid E=e, \delta} \to 0$, and given that

$$S_{\sigma_{ie}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = \frac{1}{2}\left[\ln\frac{\sigma_{Y \mid E=e, \delta}}{\sigma_{Y \mid E=e}} + \frac{\sigma_{Y \mid E=e} + \frac{\delta^2(e-\mu_e)^2}{\sigma_{ee}^2}}{\sigma_{Y \mid E=e, \delta}} - 1\right],$$

using $\lim_{x \to 0^+}\left[\ln\frac{x}{M} + \frac{k}{x}\right] = +\infty$ for every $k > 0$, then $\lim_{\delta \to M_1} S_{\sigma_{ie}}\big(f(x_i \mid e), f^{\delta}(x_i \mid e)\big) = +\infty$.

(iii)(b) Analogously to (a). (iii)(c) It follows directly.

The results obtained are the expected ones, except that the extreme perturbation added to the evidential variance has a finite limit. This is because the state of the evidential variable is known, and so the variance of this variable has a limited effect on the variable of interest. In this case, the posterior density of interest with the perturbation in the model, $f^{\delta}(x_i \mid e)$, is not so different from the posterior density of interest without the perturbation, $f(x_i \mid e)$. Therefore, although an extreme perturbation added to the evidential variance can exist, the sensitivity measure tends to a finite value.

4. Extreme dependence between the variable of interest and the evidential variable

In this section we evaluate some particular cases of the sensitivity measure and its extreme behavior, depending on the relations between the variable of interest $X_i$ and the evidential variable $X_e$, considering extreme values of the squared linear correlation coefficient. For $\rho^2 = 0$, $X_i$ and $X_e$ are independent and the connection between these variables in the DAG must be a converging connection (only this connection allows independence between the variables). On the other hand, with $\rho^2 = 1$, the connection in the DAG between $X_i$ and $X_e$ can be a serial or diverging connection, with a linear dependence relation between $X_i$ and $X_e$.
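The key finding of Section 3, the finite limit for an extreme perturbation of the evidential variance, can be checked numerically. This is a sketch with made-up parameter values (none of them from the paper's example); it compares the sensitivity measure at a very large $\delta$ against the closed-form limit of Proposition 8(ii)(a).

```python
import math

def kl_normal(mu1, v1, mu2, v2):
    """KL(N(mu1, v1) || N(mu2, v2)) with v1, v2 variances."""
    return 0.5 * (math.log(v2 / v1) + v1 / v2 - 1.0 + (mu1 - mu2) ** 2 / v2)

def s_sigma_ee(delta, s_ii, s_ee, s_ie, mu_i, mu_e, e):
    """Sensitivity measure when the evidential variance is perturbed by delta."""
    v0 = s_ii - s_ie ** 2 / s_ee               # sigma_{Y|E=e}
    m0 = mu_i + s_ie / s_ee * (e - mu_e)       # mu_{Y|E=e}
    vd = s_ii - s_ie ** 2 / (s_ee + delta)     # perturbed conditional variance
    md = mu_i + s_ie / (s_ee + delta) * (e - mu_e)
    return kl_normal(m0, v0, md, vd)

# Illustrative (hypothetical) parameters:
s_ii, s_ee, s_ie, mu_i, mu_e, e = 3.0, 2.0, 1.0, 0.0, 1.0, 2.5
rho2 = s_ie ** 2 / (s_ii * s_ee)
# Closed-form limit as delta -> +infinity:
limit = 0.5 * (-math.log(1 - rho2) - rho2 * (1 - (e - mu_e) ** 2 / s_ee))
# A huge perturbation stands in for the limit; the two should agree closely.
assert abs(s_sigma_ee(1e9, s_ii, s_ee, s_ie, mu_i, mu_e, e) - limit) < 1e-6
```

By contrast, the same experiment with a perturbation of $\sigma_{ii}$ or $\sigma_{ie}$ would grow without bound, as stated in parts (i) and (iii) of the proposition.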
If $X_i$ and $X_e$ are independent, the information about $X_e$ does not affect the variable of interest; then, after the evidence propagation, $\mu_{Y \mid E=e} = \mu_i$ and $\sigma_{Y \mid E=e} = \sigma_{ii}$. In this case, only when the perturbation is added to the parameters of $X_i$ is the sensitivity measure not zero. Moreover, when the covariance between $X_i$ and $X_e$ is perturbed, the relation between those variables changes to $\sigma_{ie} = \delta$, and therefore the structure of the network changes. In those cases the sensitivity measure is obtained considering $\sigma_{ie} = 0$ in Propositions 5 and 6. Any other perturbation of the parameters of the network does not disturb the results about $X_i$.
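The independent case reduces to a one-line check: with $\sigma_{ie} = 0$, conditioning on $X_e$ leaves the marginal of $X_i$ unchanged. A sketch with illustrative numbers (not the paper's values):

```python
# rho^2 = 0 case: zero covariance means the evidence cannot move X_i,
# so only perturbations of X_i's own parameters yield a nonzero
# sensitivity measure.  All values below are made up for illustration.
mu_i, s_ii = 2.0, 1.5          # prior mean and variance of X_i
mu_e, s_ee = 4.0, 2.0          # evidential mean and variance
s_ie = 0.0                     # independence: covariance is zero
e = 7.0                        # observed evidence X_e = e

mu_post = mu_i + s_ie / s_ee * (e - mu_e)   # reduces to mu_i
v_post = s_ii - s_ie ** 2 / s_ee            # reduces to s_ii
assert mu_post == mu_i and v_post == s_ii
```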

The other extreme case for the squared correlation coefficient is $\rho^2 = 1$. In this case, any perturbation associated with the initial parameters of $X_i$ or $X_e$ changes significantly the results concerning $X_i$; the sensitivity measure computed in all cases is then infinity. Therefore, for a linear relation between the evidential variable and the variable of interest, and hence a maximum value of $\rho^2$, the sensitivity measure is also extreme.

In summary, when $X_i$ and $X_e$ are independent, only perturbations associated with the parameters of $X_i$ can disturb the network's output, and if the perturbations are extreme, the sensitivity measure is also extreme. However, for $\rho^2 = 1$, any perturbation added to the parameters of $X_i$ or $X_e$ greatly affects the network's output, the sensitivity measure being infinity.

5. Working example

Example 9. Consider the Gaussian Bayesian network in Example 3. Experts disagree with the parameters assigned to the joint probability distribution, and they want to quantify the effect of inaccuracy when some extreme perturbations of the parameters are proposed. Perturbations are proposed for the mean and the variance of the variable of interest $X_5$, with $\mu_5^{\delta} = \mu_5 + \delta_5$ and $\sigma_{55}^{\delta} = \sigma_{55} + \delta_{55}$; for the mean and the variance of the evidential variable $X_2$, with $\mu_2^{\delta} = \mu_2 + \delta_2$ and $\sigma_{22}^{\delta} = \sigma_{22} + \delta_{22}$; and for the covariance, with $\sigma_{25}^{\delta} = \sigma_{25} + \delta_{25}$. The sensitivity measures $S_{\mu_5}$, $S_{\sigma_{55}}$, $S_{\mu_2}$, $S_{\sigma_{22}}$ and $S_{\sigma_{25}}$ are then computed for the posterior $f(x_5 \mid X_2 = 4)$; the value of $S_{\sigma_{22}}$ approaches the limit of the sensitivity measure as $\delta_{22}$ tends to $M_{ee}$. It can be pointed out that, although more parameters could be perturbed, those inaccuracies do not disturb the network's output. To show the behavior of the measure, we present the sensitivity measures as a function of the perturbation $\delta$ in Fig. 2.
In this case, it can be noted that the sensitivity measure for the mean of $X_i$ coincides with the sensitivity measure for the mean of $X_e$, because of the initial parameters that have been chosen. As can be seen in the example, the sensitivity measure grows when the perturbation is large. When the evidential variance is perturbed, it is necessary to compute the limit of the measure to know whether the proposed perturbation is also large.

Example 10. Consider the Gaussian Bayesian network given in Fig. 1, the variable of interest being $X_1$, with the same evidential variable $X_2$. The variables $X_1$ and $X_2$ are in a converging connection in the graph, being independent variables. In this case, if any parameter different from $\mu_1$, $\sigma_{11}$ or $\sigma_{12}$ is perturbed, the inaccuracy does not affect the posterior density of $X_1$; the sensitivity measure is then zero. However, if $\mu_1$, $\sigma_{11}$ or $\sigma_{12}$ are perturbed, the sensitivity measure is different from zero. Therefore, in this case it is important to be very careful in assigning the parameters of the variable of interest $X_1$.

Fig. 2. Sensitivity measures obtained in the example for each perturbation.

6. Conclusions

In this paper the behavior of the sensitivity measure introduced by Gómez-Villegas, Maín and Susi [9] has been studied. This measure is useful for evaluating the impact of parameter inaccuracies in a Gaussian Bayesian network over the density of interest after the evidence propagation, when the inaccuracies are extreme. With this analysis it is possible to show that this is a well-defined measure for developing a sensitivity analysis in a Gaussian Bayesian network, even if the proposed perturbations are extreme.

The results obtained are the expected ones. When the evidential variance is inaccurate, with a large perturbation associated with this value, there exists a finite value as the limit of the sensitivity measure. This is because the evidence about this variable explains the behavior of the variable of interest regardless of its inaccurate variance. Moreover, in all possible cases for the sensitivity measure, if the perturbation added to a parameter is very small, tending to zero, the sensitivity measure also tends to zero.

Furthermore, it is possible to evaluate the sensitivity measure in some particular cases depending on the dependence relation between the variable of interest $X_i$ and the evidential variable $X_e$. In that way, $\rho^2 \in (0, 1)$ and also the extreme values of the squared linear correlation coefficient, $\rho^2 = 0$ and $\rho^2 = 1$, are considered, corresponding to different types of connections in the DAG. If there is no relation between $X_i$ and $X_e$, the connection in the graph is a converging connection; therefore the evidence, and any perturbations added to the parameters of the evidential variable $X_e$, do not affect the information about the variable of interest $X_i$. In this case, the sensitivity measure is different from zero only when the parameters of $X_i$ are inaccurate. For a linear relation between $X_i$ and $X_e$, with $\rho^2 = 1$, having a serial or diverging connection in the DAG, the sensitivity measure is infinity, because any perturbation of $X_i$ or $X_e$ produces an important effect over the network's output.
Therefore, if $X_e$ is so closely related to $X_i$, then any perturbation added to the parameters that describe these variables makes the sensitivity measure extreme.

Acknowledgments

This research was supported by the MEC from Spain, Grant MTM, and the Universidad Complutense-Comunidad de Madrid, Grant UCM.

References

[1] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, Palo Alto, 1988.
[2] S.L. Lauritzen, Graphical Models, Clarendon Press, Oxford, 1996.
[3] D. Heckerman, A tutorial on learning with Bayesian networks, in: M.I. Jordan (Ed.), Learning in Graphical Models, MIT Press, Cambridge, MA.
[4] F.V. Jensen, Bayesian Networks and Decision Graphs, Springer, New York, 2001.
[5] K.B. Laskey, Sensitivity analysis for probability assessments in Bayesian networks, IEEE Transactions on Systems, Man, and Cybernetics.
[6] V.M.H. Coupé, L.C. van der Gaag, Properties of sensitivity analysis of Bayesian belief networks, Annals of Mathematics and Artificial Intelligence.
[7] H. Chan, A. Darwiche, A distance measure for bounding probabilistic belief change, International Journal of Approximate Reasoning.
[8] E. Castillo, U. Kjærulff, Sensitivity analysis in Gaussian Bayesian networks using a symbolic-numerical technique, Reliability Engineering and System Safety.
[9] M.A. Gómez-Villegas, P. Maín, R. Susi, Sensitivity analysis in Gaussian Bayesian networks using a divergence measure, Communications in Statistics - Theory and Methods.
[10] E. Castillo, J.M. Gutiérrez, A.S. Hadi, Expert Systems and Probabilistic Network Models, Springer-Verlag, New York, 1997.
[11] R. Shachter, C. Kenley, Gaussian influence diagrams, Management Science, 1989.
[12] S. Kullback, R.A. Leibler, On information and sufficiency, Annals of Mathematical Statistics, 1951.


Artificial Intelligence Bayesian Networks Artfcal Intellgence Bayesan Networks Adapted from sldes by Tm Fnn and Mare desjardns. Some materal borrowed from Lse Getoor. 1 Outlne Bayesan networks Network structure Condtonal probablty tables Condtonal

More information

Durban Watson for Testing the Lack-of-Fit of Polynomial Regression Models without Replications

Durban Watson for Testing the Lack-of-Fit of Polynomial Regression Models without Replications Durban Watson for Testng the Lack-of-Ft of Polynomal Regresson Models wthout Replcatons Ruba A. Alyaf, Maha A. Omar, Abdullah A. Al-Shha ralyaf@ksu.edu.sa, maomar@ksu.edu.sa, aalshha@ksu.edu.sa Department

More information

Linear Approximation with Regularization and Moving Least Squares

Linear Approximation with Regularization and Moving Least Squares Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...

More information

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur

Module 3 LOSSY IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur Module 3 LOSSY IMAGE COMPRESSION SYSTEMS Verson ECE IIT, Kharagpur Lesson 6 Theory of Quantzaton Verson ECE IIT, Kharagpur Instructonal Objectves At the end of ths lesson, the students should be able to:

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

Here is the rationale: If X and y have a strong positive relationship to one another, then ( x x) will tend to be positive when ( y y)

Here is the rationale: If X and y have a strong positive relationship to one another, then ( x x) will tend to be positive when ( y y) Secton 1.5 Correlaton In the prevous sectons, we looked at regresson and the value r was a measurement of how much of the varaton n y can be attrbuted to the lnear relatonshp between y and x. In ths secton,

More information
