A general framework for estimating similarity of datasets and decision trees: exploring semantic similarity of decision trees

Irene Ntoutsi (Department of Informatics, University of Piraeus, Greece), Alexandros Kalousis (Computer Science Department, University of Geneva, Switzerland), Yannis Theodoridis (Department of Informatics, University of Piraeus, Greece)

Abstract

Decision trees are among the most popular pattern types in data mining due to their intuitive representation. However, little attention has been given to the definition of measures of semantic similarity between decision trees. In this work, we present a general framework for similarity estimation that includes as special cases the estimation of semantic similarity between decision trees, as well as various forms of similarity estimation on classification datasets with respect to different probability distributions defined over the attribute-class space of the datasets. The similarity estimation is based on the partitions induced by the decision trees on the attribute space of the datasets. We use the framework in order to estimate the semantic similarity of decision trees induced from different subsamples of classification datasets, and we evaluate its performance with respect to the empirical semantic similarity, which we estimate on the basis of independent hold-out test sets. The availability of similarity measures on decision trees opens a wide range of possibilities for meta-analysis and meta-mining of data mining results.

1 Introduction

Decision tree (DT) models are one of the most popular learning paradigms in the area of data mining, thanks to a number of attractive properties such as scalability to large datasets and relative ease of interpretation, provided that their size does not exceed certain limits. On the other hand, they are also notorious for their instability: small changes in the training dataset may result in completely different decision trees that contain different tests on the predictive attributes, or even different predictive attributes. These decision trees, though structurally different, may describe the same concept, i.e., they may be semantically similar or even identical; in fact, they should be semantically similar if they have been induced from datasets that come from the same generating distribution. Semantic similarity in the presence of structural differences might arise for a variety of reasons, such as superficially different tests on attributes which are in fact equivalent, different attributes that convey the same information due to attribute redundancy, or simply because the same concept can be described in different ways which are nevertheless semantically equivalent. To capture the degree of semantic similarity between decision trees we need a measure of the semantic similarity of the concepts that they describe.

There is a plethora of reasons for which the definition of similarity measures between decision trees and classification datasets is required. By far the most important is being able to report whether the differences observed in decision trees induced from different training sets (which, however, are thought to come from the same data generating distribution) are only structural and do not correspond to semantic differences, or whether the concepts described by the decision trees are indeed semantically different. In the latter case, a quantification of this semantic difference would be useful. Moreover, the availability of a similarity measure on classification models makes it possible to apply a number of standard mining tasks on classification models rather than on raw data, resulting in what we could call meta-analysis or meta-mining tasks.
For example, the semantic similarity measures can be used to cluster different sites into groups of similar behavior according to the decision tree models learned locally at each of the sites, e.g., clustering the different branches of a bank according to the credit strategy they adopt. Similarity could also be employed in order to study the effect of the decision tree learning parameters, like the pruning level, on the resulting models, or to compare a decision tree model to a golden-standard model. Also, in the case of dynamic data, like data streams, similarity could be employed in order to monitor the evolution of the induced decision tree models or classification datasets across the time axis. A crucial question in this case is whether the concept, which is captured through the induced decision tree models, remains (about) the same or whether there are concept drifts in the population.

In this paper, we propose a general similarity estimation framework that includes as special cases i) the estimation of semantic similarity between decision trees and ii) the estimation of similarity between the different probability distributions that govern different classification datasets, namely the marginal distribution of the attributes, the joint attribute-class probability distribution and the attribute-conditional class distribution. The framework is based on the comparison of the partitionings that decision trees define over a given attribute space, considering the probability distribution of the data space over that partitioning. Similar ideas have been previously used for dataset comparison; however, to the best of our knowledge, this is the first time that the semantic similarity of decision trees, on which we focus in this paper, is explored. Depending on the available information regarding the probability distribution that generated the data, we get different instantiations of the decision tree semantic similarity measure. To evaluate the proposed decision tree semantic similarity measures we compare them with the empirical semantic similarity, which is estimated by applying the decision trees on independent test sets.

The rest of the paper is organized as follows. In Section 2 we present some preliminaries on decision trees. Related work is presented in Section 3. The proposed similarity framework is described in Section 4. The experimental evaluation of the different instantiations of the decision tree similarity measure is reported in Section 5. Finally, Section 6 discusses conclusions and outlook.

2 Preliminaries on decision trees

Consider a classification problem described through a vector of predictive attributes A = (A_1, A_2, ..., A_m) and a class attribute C. Each predictive attribute A_i has a domain d(A_i), and the domain of the class attribute is d(C) = {c_1, c_2, ..., c_k}, where k is the number of classes. To build a decision tree, a set D of training examples is provided as input to the DT induction algorithm. Training examples are drawn from the joint distribution P(A, C) of the predictive attributes and the class attribute. The Cartesian products S_A = d(A_1) × d(A_2) × ... × d(A_m) and S_(A,C) = S_A × d(C) define the attribute and attribute-class spaces, respectively. Training examples thus have the form (x, y) ∈ S_(A,C), where x ∈ S_A and y ∈ d(C). Let U(A) denote the uniform distribution over the attribute space and let P(A) = Σ_C P(A, C) denote the marginal distribution, also defined over the attribute space.

3 Related work

Although decision tree induction is an extensively studied research area, limited work has been done on the problem of decision tree comparison and, more precisely, on the computation of semantic similarity between decision trees; the only notable exception is [5]. More recently, several approaches have been proposed that utilize decision tree comparison as a means for dataset comparison, e.g., [2, 4]. Turney [5] presented a framework for evaluating the stability of a classification algorithm, namely the degree to which it generates repeatable results when trained on different datasets drawn from the joint distribution P(A, C). To quantify stability, Turney uses a semantic similarity measure called agreement. The agreement of two classifiers is defined as the probability that they produce the same prediction over instances drawn from U(A). Note that, according to Turney, agreement is measured over instances drawn from U(A) and not from P(A, C); the underlying reason is that the agreement should be examined over all possible input worlds. Turney estimates the agreement of decision trees empirically, by applying them on artificial test sets of instances drawn from the U(A) distribution.
Recently, several change detection methods have been proposed that utilize decision trees for dataset comparison. The intuition behind these approaches is that the decision tree models capture interesting characteristics of the datasets and thus they can be used for similarity assessment between datasets. All the methods in this category follow the same rationale: they combine the decision trees to induce a finer decision tree structure, and then they compare the distributions of the two datasets over this (common) finer structure. Below, we describe some representative approaches in this category.

Ganti et al. [2] propose the FOCUS framework for measuring the deviation between two datasets D_1, D_2 in terms of the decision trees DT_1, DT_2 induced from them. Each DT defines, via its leaf nodes, a set of non-overlapping regions over the attribute space, whereas by overlaying the regions of the two DTs a finer structure arises. The authors compute the probability of each region in the overlay by querying the original raw datasets. Then, the distance between the two datasets is computed by aggregating, for each region in the overlay, the difference in the region probability estimations between the two datasets.

Recently, Pekerskaya et al. [4] proposed a method for mining changing regions between two datasets. A region is characterized as changing if it appears under different class labels in the two datasets. The authors extend the traditional decision tree structure by

further splitting the leaf nodes through clustering. The resulting model is called a cluster-embedded decision tree and provides a better approximation of the attribute space probability distribution compared to the approximation of a (simple) decision tree. After extracting the cluster-embedded DT structure for each dataset, the overlay of the two structures is computed and its statistics are estimated without re-querying the original raw datasets, as is done in FOCUS [2]. Rather, the authors approximate the measure component of each region in the overlay by employing the statistics of the corresponding cluster-embedded decision trees.

Wang and Pei [7] quantify changes between two datasets with class labels using, as the common structure for the comparison, a set of random histograms. The instances of the two datasets are projected onto this (common) structure and changes in their distributions are detected.

Finally, there is a considerable amount of work on comparing tree structures based on the edit distance, e.g., [8]. These approaches are based on counting the number and the cost of the edit operations (insert, delete, update) that are required in order to convert one tree into the other. However, they work with symbolic trees whose nodes are labeled with symbols from a given alphabet. In the case of decision trees, the nodes are more complex, since they include conditions over the symbols-attributes and, furthermore, each decision tree path is assigned an importance factor based on the number of instances that follow that path.

4 The similarity estimation framework

A decision tree DT induced from a dataset D partitions the attribute space into a set of non-overlapping regions R_DT = {r_i, i = 1 ... |R_DT|}, via its leaf nodes. The partition R_DT can be considered as an approximation of the joint attribute-class probability distribution in the form of a histogram (Section 4.1). Each bin of the histogram corresponds to a region of the partition and, respectively, to a leaf node of the decision tree. A bin-region is defined by the tests on the predictive attributes encountered on the path from the root to the leaf node associated with that region. The frequencies of a given bin are the class counts of the instances that belong to that bin. Different decision trees result in different partitions; in Section 4.2 we show how to derive the overlay partition of two decision trees and how to estimate its statistics, depending on whether we have access to the original raw datasets or not. Based on the overlay partition, we define various similarity measures for decision trees and classification datasets (Section 4.3).

4.1 Decision tree partitions

Each region r ∈ R_DT is characterized by a structure and a measure component that are directly derived from the decision tree. The structure component of the region is defined as the conjunction of the test conditions on the attributes along the corresponding tree path from the root to the leaf node associated with that region:

    r.s := { t(A_i), i = 1...m }

Test conditions are usually numeric and can be expressed in the form t(A) := min_A(r) ≤ A ≤ max_A(r), where min_A(r) and max_A(r) denote the min and max values of attribute A in region r. Let us also define the length of a test condition on A as |t(A)| := max_A(r) − min_A(r), and the length of the domain of A as |dom(A)| := max_A − min_A. Note here that if an attribute A is not included in the structure component r.s of a leaf node, i.e., no test on that attribute has been included in the path from the root to the leaf node during the training phase, then the test condition on that attribute is t(A) := min_A ≤ A ≤ max_A, i.e., A can take any value from its domain. Thus, the structure component of a region contains test conditions over all (i.e., m) predictive attributes of the problem.
The measure component of a region is defined as the number of training set instances that fall into this region for each class, and it depends on the training set D:

(4.1)    r.m_D := [n_c1, n_c2, ..., n_ck]

where n_ci, i = 1...k, is the number of instances that fall into region r and belong to class c_i. The size of the measure component is |r.m_D| = Σ_{1≤i≤k} n_ci, and the class r.cl assigned to the region r is given by r.cl = argmax_{c_i} r.m_D.

The probability of a region represents the probability that some instance of the problem will follow the corresponding DT path. Formally, this probability is given by P(r) = ∫_r P(A) dA, where P(A) is the probability density function of the instances. However, since we do not have access to the exact form of P(A), we have to use the data to estimate it. More specifically, if we consider the dataset D used for the construction of the DT, we can make a dataset-dependent estimation of P(r) as follows¹:

(4.2)    P̂_D(r) = |r.m_D| / N_D

where N_D is the number of instances in D. This estimation is simply the percentage of the training set instances that fall in region r. The vector

(4.3)    P̂_D(A) = [P̂_D(r_i) | r_i ∈ R_DT]

is an approximation of P(A) from the dataset D.

¹ We denote the actual distribution by P and its estimation by P̂.
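To make the partition formalism concrete, the sketch below (hypothetical Python; the paper itself prescribes no implementation) represents a leaf region by its structure component, one [min, max] interval per predictive attribute, and its measure component, the per-class counts, and derives from them the quantities of Equations 4.1-4.3.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Region:
    """A leaf region r of a decision tree over m numeric attributes."""
    bounds: Dict[str, Tuple[float, float]]  # structure component r.s: attribute -> (min, max)
    class_counts: List[int]                 # measure component r.m_D: one count per class

    def size(self) -> int:
        """|r.m_D|: number of training instances that fall into the region."""
        return sum(self.class_counts)

    def label(self) -> int:
        """r.cl: the majority class assigned to the region."""
        return max(range(len(self.class_counts)), key=lambda c: self.class_counts[c])


def p_hat(regions: List[Region]) -> List[float]:
    """P_D(A) under R_DT: fraction of training instances per region (Equations 4.2, 4.3)."""
    n_d = sum(r.size() for r in regions)
    return [r.size() / n_d for r in regions]


def p_class_given_region(r: Region) -> List[float]:
    """P_D(C | r): class distribution within a single region."""
    return [c / r.size() for c in r.class_counts]


# A toy tree over A1 in [0, 10] and A2 in [0, 1], with two leaves and two classes.
r1 = Region({"A1": (0.0, 4.0), "A2": (0.0, 1.0)}, [30, 10])
r2 = Region({"A1": (4.0, 10.0), "A2": (0.0, 1.0)}, [5, 55])
print(p_hat([r1, r2]))                       # [0.4, 0.6]
print(r1.label(), p_class_given_region(r2))  # 0 [0.083..., 0.916...]
```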

Besides P(A), we can also approximate P(A, C) by exploiting the measure components of the regions, which describe the distribution of the training set instances over the different problem classes. The matrix

(4.4)    P̂_D(A, C) = [ r_i.m_D / N_D | r_i ∈ R_DT ]

in which each row corresponds to a region r_i ∈ R_DT and each column to a class c_j ∈ d(C), is an approximation of the joint distribution P(A, C) from the dataset D. Furthermore, the measure component can provide us with an estimation of the conditional probability of the classes given the region r_i:

    P̂_D(C | r_i) = r_i.m_D / |r_i.m_D|

Then, the estimate of the attribute-conditional distribution of the class is the matrix

    P̂_D(C | A) = [P̂_D(C | r_i) | r_i ∈ R_DT]

where each cell of the matrix corresponds to the probability of observing a specific class within a specific region.

4.2 Decision tree partitions overlay

Let R_DT1 and R_DT2 be the partitions defined by the decision trees DT_1 and DT_2, respectively. Overlaying the two partitions, a finer partition R_{DT1×DT2} arises, where each region r in it is the result of overlaying some region r_i ∈ R_DT1 with some region r_j ∈ R_DT2, that is, r = r_i ∩ r_j. The goal is to estimate the region probability P(r) and the region-class probability P(r, c) for each region r ∈ R_{DT1×DT2} and each class c ∈ d(C). To this end, we rely on the observation that each region r in the overlay is also a hyper-rectangle and thus it can be described through a structure and a measure component.

4.2.1 Structure component of the overlay regions. The structure component of the overlay region r_i ∩ r_j is easily defined through the intersections of the DT regions that participate in its formation:

    (r_i ∩ r_j).s := { t(A_l), l = 1...m }
    t(A) := min_A(r_i ∩ r_j) ≤ A ≤ max_A(r_i ∩ r_j)
    min_A(r_i ∩ r_j) := max(min_A(r_i), min_A(r_j))
    max_A(r_i ∩ r_j) := min(max_A(r_i), max_A(r_j))

If max_A(r_i ∩ r_j) < min_A(r_i ∩ r_j) for some attribute A, the overlay region r_i ∩ r_j is not defined, since the regions are disjoint.

4.2.2 Measure component of the overlay regions. The estimation of the measure component of r_i ∩ r_j is dataset dependent; the obvious choices for the dataset are D_1, D_2 and D_1 ∪ D_2. However, even if we no longer have access to any of these datasets, we can still estimate the measure component of the overlay regions based on the measure components of the regions of the original partitions R_DT1 and R_DT2.

Data-dependent probability estimation. If we have access to the original raw datasets, we can get the exact measure component of the overlay regions by simply projecting each dataset D ∈ {D_1, D_2, D_1 ∪ D_2} on R_{DT1×DT2}. That is:

(4.5)    (r_i ∩ r_j).m_D = [n_c1, ..., n_ck],  where n_ci = |{(x, c_i) : x ∈ r_i ∩ r_j, x ∈ D}|

which simply gives us, for each of the problem classes, the number of instances of the dataset D that fall within the r_i ∩ r_j region.

Pattern-dependent probability estimation. Even if we do not have access to the original raw datasets, we can still make an estimation of the expected measure of each region r_i ∩ r_j ∈ R_{DT1×DT2} using the measure components of the original regions r_i ∈ R_DT1 and r_j ∈ R_DT2 (which are derived directly from the DTs). The expected measure of r_i ∩ r_j according to D_1 is:

(4.6)    (r_i ∩ r_j).m_D1 = r_i.m_D1 · V(r_i ∩ r_j) / V(r_i)

where the term V(r_i ∩ r_j) / V(r_i) represents the relative volume of the intersection region r_i ∩ r_j with respect to the volume of the region r_i. Since the regions established by a decision tree are axis-parallel hyper-rectangles, it holds that

    V(r) = Π_i |t(A_i)| / |dom(A_i)|

where the term |t(A_i)| / |dom(A_i)| represents the relative importance of attribute A_i in region r. If we assume a uniform distribution U(A) of the instances over the attribute space, then V(r) = P(r).
In Equation 4.6, though, we adopt a middle-ground assumption, namely that the D_1 instances are uniformly distributed within the region r_i of R_DT1, instead of being uniformly distributed over the whole attribute space. Following the rationale of Equation 4.6, the expected measure of r_i ∩ r_j according to D_2 is:

(4.7)    (r_i ∩ r_j).m_D2 = r_j.m_D2 · V(r_i ∩ r_j) / V(r_j)

Finally, if we assume that the two datasets come from the same distribution P(A), we can get the expected measure of r_i ∩ r_j according to the union D_1 ∪ D_2:

    (r_i ∩ r_j).m_{D1 ∪ D2} = (r_i ∩ r_j).m_D1 + (r_i ∩ r_j).m_D2
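As an illustration of the overlay construction (Section 4.2.1) and of the pattern-dependent estimation of Equations 4.6-4.7, the sketch below (hypothetical Python; attribute names, bounds and counts are made up) intersects two axis-parallel regions and scales the class counts of a region by the relative volume of the intersection, under the uniform within-region assumption.

```python
from typing import Dict, List, Optional, Tuple

Bounds = Dict[str, Tuple[float, float]]  # structure component: attribute -> (min, max)


def intersect(r_i: Bounds, r_j: Bounds) -> Optional[Bounds]:
    """Structure component of the overlay region r_i ∩ r_j; None if the regions are disjoint."""
    out = {}
    for a in r_i:
        lo = max(r_i[a][0], r_j[a][0])
        hi = min(r_i[a][1], r_j[a][1])
        if hi < lo:
            return None
        out[a] = (lo, hi)
    return out


def relative_volume(sub: Bounds, sup: Bounds) -> float:
    """V(sub) / V(sup) for axis-parallel hyper-rectangles (a product over the attributes)."""
    v = 1.0
    for a in sup:
        v *= (sub[a][1] - sub[a][0]) / (sup[a][1] - sup[a][0])
    return v


def expected_measure(counts_i: List[float], r_i: Bounds, overlay: Bounds) -> List[float]:
    """Equation 4.6: class counts of r_i scaled by the relative volume of the overlay region,
    assuming instances are uniformly distributed within r_i."""
    frac = relative_volume(overlay, r_i)
    return [c * frac for c in counts_i]


# Two leaf regions coming from different trees over the attributes A1 and A2.
r_i = {"A1": (0.0, 4.0), "A2": (0.0, 1.0)}
r_j = {"A1": (2.0, 10.0), "A2": (0.0, 0.5)}
ov = intersect(r_i, r_j)                        # {'A1': (2.0, 4.0), 'A2': (0.0, 0.5)}
print(ov, expected_measure([30, 10], r_i, ov))  # counts scaled by (2/4) * (0.5/1) = 0.25
```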

So far, we have shown how we can estimate the probabilities of the overlay regions depending on whether we have access to the original raw datasets or not. As in the single decision tree partition case (cf. Section 4.1), we can use these estimations to approximate the distributions P(A), P(A, C) and P(C | A). Depending on which dataset D ∈ {D_1, D_2, D_1 ∪ D_2} we use to calculate the measures of the overlay regions, we get the corresponding estimations P̂_D(A), P̂_D(A, C) and P̂_D(C | A) under the R_{DT1×DT2} partition. To distinguish between the case where the measure components are computed by accessing the original raw datasets (Equation 4.5) and the case where they are computed under the uniform within-region distribution assumption (Equations 4.6, 4.7), we use the superscripts a and U, respectively.

4.3 Similarity measures on decision trees and datasets

In the previous section we described methods for the estimation of P̂_D(A), P̂_D(A, C) and P̂_D(C | A) under the R_{DT1×DT2} partition and for the different datasets D ∈ {D_1, D_2, D_1 ∪ D_2}. These estimations can be used to compute similarities between either decision trees or datasets. Before we proceed with the definition of the actual similarity measures, we first provide a similarity function between histograms, since all our estimations come in the form of histograms. Let P, Q be the probability density estimations of a random variable X from two different populations, in the form of histograms. We assume that P and Q are defined over the same bins. The affinity coefficient between P and Q is given by:

    s(P, Q) = Σ_i √(P_i · Q_i)

Based on the affinity coefficient and the different overlay partition statistics, we can now define a number of similarity measures between decision trees and datasets.

Case a: We can measure the similarity of two datasets D_1, D_2 with respect to their attribute space probability distributions P_D1(A), P_D2(A) by directly computing their affinity coefficient:

(4.8)    s(P̂_D1(A), P̂_D2(A))

This similarity measure can be used to determine whether the two datasets were generated from the same distribution P(A). The estimations P̂_Di(A), i ∈ {1, 2}, can be either P̂^a_Di(A) or P̂^U_Di(A), depending on whether raw data access is allowed or not.

Case b: We can measure the similarity of two decision trees DT_1, DT_2 with respect to their predictions. This is a measure of their semantic similarity, i.e., how similar the concepts described by the DTs are, and corresponds to the percentage of times that they produce the same predictions on instances drawn from a given attribute space distribution. We first define the vector

    I(C | A) = [I(r_i.cl, r_j.cl) | r_i ∩ r_j ∈ R_{DT1×DT2}]

which indicates whether the two DTs agree or disagree in their predictions over the regions of the overlay partition R_{DT1×DT2}. I(r_i.cl, r_j.cl) returns 1 if the predictions of the two DTs regarding the region r_i ∩ r_j are the same, i.e., r_i.cl = r_j.cl; otherwise it returns 0. The inner product²

(4.9)    S(DT_1, DT_2) = I(C | A) · P(A)′

computes the similarity of the predictions of DT_1, DT_2 under the P(A) distribution. The similarity score equals the sum of the probabilities of the r_i ∩ r_j regions for which the trees agree in their predictions. One issue that arises here is which estimation of P(A) we should employ. Possible choices include:

- The uniform distribution U(A). In this case the agreement is examined over all possible input worlds. Under this assumption, the probability of a region r_i ∩ r_j is given by its hyper-volume.
  Thus, the similarity between two DTs equals the total volume of the regions in which the two decision trees agree in their predictions. In this case, Equation 4.9 gives, in closed form, the semantic similarity between the two decision trees as it was defined by Turney [5]. Note, however, that in contrast to [5] we do not require for this estimation the generation of an artificial test set drawn from U(A).

- A dataset-dependent distribution P_D(A), where D can be one of the D_1, D_2 and D_1 ∪ D_2 datasets. In this case, instances are assumed to follow the distribution of the dataset D ∈ {D_1, D_2, D_1 ∪ D_2}. The union D_1 ∪ D_2 is the most appropriate choice if the trees are generated from datasets following the same distribution and we are interested in evaluating their similarity under that distribution.

- Finally, P(A) might be a distribution that is different from the distributions that govern the training sets.

² We denote by X′ the transpose of matrix X.
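The computation behind Equation 4.9 reduces to summing, under the chosen P(A), the probabilities of the overlay regions on which the two trees predict the same class. A minimal sketch (hypothetical Python; the overlay regions, their labels and their probabilities are assumed to have been computed as in the earlier sketches) follows.

```python
from typing import List, Tuple


def dt_semantic_similarity(overlay: List[Tuple[int, int, float]]) -> float:
    """Equation 4.9: sum of region probabilities over the overlay regions where the trees agree.

    Each overlay region r_i ∩ r_j is given as (label_i, label_j, p_region), where label_i = r_i.cl,
    label_j = r_j.cl and p_region is P(r_i ∩ r_j) under the chosen P(A) (relative hyper-volumes
    for U(A), dataset- or pattern-dependent estimates otherwise).
    """
    return sum(p for label_i, label_j, p in overlay if label_i == label_j)


# Three overlay regions; the two trees agree on the first and the third.
overlay = [(0, 0, 0.25), (0, 1, 0.40), (1, 1, 0.35)]
print(dt_semantic_similarity(overlay))  # 0.6
```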

Case c: We can also measure the similarity of two datasets with respect to the attribute-conditional probability distribution of the class attribute, P(C | A), that the decision trees which were induced from these datasets impose over the attribute space. We first define the vector

    S(C | A) = [s(P̂_D1(C | A)[r_i, :], P̂_D2(C | A)[r_j, :]) | r_i ∩ r_j ∈ R_{DT1×DT2}]

S(C | A) has the same structure as I(C | A), but the 0/1 similarity function I(., .) has been replaced by s(., .), which computes the similarity of the attribute-conditional class distributions of the r_i ∩ r_j region in the D_1 and D_2 datasets. The inner product

(4.10)    S(D_1, D_2) = S(C | A) · P(A)′

provides a measure of the similarity of the two datasets with respect to their attribute-conditional class distributions under an attribute space that follows the P(A) distribution. Note here that this measure is similar to the measure that [4] use to rank the changing regions between two datasets. In fact, their approach is equivalent to introducing a distance measure of the form

    D(D_1, D_2) = D(C | A) · P(A)′

where D(C | A) has the same structure as S(C | A) but the similarity function is replaced by the Euclidean distance, and P(A) is approximated by

    P(A) = (1/2) (P̂_D1(A) + P̂_D2(A))

However, [4] do not go as far as to define D(D_1, D_2). They rather define the element-wise product of D(C | A) and P(A), i.e., the vector consisting of the pairwise products of the coordinates of the two vectors, and use that in order to rank regions according to their level of change from one dataset to the other.

Case d: Finally, we can measure the similarity of the joint attribute-class probability distributions of the two datasets, P_D1(A, C), P_D2(A, C), by simply applying the affinity coefficient:

(4.11)    s(P̂_D1(A, C), P̂_D2(A, C))

where P̂_Di(A, C) is the estimation of P_Di(A, C) under the overlay partition. Note here that if the two datasets came from the same P(A) distribution, then it can easily be shown that this measure is equivalent to the S(D_1, D_2) given in Equation 4.10. In fact, this is the approach that was followed by FOCUS [2] for measuring dataset deviation. The difference lies in the fact that, instead of the affinity coefficient, FOCUS employs a difference function f (e.g., absolute or relative difference) to compute the measure similarity within each region and an aggregation function g (e.g., sum or max) to aggregate the scores of the overlay regions into an overall score.

In this section we presented a general framework for similarity estimation between either decision trees or datasets. Under this framework, we can estimate the similarities of classification datasets with respect to a number of probability distributions: i) the attribute space distribution P(A) (Equation 4.8), ii) the attribute-conditional class distribution P(C | A) (Equation 4.10) and iii) the joint attribute-class distribution P(A, C) (Equation 4.11). We can also use this framework in order to estimate the semantic similarity of decision trees (Equation 4.9) under different assumptions on the attribute space probability distribution; it is this direction that we explore and evaluate in more detail in the next section.
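The dataset-level measures of Cases a, c and d all reduce to affinity coefficients between histogram estimates defined over the overlay partition. The sketch below (hypothetical Python; the per-region estimates are assumed to be given) computes the affinity coefficient and the conditional-class similarity of Equation 4.10.

```python
from math import sqrt
from typing import List


def affinity(p: List[float], q: List[float]) -> float:
    """Affinity coefficient s(P, Q) = sum_i sqrt(P_i * Q_i) for histograms over the same bins."""
    return sum(sqrt(pi * qi) for pi, qi in zip(p, q))


def conditional_class_similarity(p1_class: List[List[float]],
                                 p2_class: List[List[float]],
                                 p_regions: List[float]) -> float:
    """Equation 4.10: S(D1, D2) = S(C|A) . P(A)'.

    p1_class[r] and p2_class[r] are the class distributions P_D1(C | r) and P_D2(C | r) of the
    overlay region r in the two datasets; p_regions[r] is P(r) under the chosen attribute
    space distribution.
    """
    return sum(affinity(c1, c2) * pr
               for c1, c2, pr in zip(p1_class, p2_class, p_regions))


# Cases a/d: affinity of two region (or flattened region-class) histograms.
print(affinity([0.25, 0.40, 0.35], [0.30, 0.30, 0.40]))
# Case c: within-region class-distribution agreement, weighted by the region probabilities.
print(conditional_class_similarity([[0.8, 0.2], [0.1, 0.9]],
                                    [[0.7, 0.3], [0.2, 0.8]],
                                    [0.5, 0.5]))
```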
5 Evaluation of the proposed similarity measure on decision trees

The semantic similarity of any two classification models M_1, M_2 is defined as the fraction of times that the two models produce the same predictions over instances generated from a given attribute space probability distribution P(A). As already mentioned, Turney [5] defined a semantic similarity measure for classification models, called agreement, as the probability that they will produce the same predictions over all possible instances drawn from the uniform distribution on the attribute space, U(A). Turney estimates the agreement between two classification models empirically, by applying both of them on a test set of instances drawn from the U(A) distribution and computing the percentage of times that they produce the same predictions. The argument for employing U(A), instead of the distribution P(A) that generated the data, was that the agreement of two concepts should be examined in all possible input worlds. We believe that in a real-world application what is more important is not the similarity of the DTs in all possible worlds, but rather their similarity in the world in which the data exist. So, unlike [5], in order to estimate the semantic similarity we draw the test dataset D_H from P(A), the distribution that governs the attribute space. We denote by S_H(DT_1, DT_2) the semantic similarity between DT_1 and DT_2; this similarity is empirically

estimated on the hold-out dataset D_H, by applying the two decision trees on D_H and computing the fraction of times that they produce the same predictions. S_H(DT_1, DT_2) provides the ground truth to which we will compare the proposed DT semantic similarity measures.

5.1 Datasets

We experimented with six different datasets, a short description of which is given in Table 1. The different mfeat datasets are versions of the same pattern recognition problem, in which the goal is to classify handwritten numerals. The versions correspond to different features used to describe the numerals: in mfeat-factors the attributes are profile correlations, in mfeat-zernike Zernike moments, in mfeat-karhunen Karhunen-Loève coefficients and in mfeat-fourier Fourier coefficients of the character shapes [3]. Waveform-5000 is an artificial dataset where classes correspond to different types of waves [1]. In the segment-challenge dataset [6], the features are high-level descriptors of regions of images and the goal is to classify each region into the correct class, e.g., sky or grass.

Table 1: Description of the datasets (number of instances, attributes and classes for mfeat-factors, mfeat-fourier, mfeat-karhunen, mfeat-zernike, segment-challenge and waveform-5000).

5.2 Experimental setup

We need a systematic way to generate decision trees that exhibit varying degrees of semantic similarity. To this end, we randomly divide a given dataset D into two parts: a training set D_T used during the model construction phase, and a test set D_H used as the hold-out set for the computation of S_H (|D_H| = (1/3)|D|). Then, we create random sub-samples of D_T of size p (p = 5% ... 95%) with a step of 5%. On each sub-sample D_p a decision tree DT_p is trained and compared to the decision tree DT_100 that was created on the complete training set. Then, we compute the semantic similarity between the complete DT and the sampled one, i.e., S_H(DT_p, DT_100), on the hold-out set D_H.

First of all, we should verify that the procedure we employed for the generation of the different decision trees DT_p indeed results in trees that exhibit varying levels of semantic similarity with respect to DT_100. We expect S_H(DT_p, DT_100) to increase as p increases and approaches 100%, since the training set D_p used in the construction of DT_p becomes more and more similar to the training set D_100 used in the construction of DT_100. This is indeed the case, as one can see in Figure 1, where we plot S_H as a function of the sampling size p; there is a smooth increase in the values of S_H as p increases towards 100%.

Figure 1: Evolution of S_H(DT_p, DT_100) with the sampling rate p (one curve per dataset: mfeat-factors, mfeat-fourier, mfeat-karhunen, mfeat-zernike, segment-challenge, waveform-5000).
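The generation procedure of Section 5.2 can be sketched as follows (hypothetical Python using scikit-learn's DecisionTreeClassifier and a synthetic dataset purely for illustration; the paper does not prescribe any particular learner or implementation). The empirical similarity S_H(DT_p, DT_100) is simply the fraction of hold-out instances on which the two trees agree.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for one of the paper's datasets.
X, y = make_classification(n_samples=3000, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)

# Split D into the training part D_T and the hold-out part D_H (|D_H| = 1/3 |D|).
X_T, X_H, y_T, y_H = train_test_split(X, y, test_size=1/3, random_state=0)

# Reference tree DT_100, trained on the complete training set D_T.
dt_100 = DecisionTreeClassifier(random_state=0).fit(X_T, y_T)

rng = np.random.default_rng(0)
for p in range(5, 100, 5):                       # sampling rates 5%, 10%, ..., 95%
    n_p = int(len(X_T) * p / 100)
    idx = rng.choice(len(X_T), size=n_p, replace=False)
    dt_p = DecisionTreeClassifier(random_state=0).fit(X_T[idx], y_T[idx])

    # Empirical semantic similarity S_H(DT_p, DT_100): agreement rate on the hold-out set D_H.
    s_h = np.mean(dt_p.predict(X_H) == dt_100.predict(X_H))
    print(f"p = {p:2d}%  S_H(DT_p, DT_100) = {s_h:.3f}")
```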
5.3 Evaluating semantic similarity

The goal of the experimental evaluation that we present in this section is to examine how the different semantic similarity measures that we propose correlate with S_H. The decision tree semantic similarity measure S(DT_1, DT_2) that we propose (Equation 4.9) depends on the estimation of the P(A) distribution that governs the attribute space. In fact, the computation of similarity makes sense for a given world, in which a specific distribution P(A) holds for the attribute space. Then, S(DT_1, DT_2) is simply the sum of the probability densities, under the chosen P(A), of the r_i ∩ r_j regions in which the two decision trees agree. As already mentioned, under the uniform distribution assumption this sum equals the sum of the hyper-volumes of these regions. Moreover, under that assumption, S(DT_1, DT_2) provides the semantic similarity of Turney [5] without having to apply the learned models on a hold-out set. We will not further examine the uniform assumption as a possible estimation for P(A). Instead, we will experiment with three different instantiations of S(DT_1, DT_2) that differ with respect to the estimation of P(A) they employ. In particular, we will investigate the following estimations of P(A):

- P^U_{D1∪D2}: the estimation of P(A) that we get when the measure components are computed under the uniform within-region distribution assumption, as in Equations 4.6, 4.7.

8 S (DT p,dt 100 ) S (DT p,dt 100 ) S (DT p,dt 100 ) Figure 2: Evolution of the decision trees semntic similrity mesures with the smpling rte (first column) nd with S (second column) for dtsets: mfet-fctors (top), mfet-krhunen (middle), mfet-zernike (bottom)

- P^a_{D1∪D2}: the estimation of P(A) that we get when the measure components are computed from the direct application of the overlay partition R_{DTp×DT100} on the D_p and D_100 datasets.

- P^a_{D_H}: the estimation of P(A) that we get from the direct application of the overlay partition R_{DTp×DT100} on the hold-out set D_H.

Each of these estimations P^Y_X of P(A) results in a different instantiation of S(DT_1, DT_2), which we denote by S^Y_X(DT_1, DT_2). We should note that the order in which the different P^Y_X are listed reflects an increasing amount of knowledge about the P(A) distribution that governs the computation of the semantic similarity S_H, which we use in order to evaluate the proposed similarity measures. P^U_{D1∪D2} assumes the least knowledge about P(A): to estimate the measure components of the overlay tree, it relies only on the analysis of the structures of the respective decision trees, under the assumption of a uniform within-region distribution. P^a_{D1∪D2} requires querying D_1 and D_2 in order to estimate the measure components of the overlay tree; as a result, its estimation of P(A) is more precise than the one provided by P^U_{D1∪D2}. Finally, P^a_{D_H} has complete knowledge of P(A), as this knowledge underlies the D_H dataset, since we derive it by querying D_H. As a result, S^a_{D_H}(DT_1, DT_2) will correlate perfectly with S_H. In that sense, S^a_{D_H} represents the ideal behavior that we get when we have knowledge of the true P(A).

For each S^Y_X(DT_p, DT_100) we show how its value varies with respect to the sample size p in the first column of Figures 2, 3. All the measures exhibit a similar pattern: similarity increases as p increases. More particularly, S^a_{D_H} and S^a_{D1∪D2} have a very regular behavior, with an almost steady increase in values and small fluctuations. In the case of the S^U_{D1∪D2} similarity, the trend is also increasing, but here the fluctuations can be considerably larger, as happens in the mfeat-zernike, mfeat-factors, segment-challenge and mfeat-fourier datasets. S^a_{D1∪D2} constantly overestimates decision tree similarity compared to S^a_{D_H}, while S^U_{D1∪D2} considerably underestimates it; recall here that S^a_{D_H} reflects the ideal behavior.

In the second column of Figures 2, 3, we see how the three different versions of S^Y_X(DT_p, DT_100) correlate with the actual evaluation measure S_H(DT_p, DT_100). As expected, S^a_{D_H} correlates perfectly, since its estimation of P(A) is taken from the D_H dataset on which S_H(DT_p, DT_100) is computed. Consistently with the above, S^a_{D1∪D2} constantly overestimates S_H(DT_p, DT_100), while S^U_{D1∪D2} considerably underestimates it. The performance of S^a_{D1∪D2} is quite close to the ideal performance of S^a_{D_H}, with the most notable cases being segment-challenge and mfeat-factors, while the highest discrepancy appears in the case of mfeat-karhunen. Note here that the datasets D_p, D_T and D_H are all drawn from the same P(A) distribution. The discrepancy between the behaviors of S^a_{D_H} and S^a_{D1∪D2} can be explained by the inaccuracy of the sampling procedure. As the number of instances increases, the behaviors of S^a_{D_H} and S^a_{D1∪D2} will converge, since the estimations of P(A) that the two measures employ will also converge. Alternatively, if we used repeated sampling over the D_H, D_T and D_p datasets and subsequently averaged over the different samples, the two measures would also converge.
On the other hand, the behavior of S^U_{D1∪D2} will be similar to that of S^a_{D_H} only to the extent that the assumption of a uniform within-region distribution is a valid assumption for the P(A) governing D_H; nevertheless, as is apparent for the datasets we have considered here, this is far from being a valid assumption.

Quantitative analysis of the measures. In order to quantify the behavior of each of the S^Y_X(DT_p, DT_100) measures, we computed their Pearson correlation coefficient with S_H(DT_p, DT_100). The results are depicted in Table 2, where it can be seen that S^a_{D1∪D2} exhibits a very strong correlation with S_H(DT_p, DT_100). For most of the datasets the correlation is higher than 0.9, with the notable exception of waveform-5000, for which a low correlation coefficient is recorded. S^U_{D1∪D2} also has a strong correlation with S_H(DT_p, DT_100), although not as strong as S^a_{D1∪D2}, again with the remarkable exception of waveform-5000, for which it exhibits its highest correlation value.

The Pearson correlation coefficient is an estimate of the linear correlation of two variables; nevertheless, it does not indicate how good a predictor one variable is of the other. This is especially true in our case, since the pattern of linear correlation of any given S^Y_X(DT_p, DT_100) with S_H(DT_p, DT_100) changes from dataset to dataset, as is obvious from Figures 2, 3. In order to estimate the predictive value of the various S^Y_X(DT_p, DT_100) with respect to S_H(DT_p, DT_100), we compute their Mean Absolute Deviation (MAD). The MAD of two variables a and b for which we have N paired observations is given by:

    MAD(a, b) = (1/N) Σ_i |a_i − b_i|
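For completeness, the two summary statistics used in this analysis can be computed as in the sketch below (hypothetical Python with NumPy), where s_hat holds the values of one instantiation S^Y_X(DT_p, DT_100) over the sampling rates and s_emp the corresponding empirical values S_H(DT_p, DT_100); the toy series are made up.

```python
import numpy as np


def pearson(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between two paired series."""
    return float(np.corrcoef(x, y)[0, 1])


def mad(x: np.ndarray, y: np.ndarray) -> float:
    """Mean absolute deviation MAD(x, y) = (1/N) * sum_i |x_i - y_i|."""
    return float(np.mean(np.abs(x - y)))


# Toy series over the sampling rates p = 5%, ..., 95%.
p = np.arange(5, 100, 5)
s_hat = 0.60 + 0.30 * p / 100   # one of the proposed similarity measures
s_emp = 0.55 + 0.35 * p / 100   # the empirical similarity S_H(DT_p, DT_100)
print(pearson(s_hat, s_emp), mad(s_hat, s_emp))
```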

10 S (DT p,dt 100 ) S (DT p,dt 100 ) S (DT p,dt 100 ) Figure 3: Evolution of the decision trees semntic similrity mesures with the smpling rte (first column) nd with S (second column) for dtsets: mfet-fourier (top), segment-chllenge (middle), wveform-5000 (bottom)

Table 2: Correlation coefficient of S^a_{D1∪D2}(DT_p, DT_100) and S^U_{D1∪D2}(DT_p, DT_100) with S_H(DT_p, DT_100), per dataset.

Table 3: Mean absolute deviation of S^a_{D1∪D2}(DT_p, DT_100) and S^U_{D1∪D2}(DT_p, DT_100) from S_H(DT_p, DT_100), per dataset and on average.

The MAD results are given in Table 3. These results indicate the good predictive performance of S^a_{D1∪D2}: its average error (MAD) in predicting S_H(DT_p, DT_100) is 0.1. The performance of S^U_{D1∪D2} is considerably worse; its average MAD is roughly 0.3.

The goal of the current section was to compare and evaluate a number of different instantiations of the decision tree semantic similarity measure. The different instantiations are the result of different assumptions about, or different ways of estimating, the attribute space distribution under which the semantic similarity computation takes place. In fact, the semantic similarity computation of two decision trees makes sense if we can assume a specific probability distribution P(A) governing the attribute space. The overlaid tree provides a partition of the full attribute space; the agreement or disagreement of the two decision trees in a given segment of that partition is more or less important depending on the density of that region under P(A). If, for example, the two decision trees disagree on a given region, this is not going to affect their similarity, even if the volume of the region is large, as long as the probability density of that region under P(A) is zero. Alternatively, if we do not want to assume a specific attribute space distribution and we want to compute similarity under all possible worlds, we should make the assumption of a uniform distribution on the attribute space, a case that is also covered by our framework.

In order to evaluate our semantic similarity measures, we used the semantic similarity empirically estimated on a separate hold-out set. The performance of the different instantiations of the semantic similarity measure depends on how different the estimation of P(A) used in them is from the P(A) governing the hold-out set, on which the semantic similarity was computed. In fact, the choice of the appropriate P(A) should be made based on knowledge of the application domain. If we know that our learning problem is governed by a specific P(A), then it is that P(A) that should be plugged into the decision tree similarity measure. Alternatively, if no such knowledge exists, we can estimate P(A) from the datasets from which the decision trees were constructed, as was done in the S^a_{D1∪D2} semantic similarity measure.

6 Conclusions and Future Work

In this paper we presented a general framework for the estimation of similarities between decision trees and datasets, within a classification problem setting. We employ the decision tree models in order to compute either their semantic similarity or the similarity of the datasets that were used for their induction. The decision tree similarity is computed in terms of the agreement of the class predictions they return over the attribute space, and it corresponds to the decision tree semantic similarity. The computation of dataset similarity can be done on the basis of their attribute space probability distribution P(A), their joint attribute-class probability distribution P(A, C), or their attribute-conditional class probability distribution P(C | A). All of the above comprise special cases of our framework. Previous efforts have focused on comparing datasets, either with respect to P(A, C) [2] or with respect to P(C | A) [4].
In this paper we focus on the estimation of the semantic similarity between decision trees, i.e., the degree to which the decision trees agree in their predictions over the attribute space. To the best of our knowledge, this is the first work towards this aim. The critical point in the computation of the decision tree similarity is the selection of an appropriate attribute space probability distribution P(A) under which the computation will take place. This choice reflects our belief about the real world on which the decision trees would be applied. If no prior knowledge exists, we could simply select the uniform distribution U(A) for P(A), and thus we would examine the decision

tree similarity over all possible input worlds. We experimented with different ways of estimating the attribute space probability distribution P(A), and we compared the resulting instantiations of the decision tree semantic similarity measure with the actual semantic similarity, as established by the application of the decision trees on an independent hold-out set. Depending on the knowledge we have about the P(A) distribution that governs the independent hold-out set, the computed decision tree semantic similarity is a more or less good predictor of the actual semantic similarity. More specifically, when P(A) is computed by querying the actual datasets, the corresponding decision tree similarity is a very good predictor of the true semantic similarity. Actually, we expect the value of S^a_{D1∪D2} to converge to the real value of the semantic similarity as the size of the datasets increases, since the estimated P(A) will converge to the true P(A).

We believe that the greatest contribution of a decision tree semantic similarity measure is the potential that it offers to determine whether observed differences are simply superficial structural differences or whether they reflect real semantic differences in the described concepts, and, moreover, to quantify these differences; this is a problem that plagues decision trees due to their high sensitivity to training dataset changes. The provision of a semantic similarity measure for classification models, here decision trees, allows us to perform a number of standard mining tasks that are based on similarity/distance measures, not on the raw data anymore, but rather on the classification models extracted from these raw data, i.e., meta-mining. For example, using the semantic similarity measure we can cluster decision trees and compute a representative decision tree for each cluster. A typical application of that could be the simplification of ensembles of decision trees, such as the ones produced by boosting, bagging and random forests, where only the prototype decision tree of each cluster is retained.

Another alternative for the simplification of decision tree ensembles, one that does not make use of the semantic similarity measure, is the construction of the overlaid tree from all the component decision trees of the ensemble. Each region of the overlaid tree will be labeled according to the labels of the corresponding regions of the original trees. The overlaid tree will have the same predictive power as the ensemble, since it will make exactly the same predictions; however, its partition will be much finer than the partitions of the original trees, thus having larger complexity than its constituents. Nevertheless, it is possible to simplify the overlaid decision tree by applying standard pruning techniques. The apparent advantage of having a single decision tree, or a small set of decision trees, instead of the full ensemble is the much easier interpretation of the learned model.

The idea of a representative decision tree for a set of decision trees could also be useful in a classification error estimation scenario. Typically, in error estimation a re-sampling technique is applied, resulting in a number of different models; the final result is an estimation of the classification performance of the algorithm and not that of a single tree. The question is which model to choose among the different models that were produced; one solution would be to choose the median model, i.e., the one that has the smallest distance from all the other models.

Finally, there are several extensions and improvements over the basic framework. In the current version we restrict ourselves to continuous predictive attributes; however, categorical attributes should also be considered.
Also, in the case of the pattern-dependent probability estimation (Equations 4.6, 4.7), we adopt the assumption that instances are uniformly distributed within each region. We plan to relax this assumption by employing a density approximation technique such as histograms.

Acknowledgment. I. Ntoutsi is partially supported by the Heracletos program, co-funded by the European Social Fund and national resources (Operational Program for Educational and Vocational Training II - EPEAEK II).

References

[1] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International, 1984.
[2] V. Ganti, J. Gehrke, and R. Ramakrishnan. A framework for measuring changes in data characteristics. In PODS, 1999.
[3] A. Jain, R. Duin, and J. Mao. Statistical pattern recognition: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1):4-37, 2000.
[4] I. Pekerskaya, J. Pei, and K. Wang. Mining changing regions from access-constrained snapshots: A cluster-embedded decision tree approach. Journal of Intelligent Information Systems (Special Issue on Mining Spatio-Temporal Data).
[5] P. Turney. Technical note: Bias and the quantification of stability. Machine Learning, 20:23-33, 1995.
[6] UCI Machine Learning Repository. mlearn/mlsummary.html.
[7] H. Wang and J. Pei. A random method for quantifying changing distributions in data streams. In PKDD, 2005.
[8] K. Zhang and D. Shasha. Simple fast algorithms for the editing distance between trees and related problems. SIAM Journal on Computing, 18:1245-1262, 1989.


More information

Chapters 4 & 5 Integrals & Applications

Chapters 4 & 5 Integrals & Applications Contents Chpters 4 & 5 Integrls & Applictions Motivtion to Chpters 4 & 5 2 Chpter 4 3 Ares nd Distnces 3. VIDEO - Ares Under Functions............................................ 3.2 VIDEO - Applictions

More information

THE EXISTENCE-UNIQUENESS THEOREM FOR FIRST-ORDER DIFFERENTIAL EQUATIONS.

THE EXISTENCE-UNIQUENESS THEOREM FOR FIRST-ORDER DIFFERENTIAL EQUATIONS. THE EXISTENCE-UNIQUENESS THEOREM FOR FIRST-ORDER DIFFERENTIAL EQUATIONS RADON ROSBOROUGH https://intuitiveexplntionscom/picrd-lindelof-theorem/ This document is proof of the existence-uniqueness theorem

More information

Math& 152 Section Integration by Parts

Math& 152 Section Integration by Parts Mth& 5 Section 7. - Integrtion by Prts Integrtion by prts is rule tht trnsforms the integrl of the product of two functions into other (idelly simpler) integrls. Recll from Clculus I tht given two differentible

More information

Continuous Random Variables

Continuous Random Variables STAT/MATH 395 A - PROBABILITY II UW Winter Qurter 217 Néhémy Lim Continuous Rndom Vribles Nottion. The indictor function of set S is rel-vlued function defined by : { 1 if x S 1 S (x) if x S Suppose tht

More information

Bayesian Networks: Approximate Inference

Bayesian Networks: Approximate Inference pproches to inference yesin Networks: pproximte Inference xct inference Vrillimintion Join tree lgorithm pproximte inference Simplify the structure of the network to mkxct inferencfficient (vritionl methods,

More information

The steps of the hypothesis test

The steps of the hypothesis test ttisticl Methods I (EXT 7005) Pge 78 Mosquito species Time of dy A B C Mid morning 0.0088 5.4900 5.5000 Mid Afternoon.3400 0.0300 0.8700 Dusk 0.600 5.400 3.000 The Chi squre test sttistic is the sum of

More information

MIXED MODELS (Sections ) I) In the unrestricted model, interactions are treated as in the random effects model:

MIXED MODELS (Sections ) I) In the unrestricted model, interactions are treated as in the random effects model: 1 2 MIXED MODELS (Sections 17.7 17.8) Exmple: Suppose tht in the fiber breking strength exmple, the four mchines used were the only ones of interest, but the interest ws over wide rnge of opertors, nd

More information

EFEFCTS OF GROUND MOTION UNCERTAINTY ON PREDICTING THE RESPONSE OF AN EXISTING RC FRAME STRUCTURE

EFEFCTS OF GROUND MOTION UNCERTAINTY ON PREDICTING THE RESPONSE OF AN EXISTING RC FRAME STRUCTURE 13 th World Conference on Erthquke Engineering Vncouver, B.C., Cnd August 1-6, 2004 Pper No. 2007 EFEFCTS OF GROUND MOTION UNCERTAINTY ON PREDICTING THE RESPONSE OF AN EXISTING RC FRAME STRUCTURE Ftemeh

More information

W. We shall do so one by one, starting with I 1, and we shall do it greedily, trying

W. We shall do so one by one, starting with I 1, and we shall do it greedily, trying Vitli covers 1 Definition. A Vitli cover of set E R is set V of closed intervls with positive length so tht, for every δ > 0 nd every x E, there is some I V with λ(i ) < δ nd x I. 2 Lemm (Vitli covering)

More information

A REVIEW OF CALCULUS CONCEPTS FOR JDEP 384H. Thomas Shores Department of Mathematics University of Nebraska Spring 2007

A REVIEW OF CALCULUS CONCEPTS FOR JDEP 384H. Thomas Shores Department of Mathematics University of Nebraska Spring 2007 A REVIEW OF CALCULUS CONCEPTS FOR JDEP 384H Thoms Shores Deprtment of Mthemtics University of Nebrsk Spring 2007 Contents Rtes of Chnge nd Derivtives 1 Dierentils 4 Are nd Integrls 5 Multivrite Clculus

More information

20 MATHEMATICS POLYNOMIALS

20 MATHEMATICS POLYNOMIALS 0 MATHEMATICS POLYNOMIALS.1 Introduction In Clss IX, you hve studied polynomils in one vrible nd their degrees. Recll tht if p(x) is polynomil in x, the highest power of x in p(x) is clled the degree of

More information

Duality # Second iteration for HW problem. Recall our LP example problem we have been working on, in equality form, is given below.

Duality # Second iteration for HW problem. Recall our LP example problem we have been working on, in equality form, is given below. Dulity #. Second itertion for HW problem Recll our LP emple problem we hve been working on, in equlity form, is given below.,,,, 8 m F which, when written in slightly different form, is 8 F Recll tht we

More information

State space systems analysis (continued) Stability. A. Definitions A system is said to be Asymptotically Stable (AS) when it satisfies

State space systems analysis (continued) Stability. A. Definitions A system is said to be Asymptotically Stable (AS) when it satisfies Stte spce systems nlysis (continued) Stbility A. Definitions A system is sid to be Asymptoticlly Stble (AS) when it stisfies ut () = 0, t > 0 lim xt () 0. t A system is AS if nd only if the impulse response

More information

Reinforcement Learning

Reinforcement Learning Reinforcement Lerning Tom Mitchell, Mchine Lerning, chpter 13 Outline Introduction Comprison with inductive lerning Mrkov Decision Processes: the model Optiml policy: The tsk Q Lerning: Q function Algorithm

More information

Math 113 Exam 2 Practice

Math 113 Exam 2 Practice Mth 3 Exm Prctice Februry 8, 03 Exm will cover 7.4, 7.5, 7.7, 7.8, 8.-3 nd 8.5. Plese note tht integrtion skills lerned in erlier sections will still be needed for the mteril in 7.5, 7.8 nd chpter 8. This

More information

Natural examples of rings are the ring of integers, a ring of polynomials in one variable, the ring

Natural examples of rings are the ring of integers, a ring of polynomials in one variable, the ring More generlly, we define ring to be non-empty set R hving two binry opertions (we ll think of these s ddition nd multipliction) which is n Abelin group under + (we ll denote the dditive identity by 0),

More information

Comparison Procedures

Comparison Procedures Comprison Procedures Single Fctor, Between-Subects Cse /8/ Comprison Procedures, One-Fctor ANOVA, Between Subects Two Comprison Strtegies post hoc (fter-the-fct) pproch You re interested in discovering

More information

Riemann Sums and Riemann Integrals

Riemann Sums and Riemann Integrals Riemnn Sums nd Riemnn Integrls Jmes K. Peterson Deprtment of Biologicl Sciences nd Deprtment of Mthemticl Sciences Clemson University August 26, 203 Outline Riemnn Sums Riemnn Integrls Properties Abstrct

More information

Jim Lambers MAT 169 Fall Semester Lecture 4 Notes

Jim Lambers MAT 169 Fall Semester Lecture 4 Notes Jim Lmbers MAT 169 Fll Semester 2009-10 Lecture 4 Notes These notes correspond to Section 8.2 in the text. Series Wht is Series? An infinte series, usully referred to simply s series, is n sum of ll of

More information

2008 Mathematical Methods (CAS) GA 3: Examination 2

2008 Mathematical Methods (CAS) GA 3: Examination 2 Mthemticl Methods (CAS) GA : Exmintion GENERAL COMMENTS There were 406 students who st the Mthemticl Methods (CAS) exmintion in. Mrks rnged from to 79 out of possible score of 80. Student responses showed

More information

A New Grey-rough Set Model Based on Interval-Valued Grey Sets

A New Grey-rough Set Model Based on Interval-Valued Grey Sets Proceedings of the 009 IEEE Interntionl Conference on Systems Mn nd Cybernetics Sn ntonio TX US - October 009 New Grey-rough Set Model sed on Intervl-Vlued Grey Sets Wu Shunxing Deprtment of utomtion Ximen

More information

1B40 Practical Skills

1B40 Practical Skills B40 Prcticl Skills Comining uncertinties from severl quntities error propgtion We usully encounter situtions where the result of n experiment is given in terms of two (or more) quntities. We then need

More information

Riemann Sums and Riemann Integrals

Riemann Sums and Riemann Integrals Riemnn Sums nd Riemnn Integrls Jmes K. Peterson Deprtment of Biologicl Sciences nd Deprtment of Mthemticl Sciences Clemson University August 26, 2013 Outline 1 Riemnn Sums 2 Riemnn Integrls 3 Properties

More information

P 3 (x) = f(0) + f (0)x + f (0) 2. x 2 + f (0) . In the problem set, you are asked to show, in general, the n th order term is a n = f (n) (0)

P 3 (x) = f(0) + f (0)x + f (0) 2. x 2 + f (0) . In the problem set, you are asked to show, in general, the n th order term is a n = f (n) (0) 1 Tylor polynomils In Section 3.5, we discussed how to pproximte function f(x) round point in terms of its first derivtive f (x) evluted t, tht is using the liner pproximtion f() + f ()(x ). We clled this

More information

CBE 291b - Computation And Optimization For Engineers

CBE 291b - Computation And Optimization For Engineers The University of Western Ontrio Fculty of Engineering Science Deprtment of Chemicl nd Biochemicl Engineering CBE 9b - Computtion And Optimiztion For Engineers Mtlb Project Introduction Prof. A. Jutn Jn

More information

Section 6: Area, Volume, and Average Value

Section 6: Area, Volume, and Average Value Chpter The Integrl Applied Clculus Section 6: Are, Volume, nd Averge Vlue Are We hve lredy used integrls to find the re etween the grph of function nd the horizontl xis. Integrls cn lso e used to find

More information

Lecture 21: Order statistics

Lecture 21: Order statistics Lecture : Order sttistics Suppose we hve N mesurements of sclr, x i =, N Tke ll mesurements nd sort them into scending order x x x 3 x N Define the mesured running integrl S N (x) = 0 for x < x = i/n for

More information

Lecture 3 Gaussian Probability Distribution

Lecture 3 Gaussian Probability Distribution Introduction Lecture 3 Gussin Probbility Distribution Gussin probbility distribution is perhps the most used distribution in ll of science. lso clled bell shped curve or norml distribution Unlike the binomil

More information

Improper Integrals, and Differential Equations

Improper Integrals, and Differential Equations Improper Integrls, nd Differentil Equtions October 22, 204 5.3 Improper Integrls Previously, we discussed how integrls correspond to res. More specificlly, we sid tht for function f(x), the region creted

More information

Lecture 20: Numerical Integration III

Lecture 20: Numerical Integration III cs4: introduction to numericl nlysis /8/0 Lecture 0: Numericl Integrtion III Instructor: Professor Amos Ron Scribes: Mrk Cowlishw, Yunpeng Li, Nthnel Fillmore For the lst few lectures we hve discussed

More information

Estimation of Binomial Distribution in the Light of Future Data

Estimation of Binomial Distribution in the Light of Future Data British Journl of Mthemtics & Computer Science 102: 1-7, 2015, Article no.bjmcs.19191 ISSN: 2231-0851 SCIENCEDOMAIN interntionl www.sciencedomin.org Estimtion of Binomil Distribution in the Light of Future

More information

Credibility Hypothesis Testing of Fuzzy Triangular Distributions

Credibility Hypothesis Testing of Fuzzy Triangular Distributions 666663 Journl of Uncertin Systems Vol.9, No., pp.6-74, 5 Online t: www.jus.org.uk Credibility Hypothesis Testing of Fuzzy Tringulr Distributions S. Smpth, B. Rmy Received April 3; Revised 4 April 4 Abstrct

More information

ECO 317 Economics of Uncertainty Fall Term 2007 Notes for lectures 4. Stochastic Dominance

ECO 317 Economics of Uncertainty Fall Term 2007 Notes for lectures 4. Stochastic Dominance Generl structure ECO 37 Economics of Uncertinty Fll Term 007 Notes for lectures 4. Stochstic Dominnce Here we suppose tht the consequences re welth mounts denoted by W, which cn tke on ny vlue between

More information

7.2 The Definite Integral

7.2 The Definite Integral 7.2 The Definite Integrl the definite integrl In the previous section, it ws found tht if function f is continuous nd nonnegtive, then the re under the grph of f on [, b] is given by F (b) F (), where

More information

Preparation for A Level Wadebridge School

Preparation for A Level Wadebridge School Preprtion for A Level Mths @ Wdebridge School Bridging the gp between GCSE nd A Level Nme: CONTENTS Chpter Removing brckets pge Chpter Liner equtions Chpter Simultneous equtions 6 Chpter Fctorising 7 Chpter

More information

How to simulate Turing machines by invertible one-dimensional cellular automata

How to simulate Turing machines by invertible one-dimensional cellular automata How to simulte Turing mchines by invertible one-dimensionl cellulr utomt Jen-Christophe Dubcq Déprtement de Mthémtiques et d Informtique, École Normle Supérieure de Lyon, 46, llée d Itlie, 69364 Lyon Cedex

More information

Unit #9 : Definite Integral Properties; Fundamental Theorem of Calculus

Unit #9 : Definite Integral Properties; Fundamental Theorem of Calculus Unit #9 : Definite Integrl Properties; Fundmentl Theorem of Clculus Gols: Identify properties of definite integrls Define odd nd even functions, nd reltionship to integrl vlues Introduce the Fundmentl

More information

Chapter 9: Inferences based on Two samples: Confidence intervals and tests of hypotheses

Chapter 9: Inferences based on Two samples: Confidence intervals and tests of hypotheses Chpter 9: Inferences bsed on Two smples: Confidence intervls nd tests of hypotheses 9.1 The trget prmeter : difference between two popultion mens : difference between two popultion proportions : rtio of

More information

2D1431 Machine Learning Lab 3: Reinforcement Learning

2D1431 Machine Learning Lab 3: Reinforcement Learning 2D1431 Mchine Lerning Lb 3: Reinforcement Lerning Frnk Hoffmnn modified by Örjn Ekeberg December 7, 2004 1 Introduction In this lb you will lern bout dynmic progrmming nd reinforcement lerning. It is ssumed

More information

Chapter 3 Polynomials

Chapter 3 Polynomials Dr M DRAIEF As described in the introduction of Chpter 1, pplictions of solving liner equtions rise in number of different settings In prticulr, we will in this chpter focus on the problem of modelling

More information

Vyacheslav Telnin. Search for New Numbers.

Vyacheslav Telnin. Search for New Numbers. Vycheslv Telnin Serch for New Numbers. 1 CHAPTER I 2 I.1 Introduction. In 1984, in the first issue for tht yer of the Science nd Life mgzine, I red the rticle "Non-Stndrd Anlysis" by V. Uspensky, in which

More information

MATH 144: Business Calculus Final Review

MATH 144: Business Calculus Final Review MATH 144: Business Clculus Finl Review 1 Skills 1. Clculte severl limits. 2. Find verticl nd horizontl symptotes for given rtionl function. 3. Clculte derivtive by definition. 4. Clculte severl derivtives

More information

A Signal-Level Fusion Model for Image-Based Change Detection in DARPA's Dynamic Database System

A Signal-Level Fusion Model for Image-Based Change Detection in DARPA's Dynamic Database System SPIE Aerosense 001 Conference on Signl Processing, Sensor Fusion, nd Trget Recognition X, April 16-0, Orlndo FL. (Minor errors in published version corrected.) A Signl-Level Fusion Model for Imge-Bsed

More information

Riemann Integrals and the Fundamental Theorem of Calculus

Riemann Integrals and the Fundamental Theorem of Calculus Riemnn Integrls nd the Fundmentl Theorem of Clculus Jmes K. Peterson Deprtment of Biologicl Sciences nd Deprtment of Mthemticl Sciences Clemson University September 16, 2013 Outline Grphing Riemnn Sums

More information

Discrete Mathematics and Probability Theory Spring 2013 Anant Sahai Lecture 17

Discrete Mathematics and Probability Theory Spring 2013 Anant Sahai Lecture 17 EECS 70 Discrete Mthemtics nd Proility Theory Spring 2013 Annt Shi Lecture 17 I.I.D. Rndom Vriles Estimting the is of coin Question: We wnt to estimte the proportion p of Democrts in the US popultion,

More information

Math 426: Probability Final Exam Practice

Math 426: Probability Final Exam Practice Mth 46: Probbility Finl Exm Prctice. Computtionl problems 4. Let T k (n) denote the number of prtitions of the set {,..., n} into k nonempty subsets, where k n. Argue tht T k (n) kt k (n ) + T k (n ) by

More information

Reinforcement learning II

Reinforcement learning II CS 1675 Introduction to Mchine Lerning Lecture 26 Reinforcement lerning II Milos Huskrecht milos@cs.pitt.edu 5329 Sennott Squre Reinforcement lerning Bsics: Input x Lerner Output Reinforcement r Critic

More information

UNIFORM CONVERGENCE. Contents 1. Uniform Convergence 1 2. Properties of uniform convergence 3

UNIFORM CONVERGENCE. Contents 1. Uniform Convergence 1 2. Properties of uniform convergence 3 UNIFORM CONVERGENCE Contents 1. Uniform Convergence 1 2. Properties of uniform convergence 3 Suppose f n : Ω R or f n : Ω C is sequence of rel or complex functions, nd f n f s n in some sense. Furthermore,

More information

8 Laplace s Method and Local Limit Theorems

8 Laplace s Method and Local Limit Theorems 8 Lplce s Method nd Locl Limit Theorems 8. Fourier Anlysis in Higher DImensions Most of the theorems of Fourier nlysis tht we hve proved hve nturl generliztions to higher dimensions, nd these cn be proved

More information

For the percentage of full time students at RCC the symbols would be:

For the percentage of full time students at RCC the symbols would be: Mth 17/171 Chpter 7- ypothesis Testing with One Smple This chpter is s simple s the previous one, except it is more interesting In this chpter we will test clims concerning the sme prmeters tht we worked

More information

Section 14.3 Arc Length and Curvature

Section 14.3 Arc Length and Curvature Section 4.3 Arc Length nd Curvture Clculus on Curves in Spce In this section, we ly the foundtions for describing the movement of n object in spce.. Vector Function Bsics In Clc, formul for rc length in

More information

An approximation to the arithmetic-geometric mean. G.J.O. Jameson, Math. Gazette 98 (2014), 85 95

An approximation to the arithmetic-geometric mean. G.J.O. Jameson, Math. Gazette 98 (2014), 85 95 An pproximtion to the rithmetic-geometric men G.J.O. Jmeson, Mth. Gzette 98 (4), 85 95 Given positive numbers > b, consider the itertion given by =, b = b nd n+ = ( n + b n ), b n+ = ( n b n ) /. At ech

More information

Math 113 Fall Final Exam Review. 2. Applications of Integration Chapter 6 including sections and section 6.8

Math 113 Fall Final Exam Review. 2. Applications of Integration Chapter 6 including sections and section 6.8 Mth 3 Fll 0 The scope of the finl exm will include: Finl Exm Review. Integrls Chpter 5 including sections 5. 5.7, 5.0. Applictions of Integrtion Chpter 6 including sections 6. 6.5 nd section 6.8 3. Infinite

More information

Synoptic Meteorology I: Finite Differences September Partial Derivatives (or, Why Do We Care About Finite Differences?

Synoptic Meteorology I: Finite Differences September Partial Derivatives (or, Why Do We Care About Finite Differences? Synoptic Meteorology I: Finite Differences 16-18 September 2014 Prtil Derivtives (or, Why Do We Cre About Finite Differences?) With the exception of the idel gs lw, the equtions tht govern the evolution

More information

I1 = I2 I1 = I2 + I3 I1 + I2 = I3 + I4 I 3

I1 = I2 I1 = I2 + I3 I1 + I2 = I3 + I4 I 3 2 The Prllel Circuit Electric Circuits: Figure 2- elow show ttery nd multiple resistors rrnged in prllel. Ech resistor receives portion of the current from the ttery sed on its resistnce. The split is

More information