Fast parameter estimation in loss tomography for networks of general topology


Fast parameter estimation in loss tomography for networks of general topology. The Harvard community has made this article openly available. Citation: Deng, Ke, Yang Li, Weiping Zhu, and Jun S. Liu. "Fast Parameter Estimation in Loss Tomography for Networks of General Topology." The Annals of Applied Statistics 10 (1) (March). doi: /15-aoas883. This article was downloaded from Harvard University's DASH repository, and is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth at nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#OAP.

Submitted to the Annals of Applied Statistics

FAST PARAMETER ESTIMATION IN LOSS TOMOGRAPHY FOR NETWORKS OF GENERAL TOPOLOGY

arXiv: v1 [stat.ME] 24 Oct 2015

By Ke Deng, Yang Li, Weiping Zhu and Jun S. Liu
Tsinghua University, University of New South Wales and Harvard University

As a technique to investigate link-level loss rates of a computer network at low operational cost, loss tomography has received considerable attention in recent years. A number of parameter estimation methods have been proposed for loss tomography of networks with a tree structure as well as a general topological structure. However, these methods suffer from either high computational cost or insufficient use of information in the data. In this paper, we provide both theoretical results and practical algorithms for parameter estimation in loss tomography. By introducing a group of novel statistics and alternative parameter systems, we find that the likelihood function of the observed data from loss tomography keeps exactly the same mathematical formulation for tree and general topologies, revealing that networks with different topologies share the same mathematical nature for loss tomography. More importantly, we discover that a re-parametrization of the likelihood function belongs to the standard exponential family, which is convex and has a unique mode under regularity conditions. Based on these theoretical results, novel algorithms to find the MLE are developed. Compared to existing methods in the literature, the proposed methods enjoy great computational advantages.

1. Introduction. Network characteristics such as loss rate, delay, available bandwidth, and their distributions are critical to various network operations and important for understanding network behaviors. Although considerable attention has been given to network measurements, some characteristics of the network cannot be obtained directly from a large network for various reasons (e.g., security, commercial interest and administrative boundaries).

To overcome this difficulty, network tomography was proposed in [1], suggesting the use of end-to-end measurement and statistical inference to estimate characteristics of a large network. In an end-to-end measurement, a

This study is partially supported by National Science Foundation of USA grant DMS , National Science Foundation of China grant , Shenzhen Fund for Basic Science grant JC A and grant JC A.
Keywords and phrases: network tomography, loss tomography, general topology, likelihood equation, pattern-collapsed EM algorithm

number of sources are attached to the network of interest to send probes to receivers attached to the other side of the network, and the paths from sources to receivers cover the links of interest. Arrival orders and arrival times of the probes carry information about the network, from which many network characteristics can be inferred statistically. Characteristics that have been estimated in this manner include link-level loss rates [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], delay distributions [14, 15, 16, 17, 18, 19, 20, 21, 22, 23], origin-destination traffic [1, 16, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38], loss patterns [39], and the network topology [40]. In this paper, we focus on the problem of estimating loss rates.

Network topologies connecting sources to receivers can be divided into two classes: tree and general. A tree topology, as named, has a single source attached to the root of a multicast tree to send probes to receivers attached to the leaf nodes of the tree. A network with a general topology, however, requires a number of trees to cover all links of the network, each tree having one source that sends probes to the receivers in it. Because multiple sources are used to send probes in a network with general topology, nodes and receivers located at the intersections of multiple trees can receive probes from multiple sources. In this case, we must consider the impacts of the probes sent by all sources simultaneously in order to get a good estimate. This task is much more challenging than the tree topology case.

Numerous methods have been proposed for loss tomography in a tree topology. Cáceres et al. [2] used a Bernoulli model to describe the loss behavior of a link, and derived the MLE for the pass rate of a path connecting the source to a node, which was expressed as the solution of a polynomial equation [2, 3, 4].
To ease the concern of using numerical methods to solve a high-degree polynomial, several papers have been published that accelerate the calculation at the price of a little accuracy loss: Zhu and Geng proposed a recursively defined estimator based on a bottom-up strategy in [11, 12]; Duffield et al. proposed a closed-form estimator in [13], which has the same asymptotic variance as the MLE to first order. Considering the unavailability of multicast in some networks, Harfoush et al. [5] and Coates et al. [6] independently proposed the use of unicast-based multicast to send probes to receivers, where Coates et al. also suggested the use of the EM algorithm [41] to estimate link-level loss rates.

For networks beyond a tree, however, little research has been done, although a majority of networks in practice fall into this category. Conceptually, the topology of a general network can be arbitrarily complicated. However, no matter how complicated a general topology is, it can always be covered by a group of carefully selected trees, in each of which an end-to-end experiment can be carried out independently to study the properties of the subnetwork covered by the tree. If these trees do not overlap with each other, the problem of studying the whole general network can be decomposed into a group of smaller subproblems, one for each tree. However, due to the complexity of the network topology and practical constraints, it is more often than not that the selected trees overlap significantly, i.e., some links are shared by two or more trees (see Figure 2 for an example). The links shared by two selected trees induce dependence between the two trees, and simply ignoring this dependence leads to a loss of information. How to effectively integrate the information from multiple trees to achieve a joint analysis is a major challenge in network tomography of general topology.

The first effort on network tomography of general topology is due to Bu et al. [8], who attempted to extend the method in [2] to networks with general topology. Unfortunately, the authors failed to derive an explicit expression for the MLE like the one presented in [2] for this more general case. They then resorted to an iterative procedure (i.e., the EM algorithm) to search for the MLE. In addition, a heuristic method, called the minimum variance weighted average (MVWA) algorithm, was also proposed in [8], which deals with each tree in a general topology separately and averages the results. The MVWA algorithm is less efficient than the EM algorithm, especially when the sample size is small. Rabbat et al. [40] considered the tomography problem for networks with an unknown but general topology, mainly focusing on network topology identification, which is beyond the scope of our current paper.

In this paper, we provide a new perspective for the study of loss tomography, which is applicable to both tree and general topologies.
Our theoretical contributions are: 1) introducing a set of novel statistics, which are complete and minimal sufficient; 2) deriving two alternative parameter systems and the corresponding re-parameterized likelihood functions, which benefit us both theoretically and computationally; 3) discovering that loss tomography for a general topology shares the same mathematical formulation as that for a tree topology; and 4) showing that the likelihood function belongs to the exponential family and has a unique mode (which is the MLE) under regularity conditions. Based on these theoretical results, we propose two new algorithms (a likelihood-equation-based algorithm called LE-ξ and an EM-based algorithm called PCEM) to find the MLE. Compared to existing methods in the literature, the proposed methods are computationally much more efficient.

The rest of the paper is organized as follows. Section 2 introduces notations for tree topologies and the stochastic model for loss rate inference. Section 3 describes a set of novel statistics and two alternative parameter

systems for loss rate analysis in tree topologies. New forms of the likelihood function are established based on the novel statistics and re-parametrization. Section 4 extends the above results to general topologies. Sections 5 and 6 propose two new algorithms for finding the MLE of θ, which enjoy great computational advantages over existing methods. Section 7 evaluates the performance of the proposed methods by simulations. We conclude the article in Section 8.

Fig 1. A multicast tree.

2. Notations and Assumptions.

2.1. Notations for Tree Topologies. We use T = (V, E) to denote the multicast tree of interest, where V = {v_0, v_1, ..., v_m} is a set of nodes representing the routers and switches in the network, and E = {e_1, ..., e_m} is a set of directed links connecting the nodes. Two nodes connected by a directed link are called the parent node and the child node, respectively, and the direction of the arrow indicates that the parent forwards received probes to the child. Figure 1 shows a typical multicast tree. Note that the root node of a multicast tree has only one child, which is slightly different from an ordinary tree, and each non-root node has exactly one parent. Each link is assigned a unique ID number ranging from 1 to m, based on which each node obtains its unique ID number ranging from 0 to m correspondingly, so that link i connects node i with its parent node. Number 0 is reserved for the source node.

In contrast to [2] and [13], whose methods are node-centric, our methods focus on links instead. For a network with tree topology, the two reference systems are equivalent, as there exists a one-to-one mapping between nodes and links: every node in the network (except for the source) has a unique parent link. For a network with general topology, however, the link-centric system is more convenient, as a node in the network may have multiple parent links.

We let f_i denote the unique parent link, B_i the brother links, and C_i the

child links, respectively, of link i. To be precise, B_i contains all links that share the same parent node with link i, including link i itself. A subtree T_i = {V_i, E_i} is defined as the subnetwork composed of link i and all its descendant links. We let R and R_i denote the set of receivers (i.e., leaf nodes) in T and T_i, respectively. Sometimes, we also use R or R_i to denote the leaf links of T or T_i; the concrete meaning of R and R_i can be determined from the context. Taking the multicast tree displayed in Figure 1 as an example, we have V = {0, 1, ..., 7}, E = {1, 2, ..., 7} and R = {4, 5, 6, 7}. For the subtree T_2, however, we have V_2 = {1, 2, 4, 5}, E_2 = {2, 4, 5} and R_2 = {4, 5}. For link 2, its parent link is f_2 = {1}, its brother links are B_2 = {2, 3}, and its child links are C_2 = {4, 5}.

2.2. Stochastic Model. In a multicast experiment, n probe packages are sent from the root node 0 to all the receivers. Let X_i^(t) = 1 if the t-th probe package reached node i, and 0 otherwise. The status of probes at the receivers, {X_r^(t)}_{r∈R, 1≤t≤n}, can be directly observed from the multicast experiment; the status of probes at internal links cannot. In the following, we use X_R = {X_R^(t)}_{1≤t≤n}, where X_R^(t) = {X_r^(t)}_{r∈R}, to denote the data collected in a multicast experiment in tree T.

In this paper, we model the loss behavior of links by the Bernoulli distribution and assume spatial-temporal independence and temporal homogeneity for the network, i.e., the events {X_i^(t) = 0 | X_{f_i}^(t) = 1} are independent across i and t, and

P(X_i^(t) = 0 | X_{f_i}^(t) = 0) = 1,    P(X_i^(t) = 0 | X_{f_i}^(t) = 1) = θ_i.

We call θ = {θ_i}_{i∈E} the link-level loss rates, and the goal of loss tomography is to estimate θ from X_R.

2.3. Parameter Space. In principle, θ_i can be any value in [0, 1]. Thus, the natural parameter space of θ is Θ̄ = [0, 1]^m, an m-dimensional closed unit cube. In this paper, however, we assume that θ_i ∈ (0, 1) for every i ∈ E, and thus constrain the parameter space to Θ = (0, 1)^m, to simplify the problem.
If θ_i = 1 for some i ∈ E, then the subtree T_i is effectively disconnected from the rest of the network, since no probes can go through link i. In this case, the loss rates of the other links in T_i are not estimable due to the lack of information. On the other hand, if θ_i = 0 for some i ∈ E, the original network of interest degenerates to an equivalent network in which node i is removed and all its child nodes are connected directly to node i's parent node. By constraining the parameter space of θ to Θ, we exclude these degenerate cases from consideration.
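To make the stochastic model of Section 2.2 concrete, here is a minimal simulation sketch in Python. The tree encoding (PARENT, RECEIVERS) is an illustrative choice for the Figure 1 topology, not code from the paper: a probe that reached the parent node crosses link i with probability 1 - θ_i, independently across links and probes, and only the receiver states are kept as observable data.

```python
import random

# Hypothetical encoding of the multicast tree in Figure 1:
# PARENT[i] is the parent link of link i; node 0 is the source.
PARENT = {1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}
RECEIVERS = [4, 5, 6, 7]

def send_probe(theta, rng):
    """One probe under the Bernoulli loss model: given that a probe
    reached the parent node, it crosses link i with probability
    1 - theta[i], independently across links and probes."""
    x = {0: 1}  # the source always emits the probe
    for i in sorted(PARENT):  # parents carry smaller IDs than children
        x[i] = 1 if x[PARENT[i]] == 1 and rng.random() >= theta[i] else 0
    # only the receiver states X_r are observable end-to-end
    return {r: x[r] for r in RECEIVERS}

rng = random.Random(0)
theta = {i: 0.1 for i in PARENT}          # 10% loss rate on every link
observations = [send_probe(theta, rng) for _ in range(5000)]
# receiver 4 sits behind links 1, 2 and 4, so its pass rate is near (0.9)**3
rate = sum(obs[4] for obs in observations) / len(observations)
```

With θ_i = 0.1 on every link, the empirical pass rate of receiver 4 should be close to 0.9³ ≈ 0.729, which is the path pass rate the tree-topology estimators of [2] invert.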

3. Statistics and Likelihood Function.

3.1. The likelihood function and the MLE. Given the loss model for each link, we can write down the likelihood function and use the maximum likelihood principle to determine the unknown parameters. That is, we aim to find the parameter value that maximizes the log-likelihood function:

(3.1)    arg max_{θ∈Θ} L(θ) = arg max_{θ∈Θ} Σ_{x∈Ω} n(x) log P(x; θ),

where x stands for an observation at the receivers (i.e., a realization of X_R^(t)), Ω = {0, 1}^|R| is the space of all possible observations with |R| denoting the size of the set R, n(x) is the number of occurrences of observation x, and P(x; θ) is the probability of observing x given parameter value θ. However, the log-likelihood function (3.1) is more symbolic than practical for the following reasons: (1) evaluating (3.1) is an expensive operation, as it needs to scan through all possible x ∈ Ω; and (2) the likelihood equation derived from (3.1) cannot be solved analytically, and it is often computationally expensive to pursue a numerical solution.

3.2. Internal State and Internal View. Instead of using the log-likelihood function (3.1) directly, we consider rewriting it in a different form. Under the posited probabilistic model, the overall likelihood P(X_R | θ) is the product of the likelihoods of the individual probes, i.e., P(X_R | θ) = Π_{t=1}^n P(X_R^(t) | θ). Thus, an alternative form of the overall likelihood can be obtained by explicating the likelihood of each single probe and accumulating them. Two concepts, called the internal state and the internal view, arise from this process.

3.2.1. Internal State. For a link i ∈ E, given the observations of probe t at R_i and R_{f_i}, we can partially confirm whether the probe passed link i. Formally, for observation {X_j^(t)}_{j∈R_i}, we define Y_i^(t) = max_{j∈R_i} X_j^(t) as the internal state of link i for probe t. If Y_i^(t) = 1, probe t reached at least one receiver attached to T_i, which implies that the probe passed link i.
Furthermore, by considering Y_{f_i}^(t) and Y_i^(t) simultaneously, we have three possible scenarios for each internal node i:

- Y_{f_i}^(t) = Y_i^(t) = 1, i.e., we observed that probe t passed link i; or
- Y_{f_i}^(t) = 1 and Y_i^(t) = 0, i.e., we observed that probe t reached node f_i, but we do not know whether it reached node i or not; or
- Y_{f_i}^(t) = Y_i^(t) = 0, i.e., we do not know whether probe t reached node f_i at all;

as Y_{f_i}^(t) = 0 and Y_i^(t) = 1 can never happen by the definition of Y. The three scenarios have different impacts on the likelihood function. Formally, define

E_{t,1} = {i ∈ E : Y_i^(t) = 1},
E_{t,2} = {i ∈ E : Y_{f_i}^(t) = 1, Y_i^(t) = 0},
E_{t,3} = {i ∈ E : Y_{f_i}^(t) = Y_i^(t) = 0}.

We have

(3.2)    P(X_R^(t) | θ) = Π_{i∈E_{t,1}} (1 - θ_i) · Π_{i∈E_{t,2}} ξ_i(θ),

where ξ_i(θ) = P(X_j = 0, ∀j ∈ R_i | X_{f_i} = 1; θ) represents the probability that a probe sent out from the root node of T_i fails to reach any leaf node in T_i.

3.2.2. Internal View. Accumulating the internal states of each link in the experiment, we have

(3.3)    n_i(1) = Σ_{t=1}^n Y_i^(t),

which counts the number of probes whose passage through link i can be confirmed from the observations. Specifically, we define n_0(1) = n. Moreover, define

(3.4)    n_i(0) = n_{f_i}(1) - n_i(1),    for i ∈ E.

We call the statistics {n_i(1), n_i(0)} the internal view of link i. Based on internal views, we can write the log-likelihood of X_R in a more convenient form:

L(θ) = Σ_{i∈E} [ n_i(1) log(1 - θ_i) + n_i(0) log ξ_i(θ) ].
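The internal views can be accumulated directly from the receiver observations. Below is a small illustrative sketch in Python; the tree encoding and the observation format (one dict of receiver states per probe) are assumptions for the Figure 1 topology, not the paper's code. Y_i^(t) is the maximum receiver state over the leaves of T_i, n_i(1) sums these indicators, and n_i(0) follows from (3.4).

```python
# Illustrative encoding of the Figure 1 tree: CHILDREN[i] lists the
# child links of link i; links 4-7 end at the receivers.
CHILDREN = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [], 5: [], 6: [], 7: []}
PARENT = {1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}

def subtree_receivers(i):
    """Leaf links R_i of the subtree T_i rooted at link i."""
    if not CHILDREN[i]:
        return [i]
    return [r for c in CHILDREN[i] for r in subtree_receivers(c)]

def internal_views(observations):
    """observations: one dict {receiver: 0/1} per probe.
    Returns n1[i] = n_i(1) = sum_t Y_i^(t), the number of probes
    confirmed to pass link i, and n0[i] = n_i(0) = n_{f_i}(1) - n_i(1)."""
    n1 = {0: len(observations)}           # n_0(1) = n by convention
    for i in CHILDREN:
        leaves = subtree_receivers(i)
        # Y_i^(t) = max_{j in R_i} X_j^(t)
        n1[i] = sum(max(obs[r] for r in leaves) for obs in observations)
    n0 = {i: n1[PARENT[i]] - n1[i] for i in CHILDREN}
    return n1, n0
```

For example, with two probes of which only receiver 4 hears the first, the internal views confirm exactly one passage of links 1, 2 and 4, and leave links 3, 5, 6, 7 unresolved.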

3.3. Re-parametrization. Two alternative parameter systems can be introduced to re-parameterize the above log-likelihood function. First, based on the definition of ξ_i(θ), we have

(3.5)    ξ_i(θ) = θ_i + (1 - θ_i) Π_{j∈C_i} ξ_j(θ),    i ∈ E.

Note that if i ∈ R, we have C_i = ∅, and (3.5) degenerates to ξ_i(θ) = θ_i. Let ξ_i ≡ ξ_i(θ) for i ∈ E. Equation (3.5) defines a one-to-one mapping between the two parameter systems θ = {θ_i}_{i∈E} and ξ = {ξ_i}_{i∈E}, i.e.,

Γ : Θ → Ξ,    ξ = Γ(θ) ≡ (ξ_1(θ), ..., ξ_m(θ)),

where Θ and Ξ are the domain and image, respectively. The inverse mapping of Γ is

Γ^{-1} : Ξ → Θ,    θ = Γ^{-1}(ξ) ≡ (θ_1(ξ), ..., θ_m(ξ)),

where

(3.6)    θ_i(ξ) = ( ξ_i - Π_{j∈C_i} ξ_j ) / ( 1 - Π_{j∈C_i} ξ_j ),    i ∈ E.

Using ξ to replace θ in L(θ), we have the following alternative log-likelihood function with ξ as the parameters:

L(ξ) = Σ_{i∈E} [ n_i(1) log( (1 - ξ_i) / (1 - Π_{j∈C_i} ξ_j) ) + n_i(0) log ξ_i ].

Second, L(ξ) can be further re-organized into

L(ξ) = Σ_{i∈E} [ n_i(1) log( (1 - θ_i(ξ)) / ξ_i ) + n_{f_i}(1) log ξ_i ]
     = n log ξ_1 + Σ_{i∈E} n_i(1) ψ_i(ξ),

where ψ_i(ξ) is defined as

(3.7)    ψ_i(ξ) = log( (1 - θ_i(ξ)) / ξ_i ) for i ∈ R;    ψ_i(ξ) = log( (ξ_i - θ_i(ξ)) / ξ_i ) for i ∉ R.

Similarly, let ψ_i ≡ ψ_i(ξ) for i ∈ E. Equation (3.7) defines a one-to-one mapping between the parameter systems ξ = {ξ_i}_{i∈E} and ψ = {ψ_i}_{i∈E}, i.e.,

Λ : Ξ → Ψ,    ψ = Λ(ξ) ≡ (ψ_1(ξ), ..., ψ_m(ξ)),

where Ψ ≡ Λ(Ξ) is the image of Ξ under Λ. The inverse mapping of Λ is

Λ^{-1} : Ψ → Ξ,    ξ = Λ^{-1}(ψ) ≡ (ξ_1(ψ), ..., ξ_m(ψ)).

It can be shown that

ψ_i = log P(X_i = 1 | X_{f_i} = 1; X_j = 0, ∀j ∈ R_i)    for i ∉ R,

from which the physical meaning of ψ_i can be better understood. Using ψ to replace ξ in L(ξ), we have the following log-likelihood function with ψ as the parameters:

L(ψ) = n log ξ_1(ψ) + Σ_{i∈E} n_i(1) ψ_i.

To illustrate the relations among θ, ξ and ψ, let us consider the toy network in Figure 1 with θ_i = 0.1 for i = 1, ..., 7. It is easy to check that:

ξ_4 = ξ_5 = ξ_6 = ξ_7 = 0.1,
ξ_2 = ξ_3 = 0.1 + (1 - 0.1) × 0.1² = 0.109,
ξ_1 = 0.1 + (1 - 0.1) × 0.109² ≈ 0.1107;
ψ_4 = ψ_5 = ψ_6 = ψ_7 = log(0.9/0.1) ≈ 2.197,
ψ_2 = ψ_3 = log((0.109 - 0.1)/0.109) ≈ -2.494,
ψ_1 = log((0.1107 - 0.1)/0.1107) ≈ -2.337.

We list these concrete values in Table 1 for comparison purposes. The notations, statistics and parameter systems are summarized in Table 2 for easy reference.

Table 1
Comparing different parameter systems for the toy network in Figure 1

Link    θ_i    ξ_i      ψ_i
1       0.1    0.1107   -2.337
2       0.1    0.109    -2.494
3       0.1    0.109    -2.494
4-7     0.1    0.1      2.197

4. Likelihood Function for General Networks. In this section, we extend the concepts of internal view and alternative parameter systems to general networks containing multiple trees. Formally, we use N = {T_1, ..., T_K} to denote a general network covered by K multicast trees,

Table 2
Collection of symbols

Symbol      Meaning
T           the multicast tree of interest
V           the node set of T
E           the link set of T
R           the set of leaf nodes (receivers) or leaf links of T
T_i         the subtree with link i as the root link
V_i         the node set of subtree T_i
R_i         the set of leaf nodes (receivers) or leaf links of T_i
f_i         the parent link of link i
B_i         the brother links of link i
C_i         the child links of link i
X_i^(t)     X_i^(t) = 1 if probe t reached node i, and 0 otherwise
Y_i^(t)     max_{j∈R_i} X_j^(t)
n_i(1)      Σ_{t=1}^n Y_i^(t), the number of probes that passed link i for sure
n_i(0)      n_{f_i}(1) - n_i(1)
θ_i         P(X_i = 0 | X_{f_i} = 1), the loss rate of link i
ξ_i         P(X_j = 0, ∀j ∈ R_i | X_{f_i} = 1), the loss rate of T_i
ψ_i         log P(X_i = 1 | X_{f_i} = 1; X_j = 0, ∀j ∈ R_i)

where each multicast tree T_k covers a subnetwork with S_k as its root link. For example, Figure 2 illustrates a network with K = 2, S_1 = 0 and S_2 = 32. Let S = {S_1, ..., S_K} be the set of root links in N.

For each tree T_k, let {n_{k,i}(1), n_{k,i}(0)} be the internal view of link i ∈ E_k in T_k based on the n_k probes sent out from S_k. Specially, define n_{k,i}(1) = 0 for links i ∉ E_k. The experiment on T_k results in the following log-likelihood functions with θ, ξ and ψ as parameters, respectively:

L_k(θ) = Σ_{i∈E} [ n_{k,i}(1) log(1 - θ_i) + n_{k,i}(0) log ξ_i(θ) ],

L_k(ξ) = Σ_{i∈E} [ n_{k,i}(1) log( (1 - ξ_i) / (1 - Π_{j∈C_i} ξ_j) ) + n_{k,i}(0) log ξ_i ],

L_k(ψ) = n_k log ξ_{S_k}(ψ) + Σ_{i∈E} n_{k,i}(1) ψ_i.

Considering that a multicast experiment in a general network N is a pool of K independent experiments in the K trees of N, the log-likelihood function of the whole experiment is just the summation of the K components,

Fig 2. A 5-layer network covered by two trees.

i.e.,

L(θ) = Σ_{k=1}^K L_k(θ),    L(ξ) = Σ_{k=1}^K L_k(ξ)    and    L(ψ) = Σ_{k=1}^K L_k(ψ).

Define the internal view of link i in a general network as:

(4.1)    n_i(1) = Σ_{k=1}^K n_{k,i}(1),    n_i(0) = Σ_{k=1}^K n_{k,i}(0).

It is straightforward to see that

(4.2)    L(θ) = Σ_{i∈E} [ n_i(1) log(1 - θ_i) + n_i(0) log ξ_i(θ) ],

(4.3)    L(ξ) = Σ_{i∈E} [ n_i(1) log( (1 - ξ_i) / (1 - Π_{j∈C_i} ξ_j) ) + n_i(0) log ξ_i ],

(4.4)    L(ψ) = Σ_{k=1}^K n_k log ξ_{S_k}(ψ) + Σ_{i∈E} n_i(1) ψ_i.

Note that in this general case, we have: for S_k ∈ S, n_{S_k}(1) = n_k; and for i ∉ S,

n_i(0) + n_i(1) = Σ_{j∈F_i} n_j(1).

Here, F_i stands for the unique or multiple parent links of link i in the general network N.

5. Likelihood Equation. The bijections among the three parameter systems mean that we can switch among them freely without changing the result of parameter estimation:

Proposition 1. The results of likelihood inference based on the three parameter systems are equivalent, i.e.,

Γ( arg max_{θ∈Θ} L(θ) ) = arg max_{ξ∈Ξ} L(ξ) = Λ^{-1}( arg max_{ψ∈Ψ} L(ψ) );

and the likelihood equations under the different parameters share the same solution, i.e.,

∂L(θ)/∂θ |_{θ=θ*} = 0  ⟺  ∂L(ξ)/∂ξ |_{ξ=ξ*} = 0  ⟺  ∂L(ψ)/∂ψ |_{ψ=ψ*} = 0,

if θ* ∈ Θ, or ξ* ∈ Ξ, or ψ* ∈ Ψ, where Γ(θ*) = ξ* = Λ^{-1}(ψ*).

This flexibility provides great theoretical and computational advantages. On the theoretical side, since {n_k}_{k=1}^K are known constants, L(ψ) falls into the standard exponential family with ψ as the natural parameters. Thus, based on the properties of the exponential family [42], we immediately have the following results:

Proposition 2. The following results hold for loss tomography:
1. The statistics {n_i(1)}_{i∈E} are complete and minimal sufficient;
2. The likelihood equation ∂L(ψ)/∂ψ = 0 has at most one solution ψ* ∈ Ψ;
3. If ψ* exists, then ψ* (or θ* = (Λ ∘ Γ)^{-1}(ψ*)) is the MLE.

On the computational side, the parameter system ξ plays a central role. Different from the likelihood equation with θ as parameters, which is intractable, the likelihood equation with ξ as parameters enjoys unique computational advantages. Let

r_i = n_i(1) / ( n_i(1) + n_i(0) ).

It can be shown that the likelihood equation with ξ as parameters is:

(5.1)    ξ_i = (1 - r_i) + r_i · ( Π_{j∈B_i} ξ_j ) · I(i ∉ S),

for every link i ∈ E. For a link i ∈ S, (5.1) degenerates to ξ_i = 1 - r_i. For a link i ∉ S, define π_i = Π_{j∈B_i} ξ_j. We note that solving for ξ from (5.1) is equivalent to solving the following equation for π_i:

(5.2)    π_i = Π_{j∈B_i} [ (1 - r_j) + r_j π_i ],

which can be solved analytically when |B_i| = 2, and by numerical approaches [43] when |B_i| > 2. From (5.1) and (5.2), it is transparent that only the local statistics {r_j}_{j∈B_i} are involved in estimating ξ_i. This observation immediately leads to the following fact: if a processor keeps the values of {r_j}_{j∈B_i} in its memory, {ξ_j}_{j∈B_i} can be effectively estimated by that processor independently of the estimation of the other parameters. Based on this unique property of the likelihood equation with ξ as parameters, we propose a parallel procedure called the LE-ξ algorithm to estimate θ (see Algorithm 1).

Algorithm 1. The LE-ξ Algorithm
Hardware requirement: {P_i}: a collection of processors indexed by node ID i ∈ V;
Input: {r_j}_{j∈E} with distributed storage, where P_i keeps {r_j}_{j∈C_i};
Output: θ̂ = {θ̂_j}_{j∈E}.
Procedure:
  Operate in parallel over the IDs of all non-leaf nodes {i ∈ V : i ∉ R}:
    If i ∈ S, get ξ̂_j = 1 - r_j for the unique child link j of node i with P_i;
    If i ∉ S,
      get x̂ with P_i from {r_j}_{j∈C_i} by solving the following equation for x:
        x = Π_{j∈C_i} [ (1 - r_j) + r_j x ],
      and get ξ̂_j = (1 - r_j) + r_j x̂ for every j ∈ C_i with P_i;
  End parallel operation
Return θ̂ = Γ^{-1}(ξ̂).

The validity of the LE-ξ algorithm is guaranteed by the following theorem:

Theorem 1. When n_k → ∞ for all 1 ≤ k ≤ K, with probability one, the LE-ξ algorithm has a unique solution in Θ = (0, 1)^m that is the MLE.
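To make the per-node computation of Algorithm 1 concrete, the following Python sketch solves the scalar equation for x over a node's child links and recovers θ via the inverse mapping (3.6). The function names are illustrative, and bisection is used here as just one simple choice among the numerical approaches of [43]: x = 1 is always a trivial root of the equation, and under the regularity conditions the unique interior root in (0, 1) is the one of interest.

```python
def solve_pi(r_children, tol=1e-12):
    """Unique root in (0, 1) of x = prod_j [(1 - r_j) + r_j * x],
    assuming 0 < r_j < 1 and sum_j r_j > 1 (regularity conditions).
    f(0) > 0 and f is negative just below the trivial root x = 1,
    so bisection on [0, 1) brackets the interior root."""
    def f(x):
        p = 1.0
        for r in r_children:
            p *= (1.0 - r) + r * x
        return p - x
    lo, hi = 0.0, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def theta_from_xi(xi_i, xi_children):
    """Inverse mapping (3.6):
    theta_i = (xi_i - prod_j xi_j) / (1 - prod_j xi_j);
    for a leaf link, xi_i = theta_i."""
    if not xi_children:
        return xi_i
    p = 1.0
    for x in xi_children:
        p *= x
    return (xi_i - p) / (1.0 - p)
```

For example, two child links with r_j = 0.8 each satisfy Σ_j r_j > 1, and x = (0.2 + 0.8x)² has interior root x = 0.0625, giving ξ̂_j = 0.2 + 0.8 × 0.0625 = 0.25; and for the toy network of Figure 1, theta_from_xi(0.109, [0.1, 0.1]) recovers θ_2 = 0.1.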

Proof of Theorem 1. First, we show that under the following regularity conditions:

(5.3)    n_i(1) > 0,  n_i(0) > 0,  and  Σ_{j∈F_i} n_j(1) < Σ_{j∈B_i} n_j(1)    for i ∈ E,

the likelihood equation with ξ as parameters has a unique solution in (0, 1)^m. Note that when (5.3) holds, we have 0 < r_i < 1 and Σ_{j∈B_i} r_j > 1 for i ∈ E. Lemma 1 in [2] has shown that if 0 < c_j < 1 and Σ_j c_j > 1, the equation for x

x = Π_j [ (1 - c_j) + c_j x ]

has a unique solution in (0, 1). It follows that under the regularity conditions in (5.3), equation (5.2) has a unique solution π̂_i ∈ (0, 1) for every non-root link i. Considering that for every link i ∈ E,

ξ_i = (1 - r_i) + r_i π_i I(i ∉ S),

it is straightforward to see that the likelihood equation with ξ as parameters has a unique solution ξ̂ ∈ (0, 1)^m. Moreover, if

(5.4)    ξ̂ ∈ Ξ,

we have ψ̂ = Λ(ξ̂) ∈ Ψ. Thus, based on Propositions 1 and 2, we know that if both (5.3) and (5.4) are satisfied, ψ̂, which is the solution of the likelihood equation with ψ as parameters, is the MLE. Considering the bijections among θ, ξ and ψ, this also means that θ̂ = Γ^{-1}(ξ̂) is the MLE. Noting that the probability that (5.3) or (5.4) fails goes to zero as n_k → ∞ for k = 1, ..., K, we complete the proof.

The large-sample properties of the MLE have been well studied in [2] for the tree topology, where the asymptotic normality, asymptotic variance and confidence intervals are established. Considering that the likelihood function keeps exactly the same form for both tree and general topologies, it is natural to extend these theoretical results for the MLE established in [2] to a network of general topology.

With a finite sample, there is a chance that (5.3) or (5.4) fails. In this case, the MLE of θ falls out of Θ, and may be missed by the LE-ξ algorithm. For example:

1. If n_i(1) = 0 and n_i(0) > 0, we have ξ̂_i = 1;
2. If n_i(1) > 0 and n_i(0) = 0, we have ξ̂_i = 0;
3. If Σ_{j∈F_i} n_j(1) = Σ_{j∈B_i} n_j(1) > 0, we have θ̂_i = 0;
4. If ξ̂_i ≤ Π_{j∈C_i} ξ̂_j (i.e., ξ̂ ∉ Ξ), we have θ̂_i ≤ 0.

Moreover, if n_i(1) = n_i(0) = 0, the parameters in the subtree T_i (i.e., {θ_j}_{j∈T_i}) are not estimable due to lack of information. In all these cases, we cannot guarantee that the estimate θ̂ from the LE-ξ algorithm is the global maximum in Θ̄ = [0, 1]^m. In practice, we can increase the sample size by sending additional probes to avoid this dilemma. In case it is not realistic to send additional probes, we can simply replace θ̂ by the point in Θ̄ that is closest to θ̂, or search the boundary of Θ̄ to maximize L(θ).

6. Impacts on the EM Algorithm. The above theoretical results have two major impacts on the EM algorithm widely used in loss tomography. First, as we have shown in Theorem 1, as long as the regularity conditions (5.3) and (5.4) hold, the likelihood function L(θ) has a unique mode in Θ. In this case, the EM algorithm always converges to the MLE for any initial value θ^(0) ∈ Θ. Considering that the chance of violating (5.3) or (5.4) goes to zero as the sample size increases, we immediately have the following corollary for the EM algorithm:

Corollary 1. When n_k → ∞ for all 1 ≤ k ≤ K, the EM algorithm converges to the MLE with probability one for any initial value θ^(0) ∈ Θ.

Second, the formulation of the new statistics {n_i(1)}_{i∈E} naturally leads to a pattern-collapsed implementation of the EM algorithm, which is computationally much more efficient than the widely used naive implementation, in which the samples are processed one by one separately. In the naive implementation of the EM algorithm, one enumerates all possible configurations of the internal links compatible with each sample, where a sample carries the information on whether the corresponding probe reached each of the leaf nodes or not.
The complexity of the naive implementation in each E-step can be O(n·2^m) in the worst case. With the pattern-collapsed implementation, however, the complexity of an E-step can be dramatically reduced to O(m). The first pattern-collapsed EM algorithm was proposed by Deng et al. [23] in the context of delay tomography. The basic idea is to reorganize the observed data at the receivers in delay tomography into delay patterns, and to make use of a delay pattern database to greatly reduce the computational cost of the E-step. We show below that the spirit of this method extends naturally to loss tomography.

Define the event {Y_i = 0 | Y_{f_i} = 1}, or equivalently {X_j = 0, ∀j ∈ R_i | X_{f_i} = 1}, as the loss event in subtree T_i, denoted as L_i. Let L(X_V; θ) be the log-likelihood of the complete data, in which the status of the probes at the internal nodes is also observed, and let θ^(r) be the estimate obtained at the r-th iteration. The objective function to be maximized in the (r+1)-th iteration of the EM algorithm is:

(6.1)    Q(θ, θ^(r)) = E_{θ^(r)}( L(X_V; θ) | X_R )
                    = Σ_{t=1}^n E_{θ^(r)}( L(X^(t); θ) | X_R^(t) )
                    = Σ_{i∈E} [ n_i(1) log(1 - θ_i) + n_i(0) Q_{L_i}(θ_{T_i}, θ^(r)) ],

where Q_{L_i}(θ_{T_i}, θ^(r)), called the localized Q-function of the loss event L_i, is defined as:

Q_{L_i}(θ_{T_i}, θ^(r)) = E_{θ^(r)} [ log P(X_{T_i}; θ_{T_i}) | L_i ].

Similar to the results for delay patterns shown in Proposition 1 of [23], Q_{L_i}(θ_{T_i}, θ^(r)) has the following decomposition:

(6.2)    Q_{L_i}(θ_{T_i}, θ^(r)) = ( 1 - ψ_i(θ^(r)) ) log(θ_i) + ψ_i(θ^(r)) [ log(1 - θ_i) + Σ_{j∈C_i} Q_{L_j}(θ_{T_j}, θ^(r)) ].

(Here, with a slight abuse of notation, ψ_i(θ^(r)) stands for the conditional pass probability P_{θ^(r)}(X_i = 1 | L_i), i.e., exp(ψ_i) evaluated at θ = θ^(r).) Integrating (6.1) and (6.2), we have

Q(θ, θ^(r)) = Σ_{i∈E} [ ω_i(1) log(1 - θ_i) + ω_i(0) log(θ_i) ],

where {ω_i(1), ω_i(0)}_{i∈E} are defined recursively as follows: for i ∈ {S_1, ..., S_K}, i.e., i being one of the root links,

ω_i(1) = n_i(1) + n_i(0) ψ_i(θ^(r)),
ω_i(0) = n_i(0) ( 1 - ψ_i(θ^(r)) );

for i ∉ {S_1, ..., S_K},

ω_i(1) = n_i(1) + Σ_{j∈F_i} ω_j(0) ψ_j(θ^(r)),
ω_i(0) = Σ_{j∈F_i} ω_j(0) ( 1 - ψ_j(θ^(r)) ).

And the M-step is:

θ_i^(r+1) = ω_i(0) / ( ω_i(0) + ω_i(1) ),    i ∈ E.

In this paper, we refer to the EM algorithm with the pattern-collapsed implementation as PCEM, to distinguish it from the EM algorithm with the naive implementation, which is referred to as NEM. We have shown in [23], by simulation and theoretical analysis, that PCEM is computationally much more efficient than the naive EM in the context of delay tomography. In the context of loss tomography, it is easy to see from the above analysis that PCEM enjoys a complexity of O(m) for each E-step, which is much more efficient than the naive EM, whose complexity can be O(n·2^m) per E-step in the worst case.

7. Simulation Studies. We verify the following facts via simulations:

1. PCEM obtains exactly the same results as the naive EM for both tree and general topologies, but is much faster;
2. LE-ξ obtains exactly the same results as the EM algorithm when the regularity conditions (5.3) and (5.4) hold, but is much faster when the speedup from parallel computation is considered.

We carried out simulations on a 5-layer network covered by two trees, as shown in Figure 2. The network has 49 nodes, labeled from Node 0 to Node 48, and two sources, Node 0 and Node 32. We conduct two sets of simulations: one based on the ideal model, and the other using network simulator 2 (ns-2, [44]).

7.1. Simulation study with the ideal model. In the first set of simulations, the data is generated from the ideal model, where each link has a pre-defined constant loss rate. To test the performance of the methods on different magnitudes of loss rates, we draw the link-level loss rates θ = {θ_i} randomly from Beta(1, 99), Beta(5, 995), Beta(2, 998) or Beta(1, 999). For example, the mean of Beta(1, 999) is 0.001, so if we draw loss rates from Beta(1, 999), then on average we expect a 0.1% link-level loss rate in the network.
Given the loss rate vector θ for each network, we generated 100 independent datasets with sample sizes 50, 100, 200 and 500, respectively. Thus, a total of 4 × 4 × 100 = 1600 datasets were simulated. The sample size is evenly distributed between the two trees; e.g., the source of each tree sends out 100 probes when the sample size is n = 200.

Table 3
Performance of different methods on simulated data from the ideal model on the 5-layer network in Figure 2. For each loss-rate distribution (Beta(1, 99), Beta(5, 995), Beta(2, 998), Beta(1, 999)) and each sample size n ∈ {50, 100, 200, 500}, the table reports the average running time (ms) and MSE (×10⁻⁵) of NEM, PCEM, LE-ξ and MVWA.

To each of the 1600 simulated data sets, we applied NEM, PCEM, LE-ξ and MVWA, respectively. The stopping rule of the EM algorithms is max_i |θ_i^(t+1) - θ_i^(t)| ≤ 10^{-6}, and the initial values are θ_i^(0) = 0.03 for all i ∈ E. We implement the MVWA algorithm by obtaining the MLE in each tree with the LE-ξ algorithm. Table 3 summarizes the average running times and MSEs of these methods under the different settings. From the table, we can see that:

1. The performance of all four methods in terms of MSE improves as the sample size n increases.
2. NEM and PCEM always produce exactly the same results.
3. The results from LE-ξ differ slightly from those of the EM algorithms when the sample size n is as small as 100 or 200, due to violations of the regularity conditions (5.3) and (5.4). When n = 500, the results of LE-ξ and the EM algorithms become identical.
4. LE-ξ, PCEM and the MVWA algorithm implemented with LE-ξ are dramatically faster than NEM. For example, in the simulation configuration with Beta(1, 999) and sample size n = 500, the running time of NEM is almost 2,000 times that of LE-ξ and PCEM.
5. The MSE of MVWA is always larger than the MSEs of the other three methods, which capture the MLE. This is consistent with the results in [8].

7.2. Simulation study by network simulator 2. We also conducted a simulation study using ns-2. We use the network topology shown in Figure 2, where the two sources located at Node 0 and Node 32 multicast probes to the attached receivers. Besides the network traffic created by multicast

probing, a number of TCP sources with various window sizes and a number of UDP sources with different burst rates and periods are added at several nodes to produce cross-traffic. The TCP and UDP cross-traffic takes about 80% of the total network traffic. We generated 100 independent datasets by running the ns-2 simulation for 1, 2, 5 or 10 simulation seconds; longer experiments generate more multicast probe samples. We record all pass and loss events on each link during the simulation, and the actual loss rates are calculated and taken as the true loss rates when computing MSEs.

Table 4
Performance of different methods on simulated data from network simulator 2. (For each simulation time, with resulting sample sizes n = 528, 2038, 5648 and 13355, the table reports the running time in ms and the MSE in units of 1e-5 for NEM, PCEM, LE-ξ and MVWA; the numeric entries are not recoverable here.)

Table 4 shows the average number of packets (sample size), the running time of the methods, and the MSEs for the different ns-2 simulation times. In Table 4 we observe results similar to those in Table 3. The performance of all four methods in terms of MSE improves as the sample size n increases. The results from LE-ξ differ slightly from those of the EM algorithms when the sample size n is small. More importantly, LE-ξ and PCEM are dramatically faster than NEM.

8. Conclusion. We proposed a set of sufficient statistics called the internal view and two alternative parameter systems, ξ and ψ, for loss tomography. We found that the likelihood function keeps exactly the same formulation for both tree and general topologies under all three types of parametrization (θ, ξ and ψ), and that we can switch among the three parameter systems freely without changing the result of parameter estimation.
We also discovered that the parametrization of the likelihood function based on ψ falls into the standard exponential family, which has a unique mode in the parameter space Ψ under regularity conditions, and that the parametrization based on ξ leads to an efficient algorithm, called LE-ξ, for calculating the MLE, which can be carried out in a parallel fashion. These results indicate that loss tomography for general topologies enjoys the same mathematical nature

as that for tree topologies, and can be resolved effectively. The proposed statistics and alternative parameter systems also lead to a more efficient pattern-collapsed implementation of the EM algorithm for finding the MLE, and to a theoretical guarantee that the EM algorithm converges to the MLE with probability one when the sample size is large enough. Simulation studies confirmed our theoretical analysis as well as the superiority of the proposed methods over existing methods.

REFERENCES

[1] Y. Vardi. Network tomography: estimating source-destination traffic intensities from link data. Journal of the American Statistical Association, 91(433), 1996.
[2] R. Cáceres, N.G. Duffield, J. Horowitz, and D. Towsley. Multicast-based inference of network-internal loss characteristics. IEEE Transactions on Information Theory, 45(7), 1999.
[3] R. Cáceres, N.G. Duffield, S.B. Moon, and D. Towsley. Inference of internal loss rates in the MBone. In IEEE/ISOC Global Internet '99, Vol. 3, 1999.
[4] R. Cáceres, N.G. Duffield, S.B. Moon, and D. Towsley. Inferring link-level performance from end-to-end multicast measurements. Technical report, University of Massachusetts.
[5] K. Harfoush, A. Bestavros, and J. Byers. Robust identification of shared losses using end-to-end unicast probes. Technical Report BUCS, Boston University.
[6] M. Coates and R. Nowak. Unicast network tomography using EM algorithms. Technical Report TR-0004, Rice University, September.
[7] M. Coates and R. Nowak. Network loss inference using unicast end-to-end measurement. In ITC Specialist Seminar on IP Traffic Measurement, Modeling and Management.
[8] T. Bu, N. Duffield, F. Lo Presti, and D. Towsley. Network tomography on general topologies. In Proc. ACM SIGMETRICS, vol. 30, no. 1, 21-30, 2002.
[9] F. Lo Presti, D. Towsley, N.G. Duffield, and J. Horowitz. Multicast topology inference from measured end-to-end loss. IEEE Transactions on Information Theory, 48(1): 26-45.
[10] N. Duffield, J. Horowitz, D. Towsley, W. Wei, and T. Friedman. Multicast-based loss inference with missing data.
IEEE Journal on Selected Areas in Communications, 20(4).
[11] W. Zhu and Z. Geng. A fast method to estimate loss rates. In Information Networking: Networking Technologies for Broadband and Mobile Networks.
[12] W. Zhu and Z. Geng. A bottom-up inference of loss rate. Computer Communications, 28.
[13] N. Duffield, J. Horowitz, F. Lo Presti, and D. Towsley. Explicit loss inference in multicast tomography. IEEE Transactions on Information Theory, 52(8).
[14] N.G. Duffield, J. Horowitz, F. Lo Presti, and D. Towsley. Network delay tomography from end-to-end unicast measurements. Lecture Notes in Computer Science, 2170.
[15] F. Lo Presti, N.G. Duffield, J. Horowitz, and D. Towsley. Multicast-based inference of network-internal delay distribution. IEEE/ACM Transactions on Networking, 10(6), 2002.

[16] G. Liang and B. Yu. Maximum pseudo likelihood estimation in network tomography. IEEE Transactions on Signal Processing, 51(8).
[17] Y. Tsang, M. Coates, and R. Nowak. Network delay tomography. IEEE Transactions on Signal Processing, 51(8).
[18] M.F. Shih and A.O. Hero III. Unicast-based inference of network link delay distributions with finite mixture models. IEEE Transactions on Signal Processing, 51(8).
[19] E. Lawrence, G. Michailidis, and V. Nair. Network delay tomography using flexicast experiments. Journal of the Royal Statistical Society, Series B, 68(5).
[20] V. Arya, N.G. Duffield, and D. Veitch. Temporal delay tomography. In Proc. 27th IEEE Conference on Computer Communications (INFOCOM), 2008.
[21] I.H. Dinwoodie and E.A. Vance. Moment estimation in delay tomography with spatial dependence. Performance Evaluation, 64(7-8).
[22] A. Chen, J. Cao, and T. Bu. Network tomography: identifiability and Fourier domain estimation. IEEE Transactions on Signal Processing, 58(12).
[23] K. Deng, Y. Li, W. Zhu, Z. Geng, and J.S. Liu. On delay tomography: fast algorithms and spatially dependent models. IEEE Transactions on Signal Processing, 60(11).
[24] M.G.H. Bell. The estimation of origin-destination matrices by constrained generalized least squares. Transportation Research, Series B, 25B(1): 13-22.
[25] D.D. Ortúzar and L.G. Willumsen. Modelling Transport. John Wiley and Sons.
[26] E. Cascetta and S. Nguyen. A unified framework for estimating or updating origin/destination matrices from traffic counts. Transportation Research Part B: Methodological, 22(6).
[27] H. Yang, T. Sasaki, Y. Iida, and Y. Asakura. Estimation of origin-destination matrices from link traffic counts on congested networks. Transportation Research Part B: Methodological, 26(6).
[28] H.P. Lo, N. Zhang, and W.H.K. Lam. Estimation of an origin-destination matrix with random link choice proportions: a statistical approach. Transportation Research Part B: Methodological, 30(4).
[29] L. Bianco, G. Confessore, and P. Reverberi.
A network based model for traffic sensor location with implications on O/D matrix estimates. Transportation Science, 35(1): 50-60.
[30] C. Tebaldi and M. West. Bayesian inference on network traffic using link count data. Journal of the American Statistical Association, 93(442), 1998.
[31] J. Cao, D. Davis, S. Vander Wiel, B. Yu, and Z. Zhu. A scalable method for estimating network traffic matrices from link counts. Technical report, Bell Labs.
[32] J. Cao, W.S. Cleveland, D. Lin, and D.X. Sun. The effect of statistical multiplexing on the long range dependence of internet packet traffic. Technical report, Bell Labs.
[33] A. Medina, N. Taft, K. Salamatian, S. Bhattacharyya, and C. Diot. Traffic matrix estimation: existing techniques and new directions. SIGCOMM Computer Communication Review, 32(4).
[34] Y. Zhang, M. Roughan, C. Lund, and D. Donoho. An information-theoretic approach to traffic matrix estimation. In Proceedings of SIGCOMM.
[35] R. Castro, M. Coates, G. Liang, R. Nowak, and B. Yu. Network tomography: recent developments. Statistical Science, 19(3), 2004.
[36] G. Liang, N. Taft, and B. Yu. A fast lightweight approach to origin-destination IP traffic estimation using partial measurements. IEEE Transactions on Information Theory, 52(6), 2006.

[37] J. Fang, Y. Vardi, and C. Zhang. An iterative tomogravity algorithm for the estimation of network traffic. In R. Liu, W. Strawderman, and C. Zhang (Eds.), Complex Datasets and Inverse Problems: Tomography, Networks and Beyond, Volume 54 of Lecture Notes-Monograph Series, IMS.
[38] E.M. Airoldi and A.W. Blocker. Estimating latent processes on a network from indirect measurements. Journal of the American Statistical Association, 108(501), 2013.
[39] V. Arya, N.G. Duffield, and D. Veitch. Multicast inference of temporal loss characteristics. Performance Evaluation, 64(9).
[40] M. Rabbat, R. Nowak, and M. Coates. Multiple source, multiple destination network tomography. In Proc. IEEE INFOCOM, 2004.
[41] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1): 1-38, 1977.
[42] E.L. Lehmann and G. Casella. Theory of Point Estimation. Springer, 2nd edition, 1998.
[43] M.J. Todd. The Computation of Fixed Points and Applications. Springer-Verlag, New York.
[44] The network simulator 2 (ns-2).

Address of the First author
Yau Mathematical Sciences Center & Center for Statistical Science, Tsinghua University, Beijing, China
E-mail: kdeng@math.tsinghua.edu.cn

Address of the Second author
Department of Statistics, Harvard University, Cambridge, MA 02138, USA
E-mail: yl01@fas.harvard.edu

Address of the Third author
Department of ITEE, ADFA @ University of New South Wales, Canberra ACT 2600, Australia
E-mail: w.zhu@adfa.edu.au

Address of the Fourth author
Department of Statistics, Harvard University, Cambridge, MA 02138, USA
E-mail: jliu@stat.harvard.edu


More information

Lecture 4: November 17, Part 1 Single Buffer Management

Lecture 4: November 17, Part 1 Single Buffer Management Lecturer: Ad Rosén Algorthms for the anagement of Networs Fall 2003-2004 Lecture 4: November 7, 2003 Scrbe: Guy Grebla Part Sngle Buffer anagement In the prevous lecture we taled about the Combned Input

More information

The Two-scale Finite Element Errors Analysis for One Class of Thermoelastic Problem in Periodic Composites

The Two-scale Finite Element Errors Analysis for One Class of Thermoelastic Problem in Periodic Composites 7 Asa-Pacfc Engneerng Technology Conference (APETC 7) ISBN: 978--6595-443- The Two-scale Fnte Element Errors Analyss for One Class of Thermoelastc Problem n Perodc Compostes Xaoun Deng Mngxang Deng ABSTRACT

More information

Parametric fractional imputation for missing data analysis

Parametric fractional imputation for missing data analysis Secton on Survey Research Methods JSM 2008 Parametrc fractonal mputaton for mssng data analyss Jae Kwang Km Wayne Fuller Abstract Under a parametrc model for mssng data, the EM algorthm s a popular tool

More information

Bayesian predictive Configural Frequency Analysis

Bayesian predictive Configural Frequency Analysis Psychologcal Test and Assessment Modelng, Volume 54, 2012 (3), 285-292 Bayesan predctve Confgural Frequency Analyss Eduardo Gutérrez-Peña 1 Abstract Confgural Frequency Analyss s a method for cell-wse

More information

Exercises. 18 Algorithms

Exercises. 18 Algorithms 18 Algorthms Exercses 0.1. In each of the followng stuatons, ndcate whether f = O(g), or f = Ω(g), or both (n whch case f = Θ(g)). f(n) g(n) (a) n 100 n 200 (b) n 1/2 n 2/3 (c) 100n + log n n + (log n)

More information

Simulated Power of the Discrete Cramér-von Mises Goodness-of-Fit Tests

Simulated Power of the Discrete Cramér-von Mises Goodness-of-Fit Tests Smulated of the Cramér-von Mses Goodness-of-Ft Tests Steele, M., Chaselng, J. and 3 Hurst, C. School of Mathematcal and Physcal Scences, James Cook Unversty, Australan School of Envronmental Studes, Grffth

More information

Multiple Sound Source Location in 3D Space with a Synchronized Neural System

Multiple Sound Source Location in 3D Space with a Synchronized Neural System Multple Sound Source Locaton n D Space wth a Synchronzed Neural System Yum Takzawa and Atsush Fukasawa Insttute of Statstcal Mathematcs Research Organzaton of Informaton and Systems 0- Mdor-cho, Tachkawa,

More information

On the correction of the h-index for career length

On the correction of the h-index for career length 1 On the correcton of the h-ndex for career length by L. Egghe Unverstet Hasselt (UHasselt), Campus Depenbeek, Agoralaan, B-3590 Depenbeek, Belgum 1 and Unverstet Antwerpen (UA), IBW, Stadscampus, Venusstraat

More information

Uncertainty as the Overlap of Alternate Conditional Distributions

Uncertainty as the Overlap of Alternate Conditional Distributions Uncertanty as the Overlap of Alternate Condtonal Dstrbutons Olena Babak and Clayton V. Deutsch Centre for Computatonal Geostatstcs Department of Cvl & Envronmental Engneerng Unversty of Alberta An mportant

More information

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS

A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS HCMC Unversty of Pedagogy Thong Nguyen Huu et al. A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS Thong Nguyen Huu and Hao Tran Van Department of mathematcs-nformaton,

More information

arxiv:cs.cv/ Jun 2000

arxiv:cs.cv/ Jun 2000 Correlaton over Decomposed Sgnals: A Non-Lnear Approach to Fast and Effectve Sequences Comparson Lucano da Fontoura Costa arxv:cs.cv/0006040 28 Jun 2000 Cybernetc Vson Research Group IFSC Unversty of São

More information

Lecture 3. Ax x i a i. i i

Lecture 3. Ax x i a i. i i 18.409 The Behavor of Algorthms n Practce 2/14/2 Lecturer: Dan Spelman Lecture 3 Scrbe: Arvnd Sankar 1 Largest sngular value In order to bound the condton number, we need an upper bound on the largest

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

= z 20 z n. (k 20) + 4 z k = 4

= z 20 z n. (k 20) + 4 z k = 4 Problem Set #7 solutons 7.2.. (a Fnd the coeffcent of z k n (z + z 5 + z 6 + z 7 + 5, k 20. We use the known seres expanson ( n+l ( z l l z n below: (z + z 5 + z 6 + z 7 + 5 (z 5 ( + z + z 2 + z + 5 5

More information

ECONOMICS 351*-A Mid-Term Exam -- Fall Term 2000 Page 1 of 13 pages. QUEEN'S UNIVERSITY AT KINGSTON Department of Economics

ECONOMICS 351*-A Mid-Term Exam -- Fall Term 2000 Page 1 of 13 pages. QUEEN'S UNIVERSITY AT KINGSTON Department of Economics ECOOMICS 35*-A Md-Term Exam -- Fall Term 000 Page of 3 pages QUEE'S UIVERSITY AT KIGSTO Department of Economcs ECOOMICS 35* - Secton A Introductory Econometrcs Fall Term 000 MID-TERM EAM ASWERS MG Abbott

More information

Chapter 8 Indicator Variables

Chapter 8 Indicator Variables Chapter 8 Indcator Varables In general, e explanatory varables n any regresson analyss are assumed to be quanttatve n nature. For example, e varables lke temperature, dstance, age etc. are quanttatve n

More information

NUMERICAL DIFFERENTIATION

NUMERICAL DIFFERENTIATION NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the

More information

arxiv: v2 [stat.me] 26 Jun 2012

arxiv: v2 [stat.me] 26 Jun 2012 The Two-Way Lkelhood Rato (G Test and Comparson to Two-Way χ Test Jesse Hoey June 7, 01 arxv:106.4881v [stat.me] 6 Jun 01 1 One-Way Lkelhood Rato or χ test Suppose we have a set of data x and two hypotheses

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

Statistical inference for generalized Pareto distribution based on progressive Type-II censored data with random removals

Statistical inference for generalized Pareto distribution based on progressive Type-II censored data with random removals Internatonal Journal of Scentfc World, 2 1) 2014) 1-9 c Scence Publshng Corporaton www.scencepubco.com/ndex.php/ijsw do: 10.14419/jsw.v21.1780 Research Paper Statstcal nference for generalzed Pareto dstrbuton

More information

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction ECONOMICS 5* -- NOTE (Summary) ECON 5* -- NOTE The Multple Classcal Lnear Regresson Model (CLRM): Specfcaton and Assumptons. Introducton CLRM stands for the Classcal Lnear Regresson Model. The CLRM s also

More information

A Hybrid Variational Iteration Method for Blasius Equation

A Hybrid Variational Iteration Method for Blasius Equation Avalable at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 1932-9466 Vol. 10, Issue 1 (June 2015), pp. 223-229 Applcatons and Appled Mathematcs: An Internatonal Journal (AAM) A Hybrd Varatonal Iteraton Method

More information

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland

More information

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem

Speeding up Computation of Scalar Multiplication in Elliptic Curve Cryptosystem H.K. Pathak et. al. / (IJCSE) Internatonal Journal on Computer Scence and Engneerng Speedng up Computaton of Scalar Multplcaton n Ellptc Curve Cryptosystem H. K. Pathak Manju Sangh S.o.S n Computer scence

More information

Conjugacy and the Exponential Family

Conjugacy and the Exponential Family CS281B/Stat241B: Advanced Topcs n Learnng & Decson Makng Conjugacy and the Exponental Famly Lecturer: Mchael I. Jordan Scrbes: Bran Mlch 1 Conjugacy In the prevous lecture, we saw conjugate prors for the

More information

One-sided finite-difference approximations suitable for use with Richardson extrapolation

One-sided finite-difference approximations suitable for use with Richardson extrapolation Journal of Computatonal Physcs 219 (2006) 13 20 Short note One-sded fnte-dfference approxmatons sutable for use wth Rchardson extrapolaton Kumar Rahul, S.N. Bhattacharyya * Department of Mechancal Engneerng,

More information

UNR Joint Economics Working Paper Series Working Paper No Further Analysis of the Zipf Law: Does the Rank-Size Rule Really Exist?

UNR Joint Economics Working Paper Series Working Paper No Further Analysis of the Zipf Law: Does the Rank-Size Rule Really Exist? UNR Jont Economcs Workng Paper Seres Workng Paper No. 08-005 Further Analyss of the Zpf Law: Does the Rank-Sze Rule Really Exst? Fungsa Nota and Shunfeng Song Department of Economcs /030 Unversty of Nevada,

More information

The Study of Teaching-learning-based Optimization Algorithm

The Study of Teaching-learning-based Optimization Algorithm Advanced Scence and Technology Letters Vol. (AST 06), pp.05- http://dx.do.org/0.57/astl.06. The Study of Teachng-learnng-based Optmzaton Algorthm u Sun, Yan fu, Lele Kong, Haolang Q,, Helongang Insttute

More information

The optimal delay of the second test is therefore approximately 210 hours earlier than =2.

The optimal delay of the second test is therefore approximately 210 hours earlier than =2. THE IEC 61508 FORMULAS 223 The optmal delay of the second test s therefore approxmately 210 hours earler than =2. 8.4 The IEC 61508 Formulas IEC 61508-6 provdes approxmaton formulas for the PF for smple

More information

Formulas for the Determinant

Formulas for the Determinant page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use

More information

A New Evolutionary Computation Based Approach for Learning Bayesian Network

A New Evolutionary Computation Based Approach for Learning Bayesian Network Avalable onlne at www.scencedrect.com Proceda Engneerng 15 (2011) 4026 4030 Advanced n Control Engneerng and Informaton Scence A New Evolutonary Computaton Based Approach for Learnng Bayesan Network Yungang

More information

A new construction of 3-separable matrices via an improved decoding of Macula s construction

A new construction of 3-separable matrices via an improved decoding of Macula s construction Dscrete Optmzaton 5 008 700 704 Contents lsts avalable at ScenceDrect Dscrete Optmzaton journal homepage: wwwelsevercom/locate/dsopt A new constructon of 3-separable matrces va an mproved decodng of Macula

More information

Volume 18 Figure 1. Notation 1. Notation 2. Observation 1. Remark 1. Remark 2. Remark 3. Remark 4. Remark 5. Remark 6. Theorem A [2]. Theorem B [2].

Volume 18 Figure 1. Notation 1. Notation 2. Observation 1. Remark 1. Remark 2. Remark 3. Remark 4. Remark 5. Remark 6. Theorem A [2]. Theorem B [2]. Bulletn of Mathematcal Scences and Applcatons Submtted: 016-04-07 ISSN: 78-9634, Vol. 18, pp 1-10 Revsed: 016-09-08 do:10.1805/www.scpress.com/bmsa.18.1 Accepted: 016-10-13 017 ScPress Ltd., Swtzerland

More information