Sensitivity Analysis in Markov Networks
Hei Chan and Adnan Darwiche
Computer Science Department
University of California, Los Angeles
Los Angeles, CA

Abstract

This paper explores the topic of sensitivity analysis in Markov networks, by tackling questions similar to those arising in the context of Bayesian networks: the tuning of parameters to satisfy query constraints, and the bounding of query changes when perturbing network parameters. Even though the distribution induced by a Markov network corresponds to ratios of multi-linear functions, whereas the distribution induced by a Bayesian network corresponds to multi-linear functions, the results we obtain for Markov networks are as effective computationally as those obtained for Bayesian networks. This similarity is due to the fact that conditional probabilities have the same functional form in both Bayesian and Markov networks, which turns out to be the more influential factor. The major difference we found, however, is in how changes in parameter values should be quantified, as such parameters are interpreted differently in Bayesian networks and Markov networks.

1 Introduction

In the domain of uncertainty reasoning, graphical models are used to embed and represent probabilistic dependencies between random variables. There are two major types of graphical models: Bayesian networks [Pearl, 1988; Jensen, 2001] and Markov networks [Pearl, 1988]. Both models consist of two components, a qualitative part and a quantitative part. In the qualitative part, a graph represents the interactions between variables: variables with direct interactions are connected by edges, and independence relationships between variables can be inferred from the network structure. In the quantitative part, a set of parameters quantifies the strengths of the influences between related variables. The joint probability distribution can then be induced from these two components.
Bayesian networks and Markov networks differ in both the qualitative and quantitative parts. For the qualitative part, a Bayesian network uses directed edges to represent influences from one variable to another, and no cycles are allowed in the graph, while a Markov network uses undirected edges to represent symmetrical probabilistic dependencies between variables. For the quantitative part, the parameters of a Bayesian network are specified in conditional probability tables (CPTs), where the CPT of a variable consists of the conditional probability distributions of that variable given instantiations of its parents, the variables that directly influence it. Because these parameters are conditional probabilities, they are more intuitive, but they must obey certain properties; for example, the probabilities in each conditional distribution must be normalized and sum to 1. The parameters of a Markov network, on the other hand, are given in potentials defined over instantiations of cliques, which are maximal sets of variables such that every pair of variables in a clique is connected by an edge. These parameters do not need to be normalized and can be changed more freely.

The topic of sensitivity analysis is concerned with understanding and characterizing the relationship between the local parameters quantifying a network and the global queries computed from the network. Sensitivity analysis has been investigated quite comprehensively in Bayesian networks [Laskey, 1995; Castillo et al., 1997; Jensen, 1999; Kjærulff and van der Gaag, 2000; Chan and Darwiche, 2002; 2004; 2005]. Earlier work on the subject observed that the distribution induced by a Bayesian network corresponds to multi-linear functions, providing effective computational methods for computing ∂Pr(e)/∂θ_{x|u}: the partial derivative of a joint probability with respect to a network parameter.
These earlier results were recently pursued further, providing effective methods for solving the following problems.

What is the necessary parameter change we need to apply such that a given query constraint is satisfied? For example, we may want some query value Pr(z | e) to be at least p after changing network parameters. Efficient solutions were given to this problem for computing minimum changes of single parameters [Chan and Darwiche, 2002], and of multiple parameters in a single CPT [Chan and Darwiche, 2004].

What is the bound on the change in some query value if we apply an arbitrary parameter change? Here, we are interested in finding a general bound for any query and any parameter of any network. It has been shown that the log-odds change in any conditional query in a
Bayesian network is bounded by the log-odds change in any single parameter [Chan and Darwiche, 2002].

What is the bound on the difference between some query value induced by two networks that have the same structure but different parameter values? The solution to this problem is based on a recently proposed distance measure [Chan and Darwiche, 2005], which allows one to bound the difference between query values under two distributions. Based on this distance measure, and given two Bayesian networks that differ by only a single CPT, the global distance measure between the distributions induced by the networks can be computed from the local distance measure between the CPTs, thereby allowing one to bound the difference between the query values. Moreover, if we change multiple CPTs that do not intersect with each other, the global distance measure can still be computed as the sum of the local distance measures between the individual CPTs.

In this paper, we address these key questions of sensitivity analysis with respect to Markov networks. The main question of interest is the extent to which these promising results hold in Markov networks as well. There is indeed a key difference between Bayesian networks and Markov networks which appears to suggest a lack of parallels in this case: whereas the joint distribution induced by a Bayesian network corresponds to multi-linear functions, the joint distribution induced by a Markov network corresponds to ratios of multi-linear functions. As it turns out, however, the conditional probability function has the same functional form in both Bayesian and Markov networks, namely a ratio of two multi-linear functions. This similarity turns out to be the key factor here, allowing us to derive similarly effective results for Markov networks. This is greatly beneficial because we can answer the previous three questions in the context of Markov networks as well, with the same computational complexity.
For example, we can go through each parameter in a Markov network, compute the minimum single-parameter change necessary to enforce a query constraint, and find the one that perturbs the network the least. Alternatively, we can change all parameters in a single clique table, and find the change that minimizes network perturbation. Afterwards, we can compute a bound on any query change using the distance measure incurred by the parameter change. Our results, however, point to a main semantical difference between Bayesian networks and Markov networks, relating to how one should quantify and measure parameter changes. That is, how should one quantify a parameter change from .3 to .4? In a Bayesian network, parameters are interpreted as conditional probabilities, and the measure which quantifies the change is the relative odds change in the parameter value. This means query values are much more sensitive to changes in extreme probability values, whether close to 0 or 1. In a Markov network, on the other hand, parameters are interpreted as compatibility ratios, and the measure which quantifies the change is the relative change in the parameter value itself. This difference stems from how the parameters in the two models are interpreted, and will be explained in more depth later.

This paper is structured as follows. Section 2 is dedicated to the definition and parameterization of Markov networks. Section 3 presents a procedure which tunes parameters in a Markov network in order to satisfy a query constraint. Section 4 introduces a distance measure that can be used to bound the query change between distributions induced by two Markov networks. Section 5 concludes the paper.

Figure 1: A sample Markov network structure, over the variables A1, A2, B1, B2, in terms of an undirected graph G.

2 Markov networks

A Markov network M = (G, Θ) consists of two parts: the network structure in terms of an undirected graph G, and the parameterization Θ.
Each node in the undirected graph G represents a random variable, and two variables are connected by an edge if there exists a direct interaction between them. The absence of an edge between two variables means any potential interaction between them is indirect and conditional upon other variables. As an example, consider four individuals, A1, A2, B1, B2, among whom we wish to represent the possible transmission of a contagious disease. A network structure over these variables is shown in Figure 1. Each pair of variables {Ai, Bj} is connected by an edge, meaning these pairs of individuals are in direct contact with each other. However, A1 and A2 are not connected by an edge, meaning they do not interact directly with each other, although diseases can still be transmitted between them indirectly through B1 or B2.

To quantify the edges in a Markov network, we first define cliques in the network structure. A clique C is a maximal set of variables where each pair of variables in the set C is connected by an edge. In the network structure in Figure 1, there are four cliques: {A1, B1}, {A1, B2}, {A2, B1}, {A2, B2}. For each clique C, we define a table Θ_C that assigns a non-negative number to each instantiation c of the variables C, measuring the relative degree of compatibility associated with that instantiation. The numbers indicate the level of compatibility between the values of the variables in the clique, where direct interactions exist between every pair of variables. For example, in our example network, we define the same clique table Θ_{Ai,Bj}, shown in Table 1, for each clique {Ai, Bj}. From this table, both Ai and Bj not having the disease can be viewed as four times more compatible than both Ai and Bj having the disease, since θ_{āi,b̄j} = 4θ_{ai,bj}.

Footnote 1: Parameters may be estimated on an ad hoc basis [Geman and Geman, 1984] or by statistical models. It can be difficult to assign meanings and numbers to Markov network parameters intuitively [Pearl, 1988].
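The clique definition above can be sketched in code. Below is a minimal brute-force enumeration of the maximal cliques of the Figure 1 structure (fine for a four-node graph; specialized algorithms exist for larger ones). The variable and helper names are our own, not the paper's:

```python
from itertools import combinations

nodes = ["A1", "A2", "B1", "B2"]
edges = {frozenset(e) for e in [("A1", "B1"), ("A1", "B2"),
                                ("A2", "B1"), ("A2", "B2")]}

def is_clique(s):
    """Every pair of variables inside s must be connected by an edge."""
    return all(frozenset(p) in edges for p in combinations(s, 2))

# A maximal clique is a clique that no strictly larger clique contains.
cliques = [set(s) for r in range(1, len(nodes) + 1)
           for s in combinations(nodes, r) if is_clique(s)]
maximal = [c for c in cliques if not any(c < d for d in cliques)]

print(sorted(sorted(c) for c in maximal))
# The four cliques of the example: {A1,B1}, {A1,B2}, {A2,B1}, {A2,B2}
```

Note that the singletons {A1}, ..., {B2} are cliques but not maximal ones, since each is contained in one of the edge cliques.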
A_i  B_j  | Θ_{Ai,Bj}
a_i  b_j  | 1
a_i  b̄_j | 2
ā_i  b_j  | 3
ā_i  b̄_j | 4

Table 1: The clique table of {Ai, Bj} of the Markov network whose structure is shown in Figure 1.

2.1 Joint and conditional probabilities induced by Markov networks

We now proceed to find the distribution induced by a Markov network. If X is the set of all variables in the Markov network M = (G, Θ), the joint potential ψ over X induced by M is defined as:

ψ(x) = ∏_{θ_c: c ∼ x} θ_c.  (1)

That is, ψ(x) is the product of one parameter from each clique C, namely the parameter θ_c whose clique instantiation c is consistent with x. The joint distribution Pr^M induced by Markov network M is then defined as its normalized joint potential:

Pr^M(x) = ψ(x) / Σ_x ψ(x) = K ψ(x),

where K = 1 / Σ_x ψ(x) is the normalizing constant. From this equation, we can easily verify that the specific parameter values in a clique are not important, but their ratios are. This is a major departure from Bayesian networks, where the specific parameter values in CPTs are important.

Given the network structure in Figure 1 with the parameterization in Table 1, we can compute the joint probability of any full instantiation of the variables induced by our example network. For example, the joint probability of a1, ā2, b1, b̄2 is equal to K θ_{a1,b1} θ_{a1,b̄2} θ_{ā2,b1} θ_{ā2,b̄2} = K · 1 · 2 · 3 · 4 = 24K, where K is the normalizing constant.

For an instantiation e of variables E ⊆ X, we can express the joint probability Pr^M(e) as:

Pr^M(e) = Σ_{x ∼ e} Pr^M(x) = Σ_{x ∼ e} K ψ(x) = K φ(e),

where the function φ(e) is defined as the sum of the potentials of the instantiations x which are consistent with e:

φ(e) = Σ_{x ∼ e} ψ(x) = Σ_{x ∼ e} ∏_{θ_c: c ∼ x} θ_c,  (2)

and any conditional probability Pr^M(z | e) as:

Pr^M(z | e) = Pr^M(z, e) / Pr^M(e) = K φ(z, e) / (K φ(e)) = φ(z, e) / φ(e).  (3)

From Equation 2, we can see that both φ(z, e) and φ(e) are sums of products of the network parameters. Therefore, the conditional probability Pr^M(z | e) induced by a Markov network can be expressed as the ratio of two multi-linear functions of the network parameters.
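Equation 1 and the normalizing constant K can be checked by brute-force enumeration on the example network. The sketch below is illustrative: it uses the clique values as printed in Table 1, encodes the positive value (a, b) as 1 and the negated value (ā, b̄) as 0, and the function names are our own:

```python
from itertools import product

CLIQUES = [("A1", "B1"), ("A1", "B2"), ("A2", "B1"), ("A2", "B2")]
# Shared table from Table 1, keyed by (value of A_i, value of B_j).
THETA = {(1, 1): 1.0, (1, 0): 2.0, (0, 1): 3.0, (0, 0): 4.0}
VARS = ["A1", "A2", "B1", "B2"]

def psi(x):
    """Joint potential (Equation 1): product of one parameter per clique."""
    prod = 1.0
    for a, b in CLIQUES:
        prod *= THETA[(x[a], x[b])]
    return prod

STATES = [dict(zip(VARS, vals)) for vals in product([1, 0], repeat=4)]
K = 1.0 / sum(psi(x) for x in STATES)   # normalizing constant

def pr(x):
    """Joint probability induced by the network: Pr(x) = K * psi(x)."""
    return K * psi(x)

# The worked example: psi(a1, a2-bar, b1, b2-bar) = 1 * 2 * 3 * 4 = 24,
# so its joint probability is 24K.
example = {"A1": 1, "A2": 0, "B1": 1, "B2": 0}
print(psi(example), pr(example))
```

Enumerating all 2^n instantiations is exponential, of course; it is used here only to make the definitions concrete on a four-variable network.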
As a comparison, the parameterization Θ of a directed acyclic graph N for a Bayesian network consists of a set of CPTs, one for each variable X, which contain conditional probabilities of the form θ_{x|u} = Pr(x | u), where U are the parents of X. The joint distribution Pr^B induced by Bayesian network B = (N, Θ) is:

Pr^B(x) = ∏_{θ_{x|u}: xu ∼ x} θ_{x|u}.

The joint probability Pr^B(e) can be expressed as:

Pr^B(e) = Σ_{x ∼ e} ∏_{θ_{x|u}: xu ∼ x} θ_{x|u},

and any conditional probability Pr^B(z | e) as:

Pr^B(z | e) = Pr^B(z, e) / Pr^B(e) = [Σ_{x ∼ z,e} ∏_{θ_{x|u}: xu ∼ x} θ_{x|u}] / [Σ_{x ∼ e} ∏_{θ_{x|u}: xu ∼ x} θ_{x|u}].  (4)

We can easily see that the similarity between the expressions of Pr^M(z | e) and Pr^B(z | e) in Equations 3 and 4 lies in the fact that both can be expressed as ratios of two multi-linear functions of the network parameters. This is different for joint probabilities: in a Bayesian network they are multi-linear functions of the network parameters, while in a Markov network they are ratios of multi-linear functions of the network parameters, since the normalizing constant can be expressed as K = 1 / Σ_x ψ(x) = 1/φ(true). This similarity in terms of conditional probabilities will be the basis of our results in the following section.

3 Tuning of parameters in Markov networks

We now answer the following question in the context of Markov networks: what is the necessary change we can apply to certain parameter(s) such that a query constraint is satisfied? For example, if Pr is the probability distribution induced by a Markov network, we may want to satisfy the query constraint Pr(z | e) ≥ k, for some k ∈ [0, 1]. (Footnote 2)

Notice that given all the parameters θ_c in the clique C, φ(e) can be expressed as a multi-linear function of these parameters, as shown in Equation 2. Since φ(e) is linear in each parameter, and no two parameters in the same clique are multiplied together, if we apply a change of δ_{θ_c} to each parameter θ_c in the clique C, the change in φ(e) is:

Δφ(e) = Σ_{θ_c} [∂φ(e)/∂θ_c] · δ_{θ_c}.

To ensure the query constraint Pr(z | e) ≥ k, from Equation 3, it is equivalent to ensure the inequality φ(z, e) ≥ k · φ(e).
If the current φ values of z, e and e are ϕ(z, e) and ϕ(e) respectively, this inequality becomes:

ϕ(z, e) + Δφ(z, e) ≥ k (ϕ(e) + Δφ(e)),

or:

Δφ(z, e) − k · Δφ(e) ≥ −(ϕ(z, e) − k · ϕ(e)).

Rearranging the terms, we get the following theorem and corollary.

Footnote 2: We can easily extend the results in this section to other forms of constraints, such as Pr(z | e) ≤ k, Pr(z1 | e) − Pr(z2 | e) ≥ k, or Pr(z1 | e)/Pr(z2 | e) ≥ k.
Theorem 1 To ensure that the probability distribution Pr induced by a Markov network satisfies the query constraint Pr(z | e) ≥ k, we must change each parameter θ_c in the clique C by δ_{θ_c} such that:

Σ_{θ_c} α(θ_c) · δ_{θ_c} ≥ −(ϕ(z, e) − k · ϕ(e)),  (5)

where ϕ(z, e) and ϕ(e) are the current φ values of z, e and e respectively, and:

α(θ_c) = ∂φ(z, e)/∂θ_c − k · ∂φ(e)/∂θ_c.  (6)

Corollary 1 If instead of changing all parameters in the clique C, we are only allowed to change a single parameter θ_c by δ_{θ_c}, the solution of Theorem 1 becomes:

α(θ_c) · δ_{θ_c} ≥ −(ϕ(z, e) − k · ϕ(e)),  (7)

which returns a solution interval of possible δ_{θ_c}.

Therefore, to solve for δ_{θ_c} in Inequality 7 for all parameters in the Markov network, we need to compute the original values ϕ(z, e) and ϕ(e), which should already be known from computing the original probability of z | e, and the partial derivatives ∂φ(z, e)/∂θ_c and ∂φ(e)/∂θ_c, which give us α(θ_c) in Equation 6 for all parameters. To do this, we can use a procedure with time complexity O(n exp(w)), where n is the number of variables and w is the treewidth of the Markov network, similar to the one proposed to compute partial derivatives in a Bayesian network [Darwiche, 2003]. This can greatly help users debug their network when they are faced with query results that do not match their expectations.

We now use our example network to illustrate this procedure. The probability distribution Pr induced by the current parameterization gives us the conditional query value Pr(ā2 | a1) = .789. Assume that we would like to change a single parameter in the clique {A2, B1} to ensure the constraint Pr(ā2 | a1) ≥ .9. We need to compute the α values for each parameter in the clique using Equation 6, and then use Inequality 7 to solve for the minimal δ, obtaining a solution interval for each of the four parameters in the clique. Notice, however, that since parameter values have to be non-negative, the solution interval for θ_{a2,b1} is impossible to achieve. Therefore, no single-parameter change of θ_{a2,b1} can ensure our query constraint.
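The single-parameter computation of Corollary 1 can be sketched as follows for the example constraint, changing θ_{a2,b̄1} in clique {A2, B1}. This is a minimal sketch, again using the Table 1 values as printed, so the resulting δ is illustrative rather than the paper's exact figure; φ and its partial derivatives are computed by plain enumeration rather than the O(n exp(w)) procedure, and all names are our own:

```python
from itertools import product

VARS = ["A1", "A2", "B1", "B2"]
CLIQUES = [("A1", "B1"), ("A1", "B2"), ("A2", "B1"), ("A2", "B2")]
TABLE = {(1, 1): 1.0, (1, 0): 2.0, (0, 1): 3.0, (0, 0): 4.0}
THETAS = {c: dict(TABLE) for c in CLIQUES}   # one table per clique

def phi(evidence, thetas):
    """phi(e): sum of joint potentials consistent with the evidence."""
    total = 0.0
    for vals in product([1, 0], repeat=4):
        x = dict(zip(VARS, vals))
        if any(x[v] != w for v, w in evidence.items()):
            continue
        p = 1.0
        for a, b in CLIQUES:
            p *= thetas[(a, b)][(x[a], x[b])]
        total += p
    return total

def d_phi(evidence, clique, key):
    """Partial derivative of phi w.r.t. one parameter (phi is linear in it)."""
    hi = {c: dict(t) for c, t in THETAS.items()}; hi[clique][key] = 1.0
    lo = {c: dict(t) for c, t in THETAS.items()}; lo[clique][key] = 0.0
    return phi(evidence, hi) - phi(evidence, lo)

e = {"A1": 1}                       # evidence a1
ze = {"A1": 1, "A2": 0}             # query instantiation a2-bar together with e
k = 0.9
clique, key = ("A2", "B1"), (1, 0)  # the parameter theta_{a2, b1-bar}

alpha = d_phi(ze, clique, key) - k * d_phi(e, clique, key)   # Equation 6
rhs = -(phi(ze, THETAS) - k * phi(e, THETAS))                # RHS of Inequality 7
delta = rhs / alpha    # boundary of the solution interval: alpha * delta = rhs

# Applying exactly this delta pushes Pr(a2-bar | a1) to exactly k.
THETAS[clique][key] += delta
print(THETAS[clique][key], phi(ze, THETAS) / phi(e, THETAS))
```

Since α(θ_{a2,b̄1}) is negative here, the boundary value of δ is the smallest change (in magnitude) that satisfies the constraint.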
However, we can decrease the parameter θ_{a2,b̄1} from 2 to .533 to ensure our query constraint. If we are interested in changing all the parameters in the clique {A2, B1} to satisfy our query constraint, we need to find a solution that satisfies Inequality 5. As a consequence, we are now faced with multiple solutions for changing the parameters, and we want to commit to one which disturbs the network the least. We will discuss this in the next section using a distance measure which measures network perturbation.

4 Bounding belief change between Markov networks using a distance measure

We now answer the following question in the context of Markov networks: what is the bound on the difference between some query value induced by two networks that have the same structure but different parameter values? We will answer it using a previously proposed distance measure. Let Pr and Pr′ be two probability distributions over the same set of variables X. We use a distance measure D(Pr, Pr′) defined as follows [Chan and Darwiche, 2005]:

D(Pr, Pr′) = ln max_x [Pr′(x)/Pr(x)] − ln min_x [Pr′(x)/Pr(x)],

where we define 0/0 = 1 and ∞/∞ = 1. (Footnote 3)

The significance of this distance measure is that if D(Pr, Pr′) = d, the change in the probability of any conditional query z | e is bounded by:

e^(−d) ≤ O′(z | e) / O(z | e) ≤ e^d,

where O′(z | e) and O(z | e) are the odds of z | e under the distributions Pr′ and Pr respectively. (Footnote 4) Given p = Pr(z | e), this result can also be expressed in terms of probabilities:

p e^(−d) / (p (e^(−d) − 1) + 1) ≤ Pr′(z | e) ≤ p e^d / (p (e^d − 1) + 1).  (8)

The advantage of this distance measure is that it can be computed using local information for Bayesian networks. Given distributions Pr^B and Pr^B′ induced by two Bayesian networks B and B′ that differ by only the CPT of a single variable X, if Pr(u) > 0 for every instantiation u of the parents U, the distance measure between Pr^B and Pr^B′ can be computed as:

D(Pr^B, Pr^B′) = D(Θ_{X|U}, Θ′_{X|U}) = ln max_{x,u} [θ′_{x|u}/θ_{x|u}] − ln min_{x,u} [θ′_{x|u}/θ_{x|u}].  (9)

This is useful because we can compute the bound on any query change using Inequality 8 by computing the local distance between the CPTs Θ_{X|U} and Θ′_{X|U} of B and B′. For example, if X is binary, and we only change a single parameter θ_{x|u} while applying a complementary change to θ_{x̄|u}, the bound on the change in any query z | e is given by:

min[O(θ′_{x|u})/O(θ_{x|u}), O(θ_{x|u})/O(θ′_{x|u})] ≤ O^B′(z | e) / O^B(z | e) ≤ max[O(θ′_{x|u})/O(θ_{x|u}), O(θ_{x|u})/O(θ′_{x|u})],  (10)

where O(θ′_{x|u}) and O(θ_{x|u}) are the odds of the parameters θ′_{x|u} and θ_{x|u} respectively. Intuitively, this means the relative change in the query odds is bounded by the relative change in the parameter odds.

Footnote 3: This measure is a distance measure because it satisfies positiveness, symmetry and the triangle inequality.

Footnote 4: The odds of z | e under distribution Pr is defined as: O(z | e) = Pr(z | e)/(1 − Pr(z | e)).
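The distance measure and the bound of Inequality 8 can be sketched as follows. The two distributions below are hand-made illustrations over three states, not taken from the paper, and assume strictly positive probabilities:

```python
import math

def distance(pr, pr2):
    """D(Pr, Pr') = ln max_x Pr'(x)/Pr(x) - ln min_x Pr'(x)/Pr(x)
    (strictly positive distributions assumed; the paper defines 0/0 = 1)."""
    ratios = [q / p for p, q in zip(pr, pr2)]
    return math.log(max(ratios)) - math.log(min(ratios))

pr  = [0.5, 0.3, 0.2]   # illustrative distribution Pr
pr2 = [0.4, 0.4, 0.2]   # illustrative perturbed distribution Pr'
d = distance(pr, pr2)

# Bound (8) on the query "state 0" (here with trivial evidence, so
# Pr(z | e) is just pr[0]):
p, p2 = pr[0], pr2[0]
low  = p * math.exp(-d) / (p * (math.exp(-d) - 1) + 1)
high = p * math.exp(d)  / (p * (math.exp(d)  - 1) + 1)
print(d, low, p2, high)   # the new query value p2 falls inside [low, high]
```

The same bound applies to every conditional query under the two distributions, which is what makes a single number d so useful.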
We can get a similar result for Markov networks, where the distance measure between the distributions induced by two Markov networks that differ by only a single clique table can be computed from the distance measure between the tables.

Theorem 2 Given distributions Pr^M and Pr^M′ induced by two Markov networks M and M′ which differ by only the parameters in a single clique C, such that the clique tables are Θ_C and Θ′_C respectively, the distance measure between Pr^M and Pr^M′ is given by:

D(Pr^M, Pr^M′) = D(Θ_C, Θ′_C) = ln max_c [θ′_c/θ_c] − ln min_c [θ′_c/θ_c],  (11)

if ∂ψ(x)/∂θ_c ≠ 0 for all x. (Footnote 5)

Proof Given an instantiation x such that c ∼ x, the joint potential ψ of x is linear in the parameter θ_c, and the ratio of the potentials ψ′(x) and ψ(x) induced by M′ and M respectively is:

ψ′(x)/ψ(x) = θ′_c/θ_c,

if ∂ψ(x)/∂θ_c ≠ 0. We have:

Pr^M′(x) / Pr^M(x) = K′ ψ′(x) / (K ψ(x)) = K′ θ′_c / (K θ_c).

Note that since the parameters have changed, the normalizing constants K and K′ of networks M and M′ respectively are different. Therefore, the distance measure between Pr^M and Pr^M′ is given by:

D(Pr^M, Pr^M′) = ln max_x [Pr^M′(x)/Pr^M(x)] − ln min_x [Pr^M′(x)/Pr^M(x)]
= ln max_c [K′ θ′_c / (K θ_c)] − ln min_c [K′ θ′_c / (K θ_c)]
= ln max_c [θ′_c/θ_c] − ln min_c [θ′_c/θ_c]
= D(Θ_C, Θ′_C),

if ∂ψ(x)/∂θ_c ≠ 0 for all x, since the ratio K′/K cancels between the two terms.

Therefore, the global distance measure between the induced distributions is equal to the local distance measure between the individual clique tables. This is useful for computing the bound on a query change after changing the parameters in a clique table. For example, what is the bound on the change in some query value if we apply an arbitrary change to a single parameter θ_c? The distance measure in this case is:

D(Pr^M, Pr^M′) = |ln (θ′_c/θ_c)|,

Footnote 5: This condition is satisfied when the network parameters are all strictly positive. However, this is a sufficient condition, not a necessary one. The necessary condition is that there exists some x such that ∂ψ(x)/∂θ_c ≠ 0, for each parameter θ_c. This means that changing any parameter will have some impact on the joint potential ψ.
and the change in the query z | e is bounded by:

min[θ′_c/θ_c, θ_c/θ′_c] ≤ O^M′(z | e) / O^M(z | e) ≤ max[θ′_c/θ_c, θ_c/θ′_c].  (12)

This means that for Markov networks, the relative change in the query odds is bounded by the relative change in the parameter itself, not by the relative change in the parameter odds as for Bayesian networks. This is an important distinction between Markov networks and Bayesian networks.

As an example, suppose we have a network and we want to ensure the robustness of the query Pr(z | e) after we apply a parameter change. Assume that we define robustness as a relative change of less than 1.5 in any query odds. For example, if currently Pr(z | e) = .75, the new query value must stay in the interval [.667, .818] after the parameter change. We may ask: how much change can we apply to a parameter if we want to ensure robustness? The answers for Bayesian networks and Markov networks differ as a consequence of our previous results, as we show next.

Figure 2: The amount of parameter change δ that would guarantee the query Pr(z | e) = .75 to stay within the interval [.667, .818], as a function of the current Bayesian network parameter value p.

For a Bayesian network, the amount of permissible change is determined by Inequality 10, and is plotted in Figure 2. We can see that the amount of permissible change is small when the parameters have extreme values close to 0 or 1, because the relative odds change is large when even a very small absolute change is applied. On the other hand, for a Markov network, the amount of permissible change is determined by Inequality 12, and is plotted in Figure 3. We can see that the amount of permissible change is proportional to the parameter value, because the relative change is used instead of the relative odds change.
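The two permissible-change curves can be sketched directly from Inequalities 10 and 12, computing for each current parameter value p the largest upward change δ that keeps any query-odds change within a factor of 1.5. The helper names `bn_delta` and `mn_delta` are our own:

```python
import math

R = 1.5  # allowed relative change in any query odds

def bn_delta(p):
    """Bayesian network (Inequality 10): the parameter ODDS may change
    by at most a factor of R. Solve O(p + delta) = R * O(p)."""
    odds = p / (1 - p)
    new_p = (R * odds) / (1 + R * odds)   # probability whose odds are R * odds
    return new_p - p

def mn_delta(p):
    """Markov network (Inequality 12): the parameter VALUE itself may
    change by at most a factor of R."""
    return (R - 1) * p

for p in [0.1, 0.5, 0.9]:
    print(p, round(bn_delta(p), 4), round(mn_delta(p), 4))
```

As the figures describe, `bn_delta` peaks around p = .5 and shrinks toward 0 and 1, while `mn_delta` simply grows linearly with p.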
Therefore, for a Bayesian network, the sensitivity of the network with respect to a parameter is largest for extreme values close to 0 or 1, and becomes smaller as the parameter approaches .5, while for a Markov network, the sensitivity of the network with respect to a parameter is proportional to its value, and increases as the value grows larger. The distance measure is useful in many aspects of sensitivity analysis. For example, given the possible single-parameter changes in our example from the previous section, we can choose the one which perturbs the network the least according to the distance measure. In this case, the most preferred
single parameter change is to decrease the parameter θ_{a2,b̄1} from 2 to .533, incurring a distance measure of ln(2/.533) ≈ 1.32.

Figure 3: The amount of parameter change δ that would guarantee the query Pr(z | e) = .75 to stay within the interval [.667, .818], as a function of the current Markov network parameter value p.

Moreover, we can also use the distance measure to find the optimal solution for changing all parameters in a clique table, which is the solution that satisfies Inequality 5 and minimizes the distance measure. As the distance measure D(Θ_C, Θ′_C) is computed from the maximum and minimum relative changes in the parameters, the relative changes in all parameters must be the same for the optimal solution: to obtain another solution that satisfies the constraint, we would have to increase the relative change in one parameter while decreasing the relative change in another, thereby incurring a larger distance measure. For example, in our running example, to ensure our query constraint, we would like to decrease the parameters θ_{a2,b1} and θ_{a2,b̄1} and increase the parameters θ_{ā2,b1} and θ_{ā2,b̄1}, such that the relative changes in all parameters are the same. However, since only the ratios between the parameters are important, we can keep the first two parameters constant and only increase the last two. The optimal solution of Inequality 5 keeps θ_{a2,b1} and θ_{a2,b̄1} unchanged (δ_{θ_{a2,b1}} = δ_{θ_{a2,b̄1}} = 0) and increases θ_{ā2,b1} by δ_{θ_{ā2,b1}} = 6.257, with θ_{ā2,b̄1} increased by the same relative amount. This parameter change incurs a distance measure of .350. It involves all parameters in the clique table, yet incurs a smaller distance measure than any of the single-parameter changes computed earlier. Finally, if we change the parameters in different clique tables which do not share any variables, the distance measure can be computed as the sum of the local distance measures between the clique tables. This result is also similar to the case of Bayesian networks [Chan and Darwiche, 2004].
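The optimal all-parameters change can be sketched as follows, using the Table 1 values as printed (so the numbers are illustrative): scale the two ā2 rows of clique {A2, B1} by a common factor r, pick the smallest r meeting the constraint, and read off the incurred distance ln r. Because every potential consistent with (ā2, a1) contains exactly one ā2 parameter of that clique, scaling those rows by r scales φ(ā2, a1) by r while leaving φ(a2, a1) untouched. The sketch also checks Theorem 2 numerically, since the global distance it computes by enumeration coincides with the local value ln r:

```python
import math
from itertools import product

VARS = ["A1", "A2", "B1", "B2"]
CLIQUES = [("A1", "B1"), ("A1", "B2"), ("A2", "B1"), ("A2", "B2")]
TABLE = {(1, 1): 1.0, (1, 0): 2.0, (0, 1): 3.0, (0, 0): 4.0}
STATES = [dict(zip(VARS, v)) for v in product([1, 0], repeat=4)]

def potential(thetas, x):
    p = 1.0
    for a, b in CLIQUES:
        p *= thetas[(a, b)][(x[a], x[b])]
    return p

def query(thetas, z, e):
    """Pr(z | e) = phi(z, e) / phi(e) by enumeration (Equation 3)."""
    num = den = 0.0
    for x in STATES:
        p = potential(thetas, x)
        if all(x[v] == w for v, w in e.items()):
            den += p
            if all(x[v] == w for v, w in z.items()):
                num += p
    return num / den

def distance(thetas, thetas2):
    """Global distance D(Pr, Pr') between the induced distributions."""
    pots = [potential(thetas, x) for x in STATES]
    pots2 = [potential(thetas2, x) for x in STATES]
    z1, z2 = sum(pots), sum(pots2)
    ratios = [(q / z2) / (p / z1) for p, q in zip(pots, pots2)]
    return math.log(max(ratios)) - math.log(min(ratios))

thetas = {c: dict(TABLE) for c in CLIQUES}
z, e, k = {"A2": 0}, {"A1": 1}, 0.9

# Smallest r with r * phi(z, e) / (r * phi(z, e) + phi(a2, e)) >= k:
p0 = query(thetas, z, e)
r = k * (1 - p0) / ((1 - k) * p0)

thetas2 = {c: dict(t) for c, t in thetas.items()}
for key in [(0, 1), (0, 0)]:            # the two a2-bar rows of the clique
    thetas2[("A2", "B1")][key] *= r

print(query(thetas2, z, e))                    # the constraint holds exactly
print(distance(thetas, thetas2), math.log(r))  # global D equals local ln r
```

The distance incurred is ln r rather than 2 ln r precisely because only parameter ratios matter, which is the point made above about keeping the first two parameters constant.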
5 Conclusion

In this paper, we extended some of the main results on sensitivity analysis from Bayesian networks to Markov networks. We were able to find many parallels between the results in the two models, even given the differences in how their parameters are interpreted. Our results allow for the effective debugging of Markov networks by changing parameters in a single potential to ensure query constraints. We also showed how to compute a bound on any query change between two Markov networks that have the same structure but differ in the parameters of a single potential, and how to choose parameter changes which minimize network disturbance. Finally, we identified a key semantical difference between Bayesian networks and Markov networks, relating to how parameter changes should be quantified.

References

[Castillo et al., 1997] Enrique Castillo, José Manuel Gutiérrez, and Ali S. Hadi. Sensitivity analysis in discrete Bayesian networks. IEEE Transactions on Systems, Man, and Cybernetics, Part A (Systems and Humans), 27, 1997.

[Chan and Darwiche, 2002] Hei Chan and Adnan Darwiche. When do numbers really matter? Journal of Artificial Intelligence Research, 17, 2002.

[Chan and Darwiche, 2004] Hei Chan and Adnan Darwiche. Sensitivity analysis in Bayesian networks: From single to multiple parameters. In Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (UAI), pages 67-75, Arlington, Virginia, 2004. AUAI Press.

[Chan and Darwiche, 2005] Hei Chan and Adnan Darwiche. A distance measure for bounding probabilistic belief change. International Journal of Approximate Reasoning, 38, 2005.

[Darwiche, 2003] Adnan Darwiche. A differential approach to inference in Bayesian networks. Journal of the ACM, 50, 2003.

[Geman and Geman, 1984] Stuart Geman and Donald Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 1984.

[Jensen, 1999] Finn V. Jensen. Gradient descent training of Bayesian networks.
In Proceedings of the Fifth European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU), Berlin, Germany, 1999. Springer-Verlag.

[Jensen, 2001] Finn V. Jensen. Bayesian Networks and Decision Graphs. Springer-Verlag, New York, 2001.

[Kjærulff and van der Gaag, 2000] Uffe Kjærulff and Linda C. van der Gaag. Making sensitivity analysis computationally efficient. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI), San Francisco, California, 2000. Morgan Kaufmann Publishers.

[Laskey, 1995] Kathryn B. Laskey. Sensitivity analysis for probability assessments in Bayesian networks. IEEE Transactions on Systems, Man, and Cybernetics, 25, 1995.

[Pearl, 1988] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, San Francisco, California, 1988.
More informationA NETWORK SIMPLEX ALGORITHM FOR THE MINIMUM COST-BENEFIT NETWORK FLOW PROBLEM
NETWORK SIMPLEX LGORITHM FOR THE MINIMUM COST-BENEFIT NETWORK FLOW PROBLEM Cen Çalışan, Utah Valley University, 800 W. University Parway, Orem, UT 84058, 801-863-6487, en.alisan@uvu.edu BSTRCT The minimum
More informationStress triaxiality to evaluate the effective distance in the volumetric approach in fracture mechanics
IOSR Journal of ehanial and Civil Engineering (IOSR-JCE) e-issn: 78-1684,p-ISSN: 30-334X, Volume 11, Issue 6 Ver. IV (Nov- De. 014), PP 1-6 Stress triaxiality to evaluate the effetive distane in the volumetri
More informationDirectional Coupler. 4-port Network
Diretional Coupler 4-port Network 3 4 A diretional oupler is a 4-port network exhibiting: All ports mathed on the referene load (i.e. S =S =S 33 =S 44 =0) Two pair of ports unoupled (i.e. the orresponding
More informationWord of Mass: The Relationship between Mass Media and Word-of-Mouth
Word of Mass: The Relationship between Mass Media and Word-of-Mouth Roman Chuhay Preliminary version Marh 6, 015 Abstrat This paper studies the optimal priing and advertising strategies of a firm in the
More informationA Functional Representation of Fuzzy Preferences
Theoretial Eonomis Letters, 017, 7, 13- http://wwwsirporg/journal/tel ISSN Online: 16-086 ISSN Print: 16-078 A Funtional Representation of Fuzzy Preferenes Susheng Wang Department of Eonomis, Hong Kong
More informationThe Effectiveness of the Linear Hull Effect
The Effetiveness of the Linear Hull Effet S. Murphy Tehnial Report RHUL MA 009 9 6 Otober 009 Department of Mathematis Royal Holloway, University of London Egham, Surrey TW0 0EX, England http://www.rhul.a.uk/mathematis/tehreports
More informationA Characterization of Wavelet Convergence in Sobolev Spaces
A Charaterization of Wavelet Convergene in Sobolev Spaes Mark A. Kon 1 oston University Louise Arakelian Raphael Howard University Dediated to Prof. Robert Carroll on the oasion of his 70th birthday. Abstrat
More informationDIGITAL DISTANCE RELAYING SCHEME FOR PARALLEL TRANSMISSION LINES DURING INTER-CIRCUIT FAULTS
CHAPTER 4 DIGITAL DISTANCE RELAYING SCHEME FOR PARALLEL TRANSMISSION LINES DURING INTER-CIRCUIT FAULTS 4.1 INTRODUCTION Around the world, environmental and ost onsiousness are foring utilities to install
More informationDevelopment of Fuzzy Extreme Value Theory. Populations
Applied Mathematial Sienes, Vol. 6, 0, no. 7, 58 5834 Development of Fuzzy Extreme Value Theory Control Charts Using α -uts for Sewed Populations Rungsarit Intaramo Department of Mathematis, Faulty of
More informationModel-based mixture discriminant analysis an experimental study
Model-based mixture disriminant analysis an experimental study Zohar Halbe and Mayer Aladjem Department of Eletrial and Computer Engineering, Ben-Gurion University of the Negev P.O.Box 653, Beer-Sheva,
More informationSome Properties on Nano Topology Induced by Graphs
AASCIT Journal of anosiene 2017; 3(4): 19-23 http://wwwaasitorg/journal/nanosiene ISS: 2381-1234 (Print); ISS: 2381-1242 (Online) Some Properties on ano Topology Indued by Graphs Arafa asef 1 Abd El Fattah
More informationFailure Assessment Diagram Analysis of Creep Crack Initiation in 316H Stainless Steel
Failure Assessment Diagram Analysis of Creep Crak Initiation in 316H Stainless Steel C. M. Davies *, N. P. O Dowd, D. W. Dean, K. M. Nikbin, R. A. Ainsworth Department of Mehanial Engineering, Imperial
More informationEvaluation of effect of blade internal modes on sensitivity of Advanced LIGO
Evaluation of effet of blade internal modes on sensitivity of Advaned LIGO T0074-00-R Norna A Robertson 5 th Otober 00. Introdution The urrent model used to estimate the isolation ahieved by the quadruple
More informationDiscrete Bessel functions and partial difference equations
Disrete Bessel funtions and partial differene equations Antonín Slavík Charles University, Faulty of Mathematis and Physis, Sokolovská 83, 186 75 Praha 8, Czeh Republi E-mail: slavik@karlin.mff.uni.z Abstrat
More informationThe Second Postulate of Euclid and the Hyperbolic Geometry
1 The Seond Postulate of Eulid and the Hyperboli Geometry Yuriy N. Zayko Department of Applied Informatis, Faulty of Publi Administration, Russian Presidential Aademy of National Eonomy and Publi Administration,
More informationLOGISTIC REGRESSION IN DEPRESSION CLASSIFICATION
LOGISIC REGRESSIO I DEPRESSIO CLASSIFICAIO J. Kual,. V. ran, M. Bareš KSE, FJFI, CVU v Praze PCP, CS, 3LF UK v Praze Abstrat Well nown logisti regression and the other binary response models an be used
More informationResolving RIPS Measurement Ambiguity in Maximum Likelihood Estimation
14th International Conferene on Information Fusion Chiago, Illinois, USA, July 5-8, 011 Resolving RIPS Measurement Ambiguity in Maximum Likelihood Estimation Wenhao Li, Xuezhi Wang, and Bill Moran Shool
More informationWhen do Numbers Really Matter?
When do Numbers Really Matter? Hei Chan and Adnan Darwiche Computer Science Department University of California Los Angeles, CA 90095 {hei,darwiche}@cs.ucla.edu Abstract Common wisdom has it that small
More informationAn I-Vector Backend for Speaker Verification
An I-Vetor Bakend for Speaker Verifiation Patrik Kenny, 1 Themos Stafylakis, 1 Jahangir Alam, 1 and Marel Kokmann 2 1 CRIM, Canada, {patrik.kenny, themos.stafylakis, jahangir.alam}@rim.a 2 VoieTrust, Canada,
More informationThe Impact of Information on the Performance of an M/M/1 Queueing System
The Impat of Information on the Performane of an M/M/1 Queueing System by Mojgan Nasiri A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master
More informationModeling Probabilistic Measurement Correlations for Problem Determination in Large-Scale Distributed Systems
009 9th IEEE International Conferene on Distributed Computing Systems Modeling Probabilisti Measurement Correlations for Problem Determination in Large-Sale Distributed Systems Jing Gao Guofei Jiang Haifeng
More informationWhen do Numbers Really Matter?
Journal of Artificial Intelligence Research 17 (2002) 265 287 Submitted 10/01; published 9/02 When do Numbers Really Matter? Hei Chan Adnan Darwiche Computer Science Department University of California,
More informationHILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES
HILLE-KNESER TYPE CRITERIA FOR SECOND-ORDER DYNAMIC EQUATIONS ON TIME SCALES L ERBE, A PETERSON AND S H SAKER Abstrat In this paper, we onsider the pair of seond-order dynami equations rt)x ) ) + pt)x
More informationA Queueing Model for Call Blending in Call Centers
A Queueing Model for Call Blending in Call Centers Sandjai Bhulai and Ger Koole Vrije Universiteit Amsterdam Faulty of Sienes De Boelelaan 1081a 1081 HV Amsterdam The Netherlands E-mail: {sbhulai, koole}@s.vu.nl
More informationNormative and descriptive approaches to multiattribute decision making
De. 009, Volume 8, No. (Serial No.78) China-USA Business Review, ISSN 57-54, USA Normative and desriptive approahes to multiattribute deision making Milan Terek (Department of Statistis, University of
More informationNo Time to Observe: Adaptive Influence Maximization with Partial Feedback
Proeedings of the Twenty-Sixth International Joint Conferene on Artifiial Intelligene IJCAI-17 No Time to Observe: Adaptive Influene Maximization with Partial Feedbak Jing Yuan Department of Computer Siene
More informationThe Hanging Chain. John McCuan. January 19, 2006
The Hanging Chain John MCuan January 19, 2006 1 Introdution We onsider a hain of length L attahed to two points (a, u a and (b, u b in the plane. It is assumed that the hain hangs in the plane under a
More informationProduct Policy in Markets with Word-of-Mouth Communication. Technical Appendix
rodut oliy in Markets with Word-of-Mouth Communiation Tehnial Appendix August 05 Miro-Model for Inreasing Awareness In the paper, we make the assumption that awareness is inreasing in ustomer type. I.e.,
More informationQCLAS Sensor for Purity Monitoring in Medical Gas Supply Lines
DOI.56/sensoren6/P3. QLAS Sensor for Purity Monitoring in Medial Gas Supply Lines Henrik Zimmermann, Mathias Wiese, Alessandro Ragnoni neoplas ontrol GmbH, Walther-Rathenau-Str. 49a, 7489 Greifswald, Germany
More informationModeling of Threading Dislocation Density Reduction in Heteroepitaxial Layers
A. E. Romanov et al.: Threading Disloation Density Redution in Layers (II) 33 phys. stat. sol. (b) 99, 33 (997) Subjet lassifiation: 6.72.C; 68.55.Ln; S5.; S5.2; S7.; S7.2 Modeling of Threading Disloation
More informationLecture 3 - Lorentz Transformations
Leture - Lorentz Transformations A Puzzle... Example A ruler is positioned perpendiular to a wall. A stik of length L flies by at speed v. It travels in front of the ruler, so that it obsures part of the
More informationthe following action R of T on T n+1 : for each θ T, R θ : T n+1 T n+1 is defined by stated, we assume that all the curves in this paper are defined
How should a snake turn on ie: A ase study of the asymptoti isoholonomi problem Jianghai Hu, Slobodan N. Simić, and Shankar Sastry Department of Eletrial Engineering and Computer Sienes University of California
More informationComputer Science 786S - Statistical Methods in Natural Language Processing and Data Analysis Page 1
Computer Siene 786S - Statistial Methods in Natural Language Proessing and Data Analysis Page 1 Hypothesis Testing A statistial hypothesis is a statement about the nature of the distribution of a random
More informationBAYESIAN NETWORK FOR DECISION AID IN MAINTENANCE
THE PUBLISHING HOUSE PROCEEDINGS OF THE ROMANIAN ACADEMY Series A OF THE ROMANIAN ACADEMY Volume 3 Number 4/0 pp. 387 394 BAYESIAN NETWORK FOR DECISION AID IN MAINTENANCE Luminita DUTA Valahia University
More informationLikelihood-confidence intervals for quantiles in Extreme Value Distributions
Likelihood-onfidene intervals for quantiles in Extreme Value Distributions A. Bolívar, E. Díaz-Franés, J. Ortega, and E. Vilhis. Centro de Investigaión en Matemátias; A.P. 42, Guanajuato, Gto. 36; Méxio
More informationSensor management for PRF selection in the track-before-detect context
Sensor management for PRF seletion in the tra-before-detet ontext Fotios Katsilieris, Yvo Boers, and Hans Driessen Thales Nederland B.V. Haasbergerstraat 49, 7554 PA Hengelo, the Netherlands Email: {Fotios.Katsilieris,
More informationConformal Mapping among Orthogonal, Symmetric, and Skew-Symmetric Matrices
AAS 03-190 Conformal Mapping among Orthogonal, Symmetri, and Skew-Symmetri Matries Daniele Mortari Department of Aerospae Engineering, Texas A&M University, College Station, TX 77843-3141 Abstrat This
More informationVariation Based Online Travel Time Prediction Using Clustered Neural Networks
Variation Based Online Travel Time Predition Using lustered Neural Networks Jie Yu, Gang-Len hang, H.W. Ho and Yue Liu Abstrat-This paper proposes a variation-based online travel time predition approah
More informationGeneralized Dimensional Analysis
#HUTP-92/A036 7/92 Generalized Dimensional Analysis arxiv:hep-ph/9207278v1 31 Jul 1992 Howard Georgi Lyman Laboratory of Physis Harvard University Cambridge, MA 02138 Abstrat I desribe a version of so-alled
More informationCase I: 2 users In case of 2 users, the probability of error for user 1 was earlier derived to be 2 A1
MUTLIUSER DETECTION (Letures 9 and 0) 6:33:546 Wireless Communiations Tehnologies Instrutor: Dr. Narayan Mandayam Summary By Shweta Shrivastava (shwetash@winlab.rutgers.edu) bstrat This artile ontinues
More information10.5 Unsupervised Bayesian Learning
The Bayes Classifier Maximum-likelihood methods: Li Yu Hongda Mao Joan Wang parameter vetor is a fixed but unknown value Bayes methods: parameter vetor is a random variable with known prior distribution
More informationCritical Reflections on the Hafele and Keating Experiment
Critial Refletions on the Hafele and Keating Experiment W.Nawrot In 1971 Hafele and Keating performed their famous experiment whih onfirmed the time dilation predited by SRT by use of marosopi loks. As
More informationTight bounds for selfish and greedy load balancing
Tight bounds for selfish and greedy load balaning Ioannis Caragiannis Mihele Flammini Christos Kaklamanis Panagiotis Kanellopoulos Lua Mosardelli Deember, 009 Abstrat We study the load balaning problem
More information10.2 The Occurrence of Critical Flow; Controls
10. The Ourrene of Critial Flow; Controls In addition to the type of problem in whih both q and E are initially presribed; there is a problem whih is of pratial interest: Given a value of q, what fators
More informationOn Industry Structure and Firm Conduct in Long Run Equilibrium
www.siedu.a/jms Journal of Management and Strategy Vol., No. ; Deember On Industry Struture and Firm Condut in Long Run Equilibrium Prof. Jean-Paul Chavas Department of Agriultural and Applied Eonomis
More informationFig Review of Granta-gravel
0 Conlusion 0. Sope We have introdued the new ritial state onept among older onepts of lassial soil mehanis, but it would be wrong to leave any impression at the end of this book that the new onept merely
More informationSimplified Buckling Analysis of Skeletal Structures
Simplified Bukling Analysis of Skeletal Strutures B.A. Izzuddin 1 ABSRAC A simplified approah is proposed for bukling analysis of skeletal strutures, whih employs a rotational spring analogy for the formulation
More information22.54 Neutron Interactions and Applications (Spring 2004) Chapter 6 (2/24/04) Energy Transfer Kernel F(E E')
22.54 Neutron Interations and Appliations (Spring 2004) Chapter 6 (2/24/04) Energy Transfer Kernel F(E E') Referenes -- J. R. Lamarsh, Introdution to Nulear Reator Theory (Addison-Wesley, Reading, 1966),
More informationeappendix for: SAS macro for causal mediation analysis with survival data
eappendix for: SAS maro for ausal mediation analysis with survival data Linda Valeri and Tyler J. VanderWeele 1 Causal effets under the ounterfatual framework and their estimators We let T a and M a denote
More informationPhysical Laws, Absolutes, Relative Absolutes and Relativistic Time Phenomena
Page 1 of 10 Physial Laws, Absolutes, Relative Absolutes and Relativisti Time Phenomena Antonio Ruggeri modexp@iafria.om Sine in the field of knowledge we deal with absolutes, there are absolute laws that
More informationConvergence of reinforcement learning with general function approximators
Convergene of reinforement learning with general funtion approximators assilis A. Papavassiliou and Stuart Russell Computer Siene Division, U. of California, Berkeley, CA 94720-1776 fvassilis,russellg@s.berkeley.edu
More informationControl Theory association of mathematics and engineering
Control Theory assoiation of mathematis and engineering Wojieh Mitkowski Krzysztof Oprzedkiewiz Department of Automatis AGH Univ. of Siene & Tehnology, Craow, Poland, Abstrat In this paper a methodology
More informationOn the Bit Error Probability of Noisy Channel Networks With Intermediate Node Encoding I. INTRODUCTION
5188 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 54, NO. 11, NOVEMBER 2008 [8] A. P. Dempster, N. M. Laird, and D. B. Rubin, Maximum likelihood estimation from inomplete data via the EM algorithm, J.
More informationAdvanced Computational Fluid Dynamics AA215A Lecture 4
Advaned Computational Fluid Dynamis AA5A Leture 4 Antony Jameson Winter Quarter,, Stanford, CA Abstrat Leture 4 overs analysis of the equations of gas dynamis Contents Analysis of the equations of gas
More informationLightpath routing for maximum reliability in optical mesh networks
Vol. 7, No. 5 / May 2008 / JOURNAL OF OPTICAL NETWORKING 449 Lightpath routing for maximum reliability in optial mesh networks Shengli Yuan, 1, * Saket Varma, 2 and Jason P. Jue 2 1 Department of Computer
More informationREFINED UPPER BOUNDS FOR THE LINEAR DIOPHANTINE PROBLEM OF FROBENIUS. 1. Introduction
Version of 5/2/2003 To appear in Advanes in Applied Mathematis REFINED UPPER BOUNDS FOR THE LINEAR DIOPHANTINE PROBLEM OF FROBENIUS MATTHIAS BECK AND SHELEMYAHU ZACKS Abstrat We study the Frobenius problem:
More informationEinstein s Three Mistakes in Special Relativity Revealed. Copyright Joseph A. Rybczyk
Einstein s Three Mistakes in Speial Relativity Revealed Copyright Joseph A. Rybzyk Abstrat When the evidene supported priniples of eletromagneti propagation are properly applied, the derived theory is
More informationCoding for Random Projections and Approximate Near Neighbor Search
Coding for Random Projetions and Approximate Near Neighbor Searh Ping Li Department of Statistis & Biostatistis Department of Computer Siene Rutgers University Pisataay, NJ 8854, USA pingli@stat.rutgers.edu
More informationWave Propagation through Random Media
Chapter 3. Wave Propagation through Random Media 3. Charateristis of Wave Behavior Sound propagation through random media is the entral part of this investigation. This hapter presents a frame of referene
More informationCounting Idempotent Relations
Counting Idempotent Relations Beriht-Nr. 2008-15 Florian Kammüller ISSN 1436-9915 2 Abstrat This artile introdues and motivates idempotent relations. It summarizes haraterizations of idempotents and their
More informationSubject: Introduction to Component Matching and Off-Design Operation % % ( (1) R T % (
16.50 Leture 0 Subjet: Introdution to Component Mathing and Off-Design Operation At this point it is well to reflet on whih of the many parameters we have introdued (like M, τ, τ t, ϑ t, f, et.) are free
More informationSINCE Zadeh s compositional rule of fuzzy inference
IEEE TRANSACTIONS ON FUZZY SYSTEMS, VOL. 14, NO. 6, DECEMBER 2006 709 Error Estimation of Perturbations Under CRI Guosheng Cheng Yuxi Fu Abstrat The analysis of stability robustness of fuzzy reasoning
More informationBattery Sizing for Grid Connected PV Systems with Fixed Minimum Charging/Discharging Time
Battery Sizing for Grid Conneted PV Systems with Fixed Minimum Charging/Disharging Time Yu Ru, Jan Kleissl, and Sonia Martinez Abstrat In this paper, we study a battery sizing problem for grid-onneted
More informationA new method of measuring similarity between two neutrosophic soft sets and its application in pattern recognition problems
Neutrosophi Sets and Systems, Vol. 8, 05 63 A new method of measuring similarity between two neutrosophi soft sets and its appliation in pattern reognition problems Anjan Mukherjee, Sadhan Sarkar, Department
More informationMultiPhysics Analysis of Trapped Field in Multi-Layer YBCO Plates
Exerpt from the Proeedings of the COMSOL Conferene 9 Boston MultiPhysis Analysis of Trapped Field in Multi-Layer YBCO Plates Philippe. Masson Advaned Magnet Lab *7 Main Street, Bldg. #4, Palm Bay, Fl-95,
More information23.1 Tuning controllers, in the large view Quoting from Section 16.7:
Lesson 23. Tuning a real ontroller - modeling, proess identifiation, fine tuning 23.0 Context We have learned to view proesses as dynami systems, taking are to identify their input, intermediate, and output
More informationCMSC 451: Lecture 9 Greedy Approximation: Set Cover Thursday, Sep 28, 2017
CMSC 451: Leture 9 Greedy Approximation: Set Cover Thursday, Sep 28, 2017 Reading: Chapt 11 of KT and Set 54 of DPV Set Cover: An important lass of optimization problems involves overing a ertain domain,
More informationSimplification of Network Dynamics in Large Systems
Simplifiation of Network Dynamis in Large Systems Xiaojun Lin and Ness B. Shroff Shool of Eletrial and Computer Engineering Purdue University, West Lafayette, IN 47906, U.S.A. Email: {linx, shroff}@en.purdue.edu
More informationAfter the completion of this section the student should recall
Chapter I MTH FUNDMENTLS I. Sets, Numbers, Coordinates, Funtions ugust 30, 08 3 I. SETS, NUMERS, COORDINTES, FUNCTIONS Objetives: fter the ompletion of this setion the student should reall - the definition
More informationMath 220A - Fall 2002 Homework 8 Solutions
Math A - Fall Homework 8 Solutions 1. Consider u tt u = x R 3, t > u(x, ) = φ(x) u t (x, ) = ψ(x). Suppose φ, ψ are supported in the annular region a < x < b. (a) Find the time T 1 > suh that u(x, t) is
More informationWhere as discussed previously we interpret solutions to this partial differential equation in the weak sense: b
Consider the pure initial value problem for a homogeneous system of onservation laws with no soure terms in one spae dimension: Where as disussed previously we interpret solutions to this partial differential
More informationMicroeconomic Theory I Assignment #7 - Answer key
Miroeonomi Theory I Assignment #7 - Answer key. [Menu priing in monopoly] Consider the example on seond-degree prie disrimination (see slides 9-93). To failitate your alulations, assume H = 5, L =, and
More informationLECTURE NOTES FOR , FALL 2004
LECTURE NOTES FOR 18.155, FALL 2004 83 12. Cone support and wavefront set In disussing the singular support of a tempered distibution above, notie that singsupp(u) = only implies that u C (R n ), not as
More informationCoding for Noisy Write-Efficient Memories
Coding for oisy Write-Effiient Memories Qing Li Computer Si. & Eng. Dept. Texas A & M University College Station, TX 77843 qingli@se.tamu.edu Anxiao (Andrew) Jiang CSE and ECE Departments Texas A & M University
More informationAn iterative least-square method suitable for solving large sparse matrices
An iteratie least-square method suitable for soling large sparse matries By I. M. Khabaza The purpose of this paper is to report on the results of numerial experiments with an iteratie least-square method
More informationGeneral probability weighted moments for the three-parameter Weibull Distribution and their application in S-N curves modelling
General probability weighted moments for the three-parameter Weibull Distribution and their appliation in S-N urves modelling Paul Dario Toasa Caiza a,, Thomas Ummenhofer a a KIT Stahl- und Leihtbau, Versuhsanstalt
More informationCombined Electric and Magnetic Dipoles for Mesoband Radiation, Part 2
Sensor and Simulation Notes Note 53 3 May 8 Combined Eletri and Magneti Dipoles for Mesoband Radiation, Part Carl E. Baum University of New Mexio Department of Eletrial and Computer Engineering Albuquerque
More informationEDGE-DISJOINT CLIQUES IN GRAPHS WITH HIGH MINIMUM DEGREE
EDGE-DISJOINT CLIQUES IN GRAPHS WITH HIGH MINIMUM DEGREE RAPHAEL YUSTER Abstrat For a graph G and a fixed integer k 3, let ν k G) denote the maximum number of pairwise edge-disjoint opies of K k in G For
More informationCalibration of Piping Assessment Models in the Netherlands
ISGSR 2011 - Vogt, Shuppener, Straub & Bräu (eds) - 2011 Bundesanstalt für Wasserbau ISBN 978-3-939230-01-4 Calibration of Piping Assessment Models in the Netherlands J. Lopez de la Cruz & E.O.F. Calle
More information