Improving Search-Based Inference in Bayesian Networks

From: Proceedings of the Eleventh International FLAIRS Conference. Copyright 1998, AAAI (www.aaai.org). All rights reserved.

Improving Search-Based Inference in Bayesian Networks

E. Castillo, J. M. Gutiérrez, Department of Applied Mathematics, University of Cantabria, Spain (castie@ccaix3.unican.es, gutierjm@ccaix3.unican.es)
Ali S. Hadi, Department of Statistical Sciences, Cornell University, USA (ali-hadi@cornell.edu)

Abstract

A method for improving search-based inference techniques in Bayesian networks by obtaining a prior estimation of the error is presented. The method is based on a recently introduced algorithm for calculating the contribution of a given set of instantiations to the total probability mass. If a certain accuracy for the solution is desired, the method provides the number of replications (i.e., the sample size) needed to obtain the approximated values with the desired accuracy. In addition to providing a prior stopping rule, the method substantially reduces the structure of the search tree and, hence, the computer time required for the process. Important savings are obtained in the case of Bayesian networks with extreme probabilities, as shown by the examples reported in the paper. As an example of several possible applications of the method, the problem of finding a maximum a posteriori (MAP) instantiation of the Bayesian network variables, given a partial value assignment as an initial constraint, is presented.

Introduction

In recent years probabilistic networks, mainly Bayesian and Markov networks, have emerged as effective tools both for graphically representing the dependence relationships among a set of variables and for exploiting the resulting graph to easily define a consistent joint probability distribution (see, e.g., Pearl (1988) and Castillo, Gutiérrez and Hadi (1997)). Among these models, Bayesian networks have captured the interest of many scientists in several fields, from medicine to engineering, due to their simplicity and soundness.

A Bayesian network on a set of variables X = {X_1, ..., X_n} is a pair (D, C), where D is a directed acyclic graph over X, which represents the dependence relationships among the variables in X, and C = {p(x_1 | π_1), ..., p(x_n | π_n)} is a set of n conditional probability distributions (CPDs) determined by the topology of the graph, one for each variable X_i, where the conditioning set Π_i is the set of parents of node X_i in D. Then, using the chain rule, a Bayesian network defines a joint probability distribution (JPD) on X in a simple way:

p(x_1, ..., x_n) = ∏_{i=1}^{n} p(x_i | π_i).   (1)

Inference, or evidence propagation, is one of the most important tasks in the area of probabilistic reasoning. It consists of calculating the prior and posterior distributions of some event of interest (usually associated with a single variable). When no evidence is observed, inference consists of calculating the prior probabilities p(x_i). When a set of evidential nodes E has been observed to take the values E = e, inference consists of calculating the conditional probabilities p(x_i | e). Several exact and simulation-based approximate methods have been developed to efficiently calculate these probabilities (see, for example, Lauritzen and Spiegelhalter (1988), Pearl (1988), Henrion (1988), and Castillo, Gutiérrez and Hadi (1997)).
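
As an illustration of Eq. (1), the following sketch (hypothetical Python, not from the paper) encodes a small three-node network by its CPDs and evaluates the probability of a complete instantiation with the chain rule; a posterior is then obtained by brute-force enumeration, the baseline that the approximate methods discussed below try to avoid. The names `parents`, `cpts`, `joint` and `posterior` are illustrative only.

```python
from itertools import product

# Toy network X1 -> X2 -> X3 with binary variables (illustrative values only).
# parents[i] lists the parents of X_i; cpts[i] maps parent values to p(x_i | parents).
parents = {1: (), 2: (1,), 3: (2,)}
cpts = {
    1: {(): {0: 0.6, 1: 0.4}},
    2: {(0,): {0: 0.9, 1: 0.1}, (1,): {0: 0.2, 1: 0.8}},
    3: {(0,): {0: 0.7, 1: 0.3}, (1,): {0: 0.05, 1: 0.95}},
}

def joint(x):
    """Chain rule of Eq. (1): p(x) = product over i of p(x_i | parents of X_i)."""
    p = 1.0
    for i, pa in parents.items():
        p *= cpts[i][tuple(x[j] for j in pa)][x[i]]
    return p

def posterior(target_var, target_val, evidence):
    """Brute-force p(target_var = target_val | evidence) by summing Eq. (1)
    over all instantiations consistent with the evidence."""
    num = den = 0.0
    for values in product([0, 1], repeat=len(parents)):
        x = dict(zip(parents, values))
        if all(x[v] == val for v, val in evidence.items()):
            p = joint(x)
            den += p
            if x[target_var] == target_val:
                num += p
    return num / den

print(joint({1: 1, 2: 1, 3: 0}))   # p(X1=1, X2=1, X3=0) = 0.4 * 0.8 * 0.05
print(posterior(1, 1, {3: 1}))     # p(X1=1 | X3=1)
```
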
Recently, a new type of propagation algorithm has been introduced that searches for instantiations of the variables with high probability in the space of all possible instantiations, {(x_1, ..., x_n) | x_i is a feasible value for X_i}. These methods have been shown to be efficient alternatives to the above exact and simulation algorithms in some cases, such as in the presence of extreme probabilities (Poole (1993), Bouckaert, Castillo and Gutiérrez (1995)).

In this paper we present a method both for estimating the error produced when using these algorithms and for improving their efficiency in cases where a certain accuracy is required. The savings are shown to be very important in the presence of extreme probabilities, where the computation time can be reduced substantially. The proposed method is based on a recently introduced algorithm (Castillo et al. 1995) for calculating the contribution of a given set of instantiations to the total probability mass.

The rest of the paper is organized as follows. We start by giving a formal statement of the problem. Then, we introduce a simple method for solving this problem. As an example of application, the proposed method is used to improve one of the search-based inference methods. Finally, we analyze the MAP problem, comparing both the standard and the improved search algorithms.

Statement of the Problem

A natural way of approximating marginal or conditional probabilities consists of summing only over a (small) set of instantiations I that includes all instantiations with associated probability larger than a given value q. The total error produced is then ∑_{x ∉ I} p(x) = ∑_{x : p(x) < q} p(x). Thus, for estimating error bounds, we need to determine the contribution of all instantiations with probability lower than q to the total probability mass. To do this, we consider the error function

s(q) = ∑_{x : p(x) < q} p(x).   (2)

We are also interested in calculating the ratio of the number of instantiations x included in the set {x : p(x) < q} to the total number of instantiations, since these are the instantiations we can skip in the inference process by paying a total error s(q). This ratio is defined by

f(q) = |{x : p(x) < q}| / |I_X|,   (3)

where I_X is the set of all possible instantiations x = (x_1, ..., x_n) of the variables in the set X and |A| stands for the cardinality of the set A. Thus, for a given value q, f(q) gives the ratio of instantiations with probability lower than q, and s(q) gives the contribution of all these instantiations to the total probability mass.

Figure 1 shows both f(q) and s(q) for a ten-node Bayesian network with random probabilities taken from (0, 1). This figure shows, for example, that 80% of the instantiations contribute only 16% to the total probability mass. In Bayesian networks with extreme probabilities the contribution is much smaller. For example, for a ten-node Bayesian network with random extreme probabilities taken from (0, .1) ∪ (.9, 1), we found that 85% of the instantiations contribute less than 0.2% to the total probability mass. Therefore, it is possible to obtain substantial savings in the computation time by paying only a small error. Moreover, the more extreme the probabilities, the smaller the error produced.

Figure 1: Distributions f(q) and s(q) for a ten-node Bayesian network with probabilities taken from (0, 1).

The main problem in implementing the method consists of estimating the left tail of s(q) in an efficient way. This tail contains the instantiations that contribute the least to the total probability. Since we need to estimate not the central part but the left tail of the distribution, we need to use extreme value theory (see Castillo (1988) for a general introduction to extreme value theory).
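
The curves of Eqs. (2) and (3) can be computed exactly for small networks by enumeration. The sketch below is hypothetical Python that assumes the `parents`, `cpts` and `joint` objects from the previous snippet; for the ten-node binary networks used in the experiments the same enumeration involves only 2^10 terms, which is how exact curves such as those in Figure 1 can be obtained.

```python
from itertools import product

def tail_curves(parents, joint, q_values):
    """Exact f(q) and s(q) of Eqs. (2)-(3), obtained by enumerating every
    instantiation of a (small) network and collecting those below each q."""
    probs = [joint(dict(zip(parents, values)))
             for values in product([0, 1], repeat=len(parents))]
    curves = []
    for q in q_values:
        below = [p for p in probs if p < q]
        f_q = len(below) / len(probs)   # fraction of instantiations below q, Eq. (3)
        s_q = sum(below)                # their total probability mass, Eq. (2)
        curves.append((q, f_q, s_q))
    return curves

for q, f_q, s_q in tail_curves(parents, joint, [1e-3, 1e-2, 5e-2, 1e-1]):
    print(f"q = {q:g}: f(q) = {f_q:.3f}, s(q) = {s_q:.4f}")
```
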
The Proposed Method

Castillo et al. (1995) show that the function s(q) is the Lorenz curve of p, which is a CDF that can be associated with a new random variable q with the same domain as p. Given a threshold value u, they showed that the distribution s(q), for q < u, can be approximated by the product of s(u) and the CDF of q − u, g(q − u), in the following way:

s(q) = s(u) g(q − u).   (4)

Since it describes the left tail of a distribution, g(q − u) can be approximated by the reversed generalized Pareto distribution (RGPD) when u is reasonably small. The RGPD U(z; σ, κ), introduced by Pickands (1975), is defined by

U(z; σ, κ) = (1 + κz/σ)^{1/κ},  z ≤ 0,   (5)

where σ and κ are the scale and shape parameters, respectively. Then, given a threshold value u, the proposed model for the left tail of s(q) is

s(q) = s(u) U(q − u; σ, κ),   (6)

which depends on s(u) and the two parameters σ and κ for each threshold value u.

The problem in estimating s(u) and U(q − u; σ, κ) is that the associated random variable q is not directly observable. However, we can observe p and obtain an ordered sample (p_1, ..., p_m) from p(x). Then, a natural estimator for s(u) based on this sample is

ŝ(u) = |{p_i : p_i < u}| / m.   (7)

There is a vast literature on estimating the parameters σ and κ of an RGPD (see Castillo and Hadi (1994) and the references therein). Any of these methods can be used to estimate the parameters.

For illustrative purposes, we use here the conceptually simple method of moments (MOM), for which the estimates of the parameters are given by

κ̂ = (x̄²/s² − 1)/2,  σ̂ = x̄ (x̄²/s² + 1)/2,

where x̄ and s² are the sample mean and the sample variance, respectively.

Given a sample size m and a threshold value for the accumulated probability ε (the maximum error bound), we can estimate s(q) using (6) for those values of the function lower than ε. The threshold value u for the probabilities of the left tail of the distribution is chosen to be the mε-th lowest probability in the sample. In this way, the values of s(q) lower than ε will correspond to q < u.

We have performed several experiments to analyze the quality of the estimation given by this method. For example, Figure 2(a) shows the exact value of s(q) together with its approximation ŝ(q) for a 20-node network with probability tables selected from (0, 1). The sample size taken to obtain this estimation is m = 1000 and ε = 0.05. Figure 2(b) corresponds to a 20-node network with extreme probabilities taken from (0, .1) ∪ (.9, 1). In both cases, the value u is obtained by considering the 50-th lowest probability in the sample, corresponding to the threshold error value s(u) = 50/1000 = 0.05.

Figure 2: Error estimation for a 20-node (a) non-extreme and (b) extreme Bayesian network with m = 1000 and ε = 0.05.

Improving Search-Based Inference

The algorithm introduced in the previous section can be applied to estimate the error produced when approximating the probabilities of an event using one of the existing deterministic approximation methods.

On the one hand, the deterministic stratified method (see Bouckaert et al. (1995)) works by dividing the sample space into several regions and then choosing an optimum number of instantiations from each region. In this way, rare instantiations can be avoided. A sample of size m obtained with this algorithm will contain all the instantiations with probabilities larger than 1/m. Therefore, the error estimation method can be used to calculate the number of replications needed to obtain an approximation with any desired accuracy. For example, if we wish to obtain an approximation of the probabilities of any event in the Bayesian network associated with Figure 2(b) with error lower than 0.02, then the estimation of the left tail of the distribution shown in this figure gives the necessary number of replications, n ≈ 1/(3.6 × 10⁻⁵) < 10⁵.
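
A minimal sketch of the estimation pipeline follows, in hypothetical Python. It assumes the toy `parents`, `cpts` and `joint` objects from the earlier snippets, and it assumes that the method-of-moments fit is applied to the deficits u − p_i of the sampled probabilities below the threshold, a detail the text does not spell out: sample m instantiation probabilities, choose u as the mε-th lowest one, estimate s(u) as in Eq. (7), fit the RGPD by MOM, and use the fitted tail to read off a pruning level and the corresponding number of replications.

```python
import random

def forward_sample(parents, cpts, rng):
    """Draw one complete instantiation, sampling each node given its parents
    (parents is assumed to be in topological order)."""
    x = {}
    for i, pa in parents.items():
        dist = cpts[i][tuple(x[j] for j in pa)]
        x[i] = 0 if rng.random() < dist[0] else 1
    return x

def fit_left_tail(parents, cpts, joint, m=1000, eps=0.05, seed=0):
    """Estimate s(q) for small q: threshold at the m*eps-th lowest sampled
    probability, then a MOM fit of the reversed generalized Pareto tail (Eqs. 5-7)."""
    rng = random.Random(seed)
    probs = sorted(joint(forward_sample(parents, cpts, rng)) for _ in range(m))
    k = int(m * eps)
    u = probs[k]                           # threshold just above the m*eps lowest probabilities
    s_u = k / m                            # estimate of s(u), Eq. (7)
    deficits = [u - p for p in probs[:k]]  # tail observations below the threshold
    mean = sum(deficits) / k
    var = sum((d - mean) ** 2 for d in deficits) / (k - 1)
    kappa = 0.5 * (mean ** 2 / var - 1.0)          # MOM shape estimate
    sigma = 0.5 * mean * (mean ** 2 / var + 1.0)   # MOM scale estimate

    def s_hat(q):
        """Tail model of Eq. (6): s(q) = s(u) * U(q - u; sigma, kappa) for q < u."""
        return s_u * max(0.0, 1.0 - kappa * (u - q) / sigma) ** (1.0 / kappa)

    return u, s_hat

# Usage: with error bound 0.02, find a q whose estimated tail mass is below it;
# 1/q then bounds the number of replications a stratified sample needs.
u, s_hat = fit_left_tail(parents, cpts, joint)
q = u
while q > 1e-12 and s_hat(q) > 0.02:
    q *= 0.9
print(f"prune below q = {q:.3e}; replications needed ~ {1.0 / q:.0f}")
```
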
Search-based propagation methods, on the other hand, work by creating a search tree whose branches are associated with partial instantiations of the variables. In every iteration step, the search process chooses one of the branches of the tree, associated with an instantiation (x_1, ..., x_k). If the associated instantiation is complete, that is, if k = n, then the branch is pruned from the tree and the instantiation is included in the sample. Otherwise, the tree is augmented with as many new branches as values of the next variable, X_{k+1}. Thus, the original branch (x_1, ..., x_k) is replaced by the branches (x_1, ..., x_k, x_{k+1}) for all possible values of X_{k+1}. Several search methods have been proposed in the literature (see, for example, Henrion (1991), Poole (1993), and Santos and Shimony (1994)). The main difference among them is the criterion used to choose the branches in each iteration step. In the next two sections we describe one of these methods and introduce a modified algorithm based on the error estimation method presented above. The same ideas, however, can be applied to improve the efficiency of any other search method.

Maximum Probability Search Algorithm

The maximum probability search algorithm (Poole (1993)) uses the criterion of maximum probability to choose the branches in every iteration step. In this case, besides providing a stopping criterion, the above error estimation method substantially reduces the computational complexity of the algorithm, since branches with probabilities lower than the threshold value can be pruned from the search tree.

The maximum probability criterion for choosing the branches makes this algorithm suitable for solving the MAP problem, since it obtains the instantiation with highest probability in each step of the process. Thus, the algorithm combines both inference and abductive inference in an intuitive way.
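
The sketch below (hypothetical Python, reusing the `parents` and `cpts` objects introduced earlier) shows one way to organize a maximum probability search with the proposed pruning: partial instantiations live in a priority queue ordered by probability, the most probable branch is expanded first, and any branch whose probability already falls below the threshold `prune_q` is discarded, which is the modification that shrinks the tree. It is a sketch in the spirit of Poole (1993), not the authors' implementation.

```python
import heapq
from itertools import count

def max_prob_search(parents, cpts, delta, prune_q=0.0):
    """Expand the most probable partial instantiation first; stop once the
    completed instantiations accumulate probability delta. Branches whose
    probability is already below prune_q are discarded."""
    order = list(parents)                  # topological order of the variables
    tie = count()                          # tie-breaker for the heap
    heap = [(-1.0, next(tie), {})]         # (-probability, tie, partial instantiation)
    completed, mass, max_queue = [], 0.0, 1
    while heap and mass < delta:
        max_queue = max(max_queue, len(heap))
        neg_p, _, x = heapq.heappop(heap)
        p = -neg_p
        if len(x) == len(order):           # complete instantiation: add to the sample
            completed.append((p, x))
            mass += p
            continue
        nxt = order[len(x)]                # next variable to instantiate
        dist = cpts[nxt][tuple(x[j] for j in parents[nxt])]
        for value, cond_p in dist.items():
            child_p = p * cond_p
            if child_p >= prune_q:         # prune the left tail of the search tree
                heapq.heappush(heap, (-child_p, next(tie), {**x, nxt: value}))
    return completed, mass, max_queue

# Standard versus modified run; prune_q could be the threshold q estimated above.
std = max_prob_search(parents, cpts, delta=0.95)
mod = max_prob_search(parents, cpts, delta=0.95, prune_q=1e-3)
print("standard:", len(std[0]), std[1], std[2], " modified:", len(mod[0]), mod[1], mod[2])
```
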

If a given accuracy ε is required for the solution, then the structure of the search tree can be reduced by skipping those instantiations in the left tail of the distribution that contribute less than ε to the total probability mass. Although the probability associated with this set is very low, it can contain a large number of instantiations. This modification leads to important reductions in both the structure of the tree and the computation time.

We performed some experiments to evaluate the performance of the modified maximum probability search algorithm as compared to the performance of the standard algorithm. A Bayesian network consisting of ten binary variables is randomly generated. In the first experiment, the random numbers associated with the CPDs are selected from the unit interval, and in the second experiment the numbers are uniformly selected from [0, .1] ∪ [.9, 1] to obtain a JPD with extreme probabilities. The experiments are performed by running the algorithms with values of the minimal accumulated probability δ taken from {.8, .9, .95, .975, .99}. The performance is measured by (a) the number of complete generated instantiations, (b) the maximum size of the queue, and (c) the time needed to execute the approximation.

Figure 3 shows the results for the case where the probability tables are selected from the unit interval. Figure 4 shows the results for the case of Bayesian networks with extreme probabilities chosen in the interval [0, .1] ∪ [.9, 1]. As could be expected, the execution time and the maximum size of the queue rise as the accumulated probability rises. Note that the number of instantiations for a given accumulated probability is the same for the standard and modified algorithms. When Figures 3 and 4 are compared, one sees that all measured criteria are larger for the distributions whose probabilities are taken from the unit interval, confirming that the maximum probability search method works better when extreme probabilities are involved (see Poole (1993)). This is caused by the large size of the largest intervals that appear when distributions contain extreme probabilities.

Figures 3 and 4 show that the modified algorithm produces important savings both in the computation time and in the complexity of the search tree (the maximum number of instantiations in the queue) as compared with the standard method. These figures also show that the savings increase when extreme probabilities are involved.

Figure 3: A scatter plot of three performance measures versus δ for the cases in which the probability tables are chosen from (0, 1).

Figure 4: A scatter plot of three performance measures versus δ for the cases in which the probability tables are chosen from (0, .1) ∪ (.9, 1).
The MAP Problem

In this section we analyze the problem of finding a maximum a posteriori (MAP) instantiation of the Bayesian network variables. Several exact and approximate methods have been proposed in the literature for this task (see Shimony (1994) and the references therein).

As we already mentioned, the maximum probability search algorithm provides an intuitive and efficient method for finding the first m instantiations in the network with highest probabilities. If we are only interested in the most probable explanation, then we can run the method until the first branch in the tree is completed. To improve the efficiency of this method, we are interested in obtaining an estimation of the probability u associated with the (m+1)-th instantiation with highest probability, since we can then skip all the instantiations with lower probability in the search process. The reduction of the complexity of the search tree will be very important, since we are neglecting not only the left tail of the distribution but also most of the instantiations. Thus, in this case we are interested in estimating f(q). Following the ideas used in the previous sections, we can obtain a sample {p_1, ..., p_m} and take the (m+1)-th value with highest probability as a natural estimation of u.
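
A minimal sketch of this idea (hypothetical Python, reusing `forward_sample`, `joint` and `max_prob_search` from the earlier snippets): draw a sample of instantiation probabilities, take the (m+1)-th highest value as an estimate of the cutoff u, and run the search with that value as the pruning threshold, so that only candidates for the m most probable instantiations are ever expanded.

```python
import random

def top_m_instantiations(parents, cpts, joint, m, n_sample=1000, seed=0):
    """Estimate the pruning level for the top-m (MAP) problem from a sample and
    search with it; if the estimate turns out to be too high, the search can
    simply be rerun with a smaller threshold."""
    rng = random.Random(seed)
    sample = sorted((joint(forward_sample(parents, cpts, rng)) for _ in range(n_sample)),
                    reverse=True)
    u_hat = sample[min(m, n_sample - 1)]    # (m+1)-th highest sampled probability
    completed, _, _ = max_prob_search(parents, cpts, delta=1.0, prune_q=u_hat)
    return sorted(completed, key=lambda t: -t[0])[:m]

for p, x in top_m_instantiations(parents, cpts, joint, m=3):
    print(f"{p:.4f}  {x}")
```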

Figure 5 compares the standard and the modified algorithms using a randomly generated Bayesian network consisting of twenty binary variables with the CPDs selected from the unit interval. The experiments are performed by running the algorithms to find the m most probable instantiations for different values of m. The performance is measured by (a) the maximum size of the queue and (b) the time needed to execute the approximation. It can be seen that both the structure of the tree and the computation time are substantially reduced in the modified algorithm.

Figure 5: A scatter plot of two performance measures versus m for the MAP problem.

Summary and Conclusions

A method for improving search-based inference procedures in Bayesian networks is presented. The method consists of determining a threshold value for the probabilities associated with the instantiations, below which they can be ignored without influencing the required error of the estimates. A reversed generalized Pareto distribution is used to estimate the tail of the distribution of the instantiation probabilities, combined with MOM estimates of its parameters. To this end, a sample is simulated and its tail values are used for the estimation. Once the tail is known, the desired percentiles are obtained and used as threshold values. Several examples of Bayesian networks are used to illustrate the method. In particular, Bayesian networks with extreme conditional probabilities are shown to lead to substantial savings in both the required memory (to store the search tree) and the computation time. The method is also able to improve several well-known techniques, such as those for the MAP problem.

Acknowledgments

This work is partially supported by the University of Cantabria, the Dirección General de Investigación Científica y Técnica (DGICYT) (project TIC96-58), Iberdrola, and the NATO Research Office.

References

Bouckaert, R. R., Castillo, E. and Gutiérrez, J. M. (1995), "A Modified Simulation Scheme for Inference in Bayesian Networks," International Journal of Approximate Reasoning, Vol. 2, 1-26.

Castillo, E. (1988), Extreme Value Theory in Engineering, Academic Press, New York.

Castillo, E. and Hadi, A. S. (1994), "Parameter and Quantile Estimation for the Generalized Extreme Value Distribution," Environmetrics, 5, 417-432.

Castillo, E., Bouckaert, R. R., Sarabia, J. M., and Solares, C. (1995), "Error Estimation in Approximate Bayesian Belief Network Inference," in Uncertainty in Artificial Intelligence 11 (P. Besnard and S. Hanks, Eds.), North Holland, Amsterdam, 55-62.

Castillo, E., Gutiérrez, J. M., and Hadi, A. S. (1997), Expert Systems and Probabilistic Network Models, Springer-Verlag, New York.

Henrion, M. (1988), "Propagating Uncertainty in Bayesian Networks by Probabilistic Logic Sampling," in Uncertainty in Artificial Intelligence 2 (J. F. Lemmer and L. N. Kanal, Eds.), North Holland, Amsterdam, 317-324.

Henrion, M. (1991), "Search-Based Methods to Bound Diagnostic Probabilities in Very Large Belief Nets," in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Mateo, CA, 142-150.

Lauritzen, S. L. and Spiegelhalter, D. J. (1988), "Local Computations with Probabilities on Graphical Structures and Their Application to Expert Systems," Journal of the Royal Statistical Society (B), 50, 157-224.
Pearl, J. (1988), Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, Morgan Kaufmann, San Mateo, CA.

Pickands III, J. (1975), "Statistical Inference Using Extreme Order Statistics," The Annals of Statistics, 3, 119-131.

Poole, D. (1993), "Average-case Analysis of a Search Algorithm for Estimating Prior and Posterior Probabilities in Bayesian Networks with Extreme Probabilities," in Proceedings of the 13th International Joint Conference on Artificial Intelligence, Vol. 1, 606-612.

Santos, E., and Shimony, S. E. (1994), "Belief Updating by Enumerating High-Probability Independence-Based Assignments," in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, San Francisco, 506-513.

Shimony, S. E. (1994), "Cost-Based Abduction and MAP Explanation," Artificial Intelligence, 66, 345-374.