Identifying Independence in Relational Models
University of Massachusetts Amherst. From the SelectedWorks of David Jensen, June 15, 2012. Marc Maier and David Jensen, University of Massachusetts Amherst.
Marc Maier (Computer Science Department, University of Massachusetts Amherst, Amherst, MA)
David Jensen (Computer Science Department, University of Massachusetts Amherst, Amherst, MA)

arXiv: v2 [cs.AI] 3 Oct 2012

Abstract

The rules of d-separation provide a framework for deriving conditional independence facts from model structure. However, this theory only applies to simple directed graphical models. We introduce relational d-separation, a theory for deriving conditional independence in relational models. We provide a sound, complete, and computationally efficient method for relational d-separation, and we present empirical results that demonstrate effectiveness.

1 INTRODUCTION

The rules of d-separation are the foundation for algorithmic derivation of the conditional independence facts implied by the structure of a directed graphical model (Geiger et al., 1990). Accurate reasoning about such conditional independence facts is the basis for constraint-based algorithms, such as PC, FCI, and MMHC, that are widely used to learn the structure of Bayesian networks (Spirtes et al., 2000; Tsamardinos et al., 2006). Bayesian networks assume that data instances are independent and identically distributed, but many real-world systems are characterized by interacting heterogeneous entities. Over the past 15 years, researchers have devised more expressive classes of directed graphical models, such as probabilistic relational models (PRMs), that remove this assumption (Getoor and Taskar, 2007). Many practical applications have benefited from learning and reasoning with these models. Examples include analysis of gene regulatory interactions (Segal et al., 2001), scholarly citations (Taskar et al., 2001), and biological cellular networks (Friedman, 2004).

(a) Example relational model of Wikipedia consisting of users editing pages, grouped by categories.
Expertise of editors causes page quality, which in turn influences the number of views a category receives. (Edges in relational models have specifications; see body of text for details.)

(b) Example fragment of a ground graph. The quality of the Football page is influenced by the expertise of both Roger and Sally. The views of the Sports category are caused by the quality of both pages in the category.

Figure 1: An example relational model and a small portion of a ground graph for the Wikipedia domain.

In this paper, we show that d-separation does not correctly produce conditional independence facts when applied directly to relational models. We introduce an alternative representation that enables an algorithm for deriving conditional independence facts from relational models. We show that this algorithm is sound, complete, and computationally efficient, and we provide an empirical demonstration of the effectiveness of our approach across synthetic causal structures of relational domains.

2 EXAMPLE

Consider the common problem among social media developers of attracting and retaining readers. For example, an administrator of Wikipedia may be interested in increasing the visibility of certain categories of articles. The administrator may believe
that Wikipedia operates under the model depicted in Figure 1(a) and needs to verify the model structure in order to effectively determine next actions. Naïvely applying d-separation to the model in Figure 1(a) suggests that user expertise in writing Wikipedia pages is conditionally independent of category views given the quality of edited pages. However, as we show below, d-separation does not apply directly to relational models. A necessary precondition for inference is to apply a model to a data instantiation. This process yields a ground graph, to which d-separation can be applied. For a Bayesian network, a ground graph consists of replicates of the model structure for each data instance. In contrast, a relational model defines a template for how dependencies apply to a data instantiation, resulting in ground graphs with varying structure. Figure 1(b) shows a small fragment of a ground graph for the relational model in Figure 1(a). This ground graph illustrates that simply conditioning on page quality can activate a path through the expertise of other users who edit the same pages; we call this a relational d-connecting path. Checking d-separation on the ground graph indicates that to d-separate user expertise from category views, we must condition not only on the quality of edited pages, but also on the expertise of other users who edit those pages (e.g., Roger.Expertise ⊥ Sports.Views | {Football.Quality, Sally.Expertise}). This example highlights important concepts that drive our formalization and solution for relational d-separation. Since the conditional independence facts derived from d-separation hold for all faithful distributions a model can represent, the implications of relational d-separation should analogously hold for all faithful distributions of variables for the space of all possible ground graphs.
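The ground-graph reasoning above can be checked mechanically. The following is a minimal, illustrative d-separation test using the standard moralized-ancestral-graph criterion (not the paper's implementation); the `parents` encoding of the Figure 1(b) ground graph and the function names are our own:

```python
from itertools import combinations

def ancestors(parents, nodes):
    """All nodes in `nodes` plus their ancestors under the `parents` map."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(parents, xs, ys, zs):
    """Standard d-separation test: moralize the ancestral graph of
    X ∪ Y ∪ Z, delete Z, and check undirected reachability."""
    keep = ancestors(parents, set(xs) | set(ys) | set(zs))
    adj = {n: set() for n in keep}
    for child in keep:
        ps = [p for p in parents.get(child, ()) if p in keep]
        for p in ps:                      # parent-child edges, undirected
            adj[p].add(child)
            adj[child].add(p)
        for a, b in combinations(ps, 2):  # "marry" co-parents
            adj[a].add(b)
            adj[b].add(a)
    blocked, seen = set(zs), set()
    stack = [x for x in xs if x not in blocked]
    while stack:
        n = stack.pop()
        if n in ys:
            return False  # a d-connecting path exists
        seen.add(n)
        stack.extend(adj[n] - seen - blocked)
    return True

# Ground graph of Figure 1(b), encoded as child -> list of parents
gg = {
    "Football.Quality": ["Roger.Expertise", "Sally.Expertise"],
    "Baseball.Quality": ["Sally.Expertise"],
    "Sports.Views": ["Football.Quality", "Baseball.Quality"],
}
```

On this ground graph, conditioning on Football.Quality alone activates the collider and leaves Roger.Expertise d-connected to Sports.Views; adding Sally.Expertise to the conditioning set separates them, matching the example.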
It is simple to show that d-separation holds for any ground graph of a Bayesian network: every ground graph is a set of independent, identical instances of the model. However, relational models are templates for ground graphs that vary by the relational structure of the underlying data (e.g., different pages are edited by varying numbers of users). Furthermore, d-separation only applies directly to the ground graphs of relational models, but the all-ground-graphs semantics prohibits reasoning about a single model instantiation. Therefore, relational d-separation queries must be answered without respect to ground graphs. Additionally, the example illustrates how relational dependencies can exhibit d-connecting paths that are only manifest in ground graphs, not in the model representation. Below, we describe a new representation that can be used to reason about d-separation for relational models.

3 RELATIONAL DATA

In this section, we formally define the concepts of relational data and models that provide the basis for the theoretical framework for relational d-separation. A relational schema is a top-level description of what data exist in a particular domain. Specifically (adapted from Heckerman et al. (2007)):

Definition 1 (Relational schema) A relational schema S = (E, R, A) consists of a set of entity classes E = {E_1, …, E_m}; relationship classes R = {R_1, …, R_n}, where each R_i = ⟨E_1, …, E_j⟩ with each E_j ∈ E; attribute classes A(I) for each item I ∈ E ∪ R; and a cardinality function card(R, E) ∈ {one, many} for each R ∈ R and each E ∈ R.

The schema for the example in Figure 1 consists of entities E = {User, Page, Category}; relationships R = {Edits, Forms}, where Edits = ⟨User, Page⟩, Forms = ⟨Page, Category⟩, and all cardinalities are many (e.g., card(Edits, User) = many); and attributes A(User) = {Expertise}, A(Page) = {Quality}, and A(Category) = {Views}. A schema is a template for the underlying skeleton, a specific instantiation of entities and relationships.
Specifically (adapted from Heckerman et al. (2007)):

Definition 2 (Relational skeleton) A relational skeleton σ_ER is an instantiation of entity sets σ(E) for each E ∈ E and relationship sets σ(R) for each R ∈ R, adhering to its cardinality constraints. Let r ∈ σ(R), where R = ⟨E_1, …, E_j⟩, be denoted as r(e_1, …, e_j), where e_i ∈ σ(E_i) and E_i ∈ E.

An example skeleton (in gray) can be seen underlying the ground graph of Figure 1(b). In order to specify a model over a relational domain, we must define a space of possible variables and dependencies. For relational data, we consider not only intrinsic entity and relationship attributes, but also variables reachable via the relational skeleton.

Definition 3 (Relational path) A relational path [I_1, …, I_k] for relational schema S is an alternating sequence of entity and relationship classes I_1, …, I_k ∈ E ∪ R such that for all j > 1: (1) if I_j ∈ E, then I_{j-1} ∈ R with I_j ∈ I_{j-1}; (2) if
I_j ∈ R, then I_{j-1} ∈ E with I_{j-1} ∈ I_j; and (3) for each ordered triple ⟨I_{j-1}, I_j, I_{j+1}⟩ in [I_1, …, I_k], if I_j ∈ R, then I_{j-1} ≠ I_{j+1}, and if I_j ∈ E, then either I_{j-1} ≠ I_{j+1} or there exists I_e ∈ I_{j-1} such that I_j ≠ I_e and card(I_{j-1}, I_e) = many. I_1 is called the base item, or perspective, of the relational path.

This definition generalizes the notion of slot chains from the PRM framework (Getoor et al., 2007) by including cardinality constraints. Since relational paths may become arbitrarily long, we limit the path length by a hop threshold. Items reachable by a relational path are defined by:

Definition 4 (Terminal set) For any skeleton σ_ER and any i_1 ∈ σ(I_1), a terminal set P_{i_1} for relational path P = [I_1, …, I_k] can be defined inductively as

  [I_1]_{i_1} = {i_1}
  [I_1, …, I_{k-1}, I_k]_{i_1} = ⋃_{i_{k-1} ∈ [I_1, …, I_{k-1}]_{i_1}} { i_k | ((i_{k-1} ∈ i_k if I_k ∈ R) ∨ (i_k ∈ i_{k-1} if I_k ∈ E)) ∧ i_k ∉ [I_1, …, I_j]_{i_1} for j = 1 to k-1 }

A terminal set consists of reachable instances of class I_k, the terminal item on the path. To produce a terminal set, traverse the skeleton beginning at a single base item i_1 ∈ σ(I_1), follow instances of the items in the relational path, and reach a target set of I_k instances. The definition implies a bridge burning semantics under which no instantiated items are revisited. This enforces, for example, that Roger is not included in the set of other editors of the Football page in the terminal set [User, Edits, Page, Edits, User]_{Roger} = {Sally}.

It is common for terminal sets of two relational paths originating at the same base item instance to overlap. If two relational paths with the same base and target items diverge in the middle of the path, then for some skeleton, their terminal sets will intersect.

Lemma 1 For any schema S and any two relational paths P1 = [I_1, …, I_m, …, I_k] and P2 = [I_1, …, I_n, …, I_k] with I_m ≠ I_n, there exists a skeleton σ_ER such that P1_{i_1} ∩ P2_{i_1} ≠ ∅ for some i_1 ∈ σ(I_1).

Proof. Proof by construction.
Let S be an arbitrary schema with two arbitrary relational paths P1 = [I_1, …, I_m, …, I_k] and P2 = [I_1, …, I_n, …, I_k], where I_m ≠ I_n. Construct a skeleton σ_ER with the following procedure. First, for entity classes (skipping relationship classes), simultaneously traverse P1 and P2 from I_1 until the paths diverge; for each I_j ∈ E reached, add a unique i_j to σ(I_j). Repeat, traversing P1 and P2 backwards from I_k until they diverge. Then, for both P1 and P2, add unique instances for the items in the divergent subpaths. Repeat for relationship classes: for each I_j ∈ R reached, add a unique relationship instance connecting the entity instances created above that follow P1 and P2, and add unique instances for entity classes not on P1 and P2. This process constructs an admissible skeleton: all instances are unique, and it assumes no cardinality constraints aside from those required by Definition 3. By construction, ∃i_1 ∈ σ(I_1) such that P1_{i_1} ∩ P2_{i_1} = {i_k}. ∎

For the example skeleton in Figure 1(b), [User, Edits, Page, Edits, User, Edits, Page]_{Roger} = {Baseball} = [User, Edits, Page, Forms, Category, Forms, Page]_{Roger}. As we show below, this intersection is crucial for relational d-separation because individual variable instances can belong to multiple relational variables, and we must consider all paths of dependence among them. Given the definition of relational paths, it is simple to define relational variables and their instances.

Definition 5 (Relational variable) A relational variable [I_1, …, I_k].V for relational schema S consists of a relational path [I_1, …, I_k] and an attribute class V ∈ A(I_k).

Definition 6 (Relational variable instance) For any skeleton σ_ER and any i_1 ∈ σ(I_1), a relational variable instance P.V|_{i_1} for relational variable P.V = [I_1, …, I_k].V is the set of variables { i_k.V | V ∈ A(i_k) ∧ i_k ∈ P_{i_1} }.
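The traversal in Definition 4, including its bridge-burning semantics, can be sketched directly in code. The skeleton encoding below (instance names such as `e1` and `f1`, the `classes`/`links` maps, and the function name) is our own illustration of the Figure 1(b) skeleton, not the paper's implementation:

```python
def terminal_set(classes, links, path, base):
    """Terminal set of Definition 4 (sketch): traverse the skeleton along
    `path` starting at instance `base`, never revisiting an instance
    (bridge-burning semantics). `classes` maps instance -> item class;
    `links` maps instance -> adjacent instances (entity <-> relationship)."""
    visited = {base}
    frontier = {base}
    for cls in path[1:]:
        frontier = {nxt for inst in frontier for nxt in links[inst]
                    if classes[nxt] == cls and nxt not in visited}
        visited |= frontier
    return frontier

# Skeleton of Figure 1(b): e1-e3 are Edits instances, f1-f2 are Forms
classes = {"Roger": "User", "Sally": "User",
           "Football": "Page", "Baseball": "Page", "Sports": "Category",
           "e1": "Edits", "e2": "Edits", "e3": "Edits",
           "f1": "Forms", "f2": "Forms"}
pairs = [("Roger", "e1"), ("Football", "e1"), ("Sally", "e2"),
         ("Football", "e2"), ("Sally", "e3"), ("Baseball", "e3"),
         ("Football", "f1"), ("Sports", "f1"),
         ("Baseball", "f2"), ("Sports", "f2")]
links = {i: set() for i in classes}
for a, b in pairs:
    links[a].add(b)
    links[b].add(a)
```

With this skeleton, `terminal_set(classes, links, ["User", "Edits", "Page", "Edits", "User"], "Roger")` yields `{"Sally"}` (Roger is burned), and the two length-seven paths from Roger both reach `{"Baseball"}`, illustrating the overlap described by Lemma 1.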
Definition 4 implies that relational variable instances are frequently sets of more than one value, and Lemma 1 provides the conditions under which we can expect overlap to occur. Given the formal definitions for relational variables, we can now define relational dependencies.

Definition 7 (Relational dependency) A relational dependency D = [I_1, …, I_k].V1 → [I_1].V2 consists of two relational variables with a common base item and corresponds to a directed probabilistic dependence from [I_1, …, I_k].V1 to [I_1].V2.

The example dependencies displayed in Figure 1(a) can be specified as [Page, Edits, User].Expertise → [Page].Quality and [Category, Forms, Page].Quality → [Category].Views. Depending on the context, V1 and V2 can be referred to as treatment and outcome, cause and effect, or parent
and child. Without loss of generality, Definition 7 provides a canonical specification for dependencies, with the outcome relational variable restricted to singleton paths, thus ensuring that outcomes consist of a single value. Relational models are simply a collection of relational dependencies, defined as:

Definition 8 (Relational model) The structure of a relational model M = (S, D) consists of a relational schema S paired with a set of relational dependencies D defined over S.

This definition is consistent with and expressible as DAPER models (Heckerman et al., 2007). A parameterized relational model would also contain local probability distributions for every attribute class A(I) for each I ∈ E ∪ R in order to represent a joint probability distribution. Note that without existence variables on entity and relationship classes, relational models are not truly generative, as the skeleton must be generated prior to the attributes. We choose simple processes for generating skeletons, allowing us to focus on relational models of attributes and leaving structural causes and effects as future work. Just as the relational schema is a template for skeletons, a relational model can be viewed as a template for ground graphs (i.e., it specifies how dependencies apply to skeletons).

Definition 9 (Ground graph) The ground graph GG_{M,σ_ER} = (V, E) for relational model M = (S, D) and skeleton σ_ER is a directed graph with nodes V = A(σ_ER) = { i.X | I ∈ E ∪ R ∧ X ∈ A(I) ∧ i ∈ σ(I) } and edges E = { i_k.Y → i_j.X | i_k.Y, i_j.X ∈ V ∧ i_k ∈ [I_j, …, I_k]_{i_j} ∧ [I_j, …, I_k].Y → [I_j].X ∈ D }.

By Lemma 1 and Definition 9, we can see that the same canonical dependency involving i_k.Y and i_j.X can connect many other relational variables for which i_k and i_j are elements. These additional, implied dependencies form the crux of the challenge of identifying independence in relational models, a solution for which is presented in the following section.
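Definition 9 can be read as a direct instantiation procedure: for each canonical dependency and each base instance, add an edge from every instance in the corresponding terminal set. The sketch below uses our own encoding (instance names, a Definition 4-style traversal, and a dependency tuple format are all assumptions for illustration):

```python
def terminal_set(classes, links, path, base):
    # Definition 4 traversal with bridge burning (no instance revisited)
    visited, frontier = {base}, {base}
    for cls in path[1:]:
        frontier = {n for i in frontier for n in links[i]
                    if classes[n] == cls and n not in visited}
        visited |= frontier
    return frontier

def ground_graph(classes, links, dependencies):
    """Definition 9 (sketch): edges i_k.V1 -> i_j.V2 for every dependency
    ([I_j, ..., I_k], V1, V2) and every base instance i_j of class I_j."""
    edges = set()
    for path, v1, v2 in dependencies:
        for i_j, cls in classes.items():
            if cls == path[0]:
                for i_k in terminal_set(classes, links, path, i_j):
                    edges.add((f"{i_k}.{v1}", f"{i_j}.{v2}"))
    return edges

# Skeleton of Figure 1(b) and the model of Figure 1(a)
classes = {"Roger": "User", "Sally": "User", "Football": "Page",
           "Baseball": "Page", "Sports": "Category",
           "e1": "Edits", "e2": "Edits", "e3": "Edits",
           "f1": "Forms", "f2": "Forms"}
pairs = [("Roger", "e1"), ("Football", "e1"), ("Sally", "e2"),
         ("Football", "e2"), ("Sally", "e3"), ("Baseball", "e3"),
         ("Football", "f1"), ("Sports", "f1"),
         ("Baseball", "f2"), ("Sports", "f2")]
links = {i: set() for i in classes}
for a, b in pairs:
    links[a].add(b)
    links[b].add(a)
deps = [(["Page", "Edits", "User"], "Expertise", "Quality"),
        (["Category", "Forms", "Page"], "Quality", "Views")]
```

Applied to the example, `ground_graph(classes, links, deps)` reproduces exactly the five edges of the ground-graph fragment in Figure 1(b).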
4 RELATIONAL D-SEPARATION

Conditional independence can be entailed by the rules of d-separation, but only for simple directed acyclic graphs. For Bayesian networks, the model structure corresponds exactly to ground graphs. In contrast, relational models are templates for ground graphs that vary with underlying skeletons. Since conditional independence facts must hold across all model instantiations, reasoning about d-separation for relational models is inherently more challenging.

Definition 10 (Relational d-separation) Let X, Y, and Z be three sets of distinct relational variables for perspective B ∈ E ∪ R defined over relational schema S. Then, for relational model M, X and Y are d-separated by Z if and only if, for any skeleton σ_ER, X|_b and Y|_b are d-separated by Z|_b in ground graph GG_{M,σ_ER} for all b ∈ σ(B).

In other words, for X and Y to be d-separated by Z for relational model M, d-separation must hold for all instantiations of those relational variables for any possible skeleton. This is a conservative definition, but it is consistent with the semantics of d-separation on Bayesian networks: it only guarantees independence. Answering relational d-separation queries is challenging for the following reasons:

All-ground-graphs semantics: Although it is possible to verify d-separation on a single ground graph, the conclusion may not generalize (as required by the definition), and ground graphs can be arbitrarily large. Implicitly, d-separation on Bayesian networks makes the same claim, but all ground graphs are identical to the structure of the model.

Relational models are templates: Relational models may be directed acyclic graphs, but they are templates for ground graphs. The rules of d-separation do not directly apply to relational models, only to ground graphs.

Relational variables may overlap: Relational variables frequently consist of sets of values that may overlap, as described by Lemma 1.
Consequently, there exist non-intuitive implications of dependencies that must be accounted for, such as relational d-connecting paths (see the example in Figure 1).

Relational dependency specification: Relational models are defined with respect to canonical dependencies, each specified from a single perspective. However, variables in a ground graph may belong to multiple relational variable instances, each defined from a different perspective. Thus, to determine which dependencies exist between arbitrary relational variables, we need methods to translate and extend the canonically specified dependencies.

4.1 SOLUTION

The definition of relational d-separation and its challenges suggest a solution that abstracts over all possible ground graphs and explicitly represents the overlap between pairs of relational variables. We developed a new representation, called an abstract
ground graph, that captures all dependencies among relational variables for any ground graph, using knowledge of only the schema and the model.

Definition 11 (Abstract ground graph) An abstract ground graph AGG_{M,B,h} = (V, E) for relational model M = (S, D), perspective B ∈ E ∪ R, and hop threshold h ∈ ℕ⁰ is an abstraction of the dependencies D for all possible ground graphs GG_{M,σ_ER} of M on arbitrary skeletons σ_ER. The set of nodes in AGG_{M,B,h} is V = RV ∪ IV, the union of all relational variables

  RV = { [B, …, I_j].V | length([B, …, I_j]) ≤ h + 1 }

and the intersections between pairs of relational variables that could intersect

  IV = { X ∩ Y | X, Y ∈ RV ∧ X = [B, …, I_k, …, I_j].V ∧ Y = [B, …, I_l, …, I_j].V ∧ I_k ≠ I_l }.

The set of edges in AGG_{M,B,h} is E = RVE ∪ IVE, where RVE ⊆ RV × RV and IVE ⊆ (IV × RV) ∪ (RV × IV). RVE is the set of edges between pairs of relational variables:

  RVE = { [B, …, I_k].V1 → [B, …, I_j].V2 | [I_j, …, I_k].V1 → [I_j].V2 ∈ D ∧ [B, …, I_k] ∈ extend([B, …, I_j], [I_j, …, I_k]) }.

IVE is the set of edges inherited from both relational variable sources of every intersection variable:

  IVE = { X → [B, …, I_j].V2 | X = P1.V1 ∩ P2.V1 ∈ IV ∧ (P1.V1 → [B, …, I_j].V2 ∈ RVE ∨ P2.V1 → [B, …, I_j].V2 ∈ RVE) }
      ∪ { [B, …, I_j].V1 → X | X = P1.V2 ∩ P2.V2 ∈ IV ∧ ([B, …, I_j].V1 → P1.V2 ∈ RVE ∨ [B, …, I_j].V1 → P2.V2 ∈ RVE) }.

The extend method is defined below. Essentially, an abstract ground graph for relational model M, perspective B ∈ E ∪ R, and hop threshold h follows three simple steps: (1) add a node for every relational variable limited by h; (2) add edges for every direct cause of every relational variable; and (3) for each pair of intersecting relational variables, add a new intersection node that inherits the direct causes and effects from both of its sources.
Then, answer queries of the form "Are X and Y d-separated by Z?" by (1) augmenting X, Y, and Z with their corresponding intersection variables and (2) using the rules of d-separation on the abstract ground graph for the common perspective of the relational variables in X, Y, and Z.

Figure 2 shows the abstract ground graph for the Wikipedia example from the User perspective with hop threshold h = 6. The abstract ground graph illustrates why it is necessary to condition on both edited page quality ([User, Edits, Page].Quality) and the expertise of other users editing the same pages ([User, Edits, Page, Edits, User].Expertise) in order to d-separate individual user expertise ([User].Expertise) from the number of category views of edited pages ([User, Edits, Page, Forms, Category].Views).

Figure 2: The abstract ground graph for the example in Figure 1 from the User perspective with hop threshold h = 6.

Using the algorithm devised by Geiger et al. (1990), relational d-separation queries can be answered in O(|E|) time with respect to the number of edges in the abstract ground graph. In practice, the size of an abstract ground graph depends on the relational schema (i.e., the number of entities, relationships, cardinalities, and attributes), as well as the hop threshold limiting the length of relational paths. For the example in Figure 2, the abstract ground graph has 7 nodes and 7 edges (including 1 intersection node with 2 edges); for h = 8, it would have 15 nodes and 25 edges (including 5 intersection nodes with 16 edges).
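These node counts can be reproduced by enumerating relational paths under the constraints of Definition 3. The sketch below hardcodes the example schema (our own encoding; `RELS`, `MANY`, and the function names are assumptions for illustration) and, with h = 6, recovers the six relational variables of Figure 2 from the User perspective:

```python
RELS = {"Edits": ("User", "Page"), "Forms": ("Page", "Category")}
ENTS = ("User", "Page", "Category")
MANY = {(r, e) for r, ends in RELS.items() for e in ends}  # all cards many

def neighbors(item):
    """Classes adjacent to `item` in the schema (entity <-> relationship)."""
    return list(RELS[item]) if item in RELS else \
           [r for r, ends in RELS.items() if item in ends]

def valid_triple(a, b, c):
    # Condition (3) of Definition 3
    if b in RELS:  # middle item is a relationship class
        return a != c
    # middle item is an entity class (a is then a relationship class)
    return a != c or any((a, e) in MANY for e in RELS[a] if e != b)

def relational_paths(base, hop):
    """Enumerate relational paths from `base` with length <= hop + 1."""
    paths, stack = [], [[base]]
    while stack:
        p = stack.pop()
        paths.append(p)
        if len(p) >= hop + 1:
            continue
        for nxt in neighbors(p[-1]):
            if len(p) == 1 or valid_triple(p[-2], p[-1], nxt):
                stack.append(p + [nxt])
    return paths

# Relational variables of the AGG correspond to entity-terminated paths
# here, since only entities carry attributes in the example model
rvs = [p for p in relational_paths("User", 6) if p[-1] in ENTS]
```

The six paths in `rvs`, plus the single intersection node between the two length-seven paths, give the 7 AGG nodes stated above.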
Furthermore, abstract ground graphs are invariant to the size of ground graphs, even though ground graphs can be arbitrarily large (i.e., relational databases have no maximum size). Next, we formally define the method for translating canonically specified dependencies into dependencies between arbitrary relational variables.

Definition 12 (Extending relational paths) Let P_orig = [I_1, …, I_j] and P_ext = [I_j, …, I_k] be two relational paths for schema S. The following three functions extend P_orig with P_ext:

  extend(P_orig, P_ext) = { truncate(concat(P_orig[0 : length(P_orig) − i + 1], P_ext[i : length(P_ext)])) | i ∈ pivots(reverse(P_orig), P_ext) }

  pivots(P1, P2) = { i | P1[0 : i] = P2[0 : i] }

  truncate(P) = if ∃⟨I_{j-1}, I_j, I_{j+1}⟩ in P such that (I_j ∈ R ∧ I_{j-1} = I_{j+1}) ∨ (I_j ∈ E ∧ I_{j-1} = I_{j+1} ∧ ∀I_e ∈ I_{j-1} with I_j ≠ I_e, card(I_{j-1}, I_e) = one), then truncate(P − [I_j, I_{j+1}]); else P;

where concat, length, reverse, and [i : j] inclusive-exclusive sublists are standard functions of lists.
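Definition 12 translates almost directly into code. The following is a sketch under our own list-based encoding, with truncate simplified to the relationship-reversal case (sufficient for the running example, where all cardinalities are many, so the entity-reversal clause never fires):

```python
def pivots(p1, p2):
    """Indices i >= 1 with p1[0:i] == p2[0:i] (Definition 12)."""
    n = 0
    for a, b in zip(p1, p2):
        if a != b:
            break
        n += 1
    return range(1, n + 1)

def truncate(p, rels):
    """Simplified truncate: repeatedly remove a reversal [I, R, I] through
    a relationship class R. (The full definition also removes entity
    reversals whose other participants all have cardinality `one`.)"""
    j = 1
    while j < len(p) - 1:
        if p[j] in rels and p[j - 1] == p[j + 1]:
            p = p[:j] + p[j + 2:]   # drop [I_j, I_{j+1}]
            j = 1                   # restart the scan
        else:
            j += 1
    return p

def extend(p_orig, p_ext, rels):
    """extend(P_orig, P_ext) of Definition 12 (sketch)."""
    out = []
    for i in pivots(list(reversed(p_orig)), p_ext):
        cand = truncate(p_orig[:len(p_orig) - i + 1] + p_ext[i:], rels)
        if cand not in out:
            out.append(cand)
    return out
```

For the Wikipedia schema, `extend(["User", "Edits", "Page"], ["Page", "Edits", "User"], {"Edits", "Forms"})` contains `["User", "Edits", "Page", "Edits", "User"]`; note that other pivots may truncate to trivial paths such as `["User"]`, which this sketch does not filter out.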
For example, extend([User, Edits, Page], [Page, Edits, User]) = {[User, Edits, Page, Edits, User]} and truncate([User, Edits, User]) = [User]. Truncating a relational path preserves the set of reachable instances, removing only redundant items along the path. For the following lemma, we define candidate relational paths as those produced internally by the extend method.

Lemma 2 For any skeleton σ_ER and candidate relational path P = [I_1, …, I_k], ∀i_1 ∈ σ(I_1), P_{i_1} = truncate(P)_{i_1}.

Proof. Let σ_ER be an arbitrary skeleton, let P = [I_1, …, I_k] be an arbitrary relational path, and let i_1 ∈ σ(I_1) be arbitrary. There are three cases: (1) P = truncate(P). Then, P_{i_1} = truncate(P)_{i_1}. (2) Let ⟨I_1, I_2, I_3⟩ be an ordered triple in P with I_2 ∈ R and I_1 = I_3. Then, [I_1, I_2, I_3] = [I_1, I_2, I_1] and, by definition, ⋃_{i_2 ∈ [I_1, I_2]_{i_1}} [I_2, I_1]_{i_2} = {i_1}. So, [I_1, I_2, I_1]_{i_1} = {i_1} = [I_1, I_2, I_3]_{i_1} because instances cannot be revisited. Removing [I_2, I_3] from P does not change which instances are reached. So, P_{i_1} = truncate(P)_{i_1}. (3) Let ⟨I_1, I_2, I_3⟩ be an ordered triple in P with I_2 ∈ E, I_1 = I_3, and ∀I_e ∈ I_1, card(I_1, I_e) = one if I_e ≠ I_2. Then, for all i_2 ∈ σ(I_2) there is at most one i_1 with i_2 ∈ i_1. So, [I_1, I_2, I_3] = [I_1, I_2, I_1] and, by definition, ⋃_{i_2 ∈ [I_1, I_2]_{i_1}} [I_2, I_1]_{i_2} = {i_1}. So, [I_1, I_2, I_1]_{i_1} = {i_1} = [I_1, I_2, I_3]_{i_1}, and P_{i_1} = truncate(P)_{i_1} as in case (2). ∎

This method for extending relational paths invariably produces a set of reachable items that are also reachable by the two original paths.

Lemma 3 For any skeleton σ_ER and relational paths P_orig = [I_1, …, I_j] and P_ext = [I_j, …, I_k] with P = extend(P_orig, P_ext), ∀i_1 ∈ σ(I_1), ∀P ∈ P, ∀i_k ∈ P_{i_1}, ∃i_j ∈ P_orig_{i_1} such that i_k ∈ P_ext_{i_j}.

Proof. Proof by contradiction. Let σ_ER be an arbitrary skeleton, let i_1 ∈ σ(I_1) be arbitrary, and let i_k ∈ P_{i_1} be arbitrary for some P ∈ P. Assume that ∀i_j ∈ P_orig_{i_1}, i_k ∉ P_ext_{i_j}.
Let c ∈ pivots(reverse(P_orig), P_ext) such that P = truncate(concat(P_orig[0 : length(P_orig) − c + 1], P_ext[c : length(P_ext)])). By Lemma 2, we can ignore truncation. There are two subcases: (a) c = 1. Then, P = [I_1, …, I_j, …, I_k], where i_k is reached by traversing σ_ER from i_1 via some i_j to i_k. But the path from i_1 to i_j implies that i_j ∈ [I_1, …, I_j]_{i_1} = P_orig_{i_1}, and the path from i_j to i_k implies that i_k ∈ [I_j, …, I_k]_{i_j} = P_ext_{i_j}. So, there must exist an i_j ∈ P_orig_{i_1} such that i_k ∈ P_ext_{i_j}, a contradiction. (b) c > 1. Then, P = [I_1, …, I_m, …, I_k], where i_k is reached by traversing σ_ER from i_1 via some i_m to i_k. The path from i_1 to i_m implies that i_m ∈ [I_1, …, I_m]_{i_1} = P_orig[0 : length(P_orig) − c + 1]_{i_1}, and the path from i_m to i_k implies that i_k ∈ [I_m, …, I_k]_{i_m} = P_ext[c − 1 : length(P_ext)]_{i_m}. But i_j ∈ [I_m, …, I_j]_{i_m} = P_orig[length(P_orig) − c : length(P_orig)]_{i_m} with i_m ∈ [I_j, …, I_m]_{i_j} = P_ext[0 : c + 1]_{i_j}. So, i_k ∈ [I_j, …, I_m, …, I_k]_{i_j} = P_ext_{i_j}, a contradiction. ∎

Because the set of relational paths produced by extend yields a subset of the items reachable via both paths, it is necessary to consider the instances not reached. There exists an alternative relational path P′_orig that intersects with P_orig and that, when using extend, catches the remaining instances.

Lemma 4 For any skeleton σ_ER and two relational paths P_orig = [I_1, …, I_j] and P_ext = [I_j, …, I_k] with P = extend(P_orig, P_ext), ∀i_1 ∈ σ(I_1), ∀i_j ∈ P_orig_{i_1}, ∀i_k ∈ P_ext_{i_j}: if ∀P ∈ P, i_k ∉ P_{i_1}, then ∃P′_orig such that i_j ∈ P_orig_{i_1} ∩ P′_orig_{i_1} and i_k ∈ P′_{i_1} for some P′ ∈ extend(P′_orig, P_ext).

Proof. Let σ_ER be an arbitrary skeleton, and let i_1 ∈ σ(I_1), i_j ∈ P_orig_{i_1}, and i_k ∈ P_ext_{i_j} be arbitrary instances such that i_k ∉ P_{i_1} for any P ∈ P.
Since there exists no pivot that yields a common subsequence of P_orig and P_ext that reaches i_k, there must be paths in the skeleton from i_1 to i_j via i_m and from i_j to i_k via i_m such that the traversal from i_m to i_j is via some i_l and the traversal from i_j to i_m is via some i_n, where i_l ≠ i_n. So, P_orig = [I_1, …, I_m, …, I_l, …, I_j] and P_ext = [I_j, …, I_n, …, I_m, …, I_k] with I_l ≠ I_n. Let P′_orig = [I_1, …, I_m, …, I_n, …, I_j], which captures the traversal from i_1 to i_m to i_n to i_j. So, i_j ∈ P_orig_{i_1} ∩ P′_orig_{i_1}. Let P′ = [I_1, …, I_m, …, I_k] ∈ extend(P′_orig, P_ext) with pivot at I_m. Then, i_k ∈ P′_{i_1}. ∎

4.2 PROOF OF CORRECTNESS

The correctness of our approach to relational d-separation relies on several facts: (1) d-separation is valid for directed acyclic graphs (DAGs); (2) ground graphs are DAGs; and (3) abstract ground graphs are DAGs and represent all edges in all possible ground graphs. It would follow that d-separation on abstract ground graphs, augmented by intersection variables, holds for all ground graphs. Using the previous definitions and lemmas, the following sequence of results proves the correctness of our approach to identifying independence in relational models.
Theorem 1 The rules of d-separation are sound and complete for directed acyclic graphs.

Proof. Due to Verma and Pearl (1988) for soundness and Geiger and Pearl (1988) for completeness. ∎

Lemma 5 For any acyclic relational model M and skeleton σ_ER, the ground graph GG_{M,σ_ER} is a directed acyclic graph.

Proof. Due to both Heckerman et al. (2007) for DAPER models and Getoor (2001) for PRMs. ∎

Theorem 2 For any acyclic relational model M, perspective B ∈ E ∪ R, and hop threshold h ∈ ℕ⁰, AGG_{M,B,h} abstracts GG_{M,σ_ER} for all skeletons σ_ER.

Proof. Let M = (S, D) be an arbitrary acyclic relational model, let B ∈ E ∪ R be arbitrary, and let h ∈ ℕ⁰ be an arbitrary hop threshold. Assume that all relational paths in the proof have length less than h + 2; otherwise, reject the path by the assumption that dependence does not travel farther than h hops. There are two facts to prove for AGG_{M,B,h} to be a valid abstraction of GG_{M,σ_ER} for all skeletons σ_ER:

(1) Every edge in AGG_{M,B,h} corresponds to an edge in GG_{M,σ_ER} for some σ_ER. There are three subcases, one for each edge type in an abstract ground graph:

(a) Let [B, …, I_k].V1 → [B, …, I_j].V2 ∈ RVE be arbitrary. Assume by contradiction that ∀b ∈ σ(B), ∀i_k ∈ [B, …, I_k]|_b, ∀i_j ∈ [B, …, I_j]|_b, i_k.V1 → i_j.V2 ∉ GG_{M,σ_ER} for any skeleton σ_ER. By Definition 11, [I_j, …, I_k].V1 → [I_j].V2 ∈ D and [B, …, I_k] ∈ extend([B, …, I_j], [I_j, …, I_k]). So, by Definition 9, ∀i_j ∈ σ(I_j), ∀i_k ∈ [I_j, …, I_k]_{i_j}, i_k.V1 → i_j.V2 ∈ GG_{M,σ_ER} for any skeleton σ_ER. Let σ_ER be an arbitrary skeleton, and let b ∈ σ(B) be arbitrary. By Lemma 3, ∀i_k ∈ [B, …, I_k]|_b, ∃i_j ∈ [B, …, I_j]|_b such that i_k ∈ [I_j, …, I_k]_{i_j}. So, ∀b ∈ σ(B), ∀i_k ∈ [B, …, I_k]|_b, ∃i_j ∈ [B, …, I_j]|_b such that i_k.V1 → i_j.V2 ∈ GG_{M,σ_ER} for any skeleton σ_ER, a contradiction.

(b) Let P1.V1 ∩ P2.V1 → [B, …, I_j].V2 ∈ IVE be arbitrary, where P1 = [B, …, I_m, …, I_k] and P2 = [B, …, I_n, …, I_k] with I_m ≠ I_n. By Lemma 1, there exists a skeleton σ_ER such that P1|_b ∩ P2|_b ≠ ∅ for some b ∈ σ(B).
Let i_k ∈ P1|_b ∩ P2|_b for such a b ∈ σ(B) for σ_ER. Assume by contradiction that ∀i_j ∈ [B, …, I_j]|_b, i_k.V1 → i_j.V2 ∉ GG_{M,σ_ER}. By Definition 11, either P1.V1 → [B, …, I_j].V2 ∈ RVE or P2.V1 → [B, …, I_j].V2 ∈ RVE. Then, as shown in case (a), ∃i_j ∈ [B, …, I_j]|_b such that i_k.V1 → i_j.V2 ∈ GG_{M,σ_ER}, a contradiction.

(c) Let [B, …, I_k].V1 → P1.V2 ∩ P2.V2 ∈ IVE be arbitrary, where P1 = [B, …, I_m, …, I_j] and P2 = [B, …, I_n, …, I_j] with I_m ≠ I_n. The proof follows case (b) to show that ∀i_k ∈ [B, …, I_k]|_b, ∃i_j ∈ P1.V2 ∩ P2.V2|_b such that i_k.V1 → i_j.V2 ∈ GG_{M,σ_ER} for some skeleton σ_ER and b ∈ σ(B) for which P1|_b ∩ P2|_b ≠ ∅.

(2) For any skeleton σ_ER, every edge in GG_{M,σ_ER} is represented by some edge in AGG_{M,B,h}. Let σ_ER be an arbitrary skeleton, and let i_k.V1 → i_j.V2 ∈ GG_{M,σ_ER} be an arbitrary edge drawn from [I_j, …, I_k].V1 → [I_j].V2 ∈ D, where ∃b ∈ σ(B) such that 𝒫_k.V1 = { P_k.V1 | i_k.V1 ∈ P_k.V1|_b ∧ P_k.V1 ∈ AGG_{M,B,h} } and 𝒫_j.V2 = { P_j.V2 | i_j.V2 ∈ P_j.V2|_b ∧ P_j.V2 ∈ AGG_{M,B,h} }. Then, ∀P_k.V1 ∈ 𝒫_k.V1, ∀P_j.V2 ∈ 𝒫_j.V2, either (a) P_k.V1 → P_j.V2 ∈ AGG_{M,B,h}, (b) P_k.V1 ∩ P′_k.V1 → P_j.V2 ∈ AGG_{M,B,h}, where P′_k.V1 ≠ P_k.V1, or (c) P_k.V1 → P_j.V2 ∩ P′_j.V2 ∈ AGG_{M,B,h}, where P′_j.V2 ≠ P_j.V2. Let P_k.V1 ∈ 𝒫_k.V1 and P_j.V2 ∈ 𝒫_j.V2 be arbitrary. (a) If P_k ∈ extend(P_j, [I_j, …, I_k]), then P_k.V1 → P_j.V2 ∈ AGG_{M,B,h} by Definition 11. (b) If P_k ∉ extend(P_j, [I_j, …, I_k]), but ∃P′_k ∈ extend(P_j, [I_j, …, I_k]) such that i_k ∈ P′_k|_b, then P′_k.V1 ≠ P_k.V1, P′_k.V1 → P_j.V2 ∈ AGG_{M,B,h}, and P_k.V1 ∩ P′_k.V1 → P_j.V2 ∈ AGG_{M,B,h} by Definition 11. (c) If ∀P ∈ extend(P_j, [I_j, …, I_k]), i_k ∉ P|_b, then, by Lemma 4, ∃P′_j such that i_j ∈ P′_j|_b and P_k ∈ extend(P′_j, [I_j, …, I_k]). So, P′_j.V2 ≠ P_j.V2, P_k.V1 → P′_j.V2 ∈ AGG_{M,B,h}, and P_k.V1 → P_j.V2 ∩ P′_j.V2 ∈ AGG_{M,B,h} by Definition 11. ∎

Theorem 2 guarantees that, up to the hop threshold, abstract ground graphs capture all possible paths of dependence between any pair of variables in any ground graph.
This also provides the reason why explicitly representing the intersection between pairs of relational variables is necessary and sufficient.

Corollary 1 For any acyclic relational model M, perspective B ∈ E ∪ R, and hop threshold h ∈ ℕ⁰, AGG_{M,B,h} is a directed acyclic graph.

Proof. Let M be an arbitrary acyclic relational model, let B ∈ E ∪ R be arbitrary, and let h ∈ ℕ⁰ be arbitrary. Theorem 2 implies that every edge in AGG_{M,B,h} corresponds to an edge in GG_{M,σ_ER} for some σ_ER. So a cycle in AGG_{M,B,h} could only be the result of a cycle in GG_{M,σ_ER}, but by Lemma 5, GG_{M,σ_ER} is a directed acyclic graph. ∎

Corollary 1 ensures that d-separation applies directly to abstract ground graphs because they are DAGs. In the following theorem, let W̄ be the set of augmented nodes in an abstract ground graph
𝐖̄ = 𝐖 ∪ {W ∩ W′ | W, W′ ∈ 𝐖 ∧ W ∩ W′ ∈ AGG_MBh} for a set of relational variables 𝐖. Theorem 3 For any relational model M and skeleton σ_ER, X and Y are d-separated by Z on GG_Mσ_ER if X̄ and Ȳ are d-separated by Z̄ on AGG_MBh up to hop threshold h and for the common perspective B. Proof. Let M be an arbitrary relational model, let σ_ER be an arbitrary skeleton, and let X̄ and Ȳ be d-separated given Z̄ on AGG_MBh for three distinct, arbitrary sets of relational variables from perspective B up to hop threshold h. Assume for contradiction that there exists a b ∈ σ(B) such that X|_b and Y|_b are not d-separated by Z|_b in GG_Mσ_ER. Then, there exists a d-connecting path p from some x ∈ X|_b to some y ∈ Y|_b given all z ∈ Z|_b. By Theorem 2, AGG_MBh abstracts GG_Mσ_ER, so all edges in GG_Mσ_ER are captured by AGG_MBh. Thus, path p must be represented from every node in {N | x ∈ N|_b} to every node in {N | y ∈ N|_b} in AGG_MBh. If p is d-connecting in GG_Mσ_ER, then it is d-connecting in AGG_MBh, implying that X̄ and Ȳ are not d-separated by Z̄, a contradiction. So, X|_b and Y|_b must be d-separated by Z|_b for every b ∈ σ(B), and, by Definition 10, X and Y are d-separated by Z on GG_Mσ_ER. Theorem 4 Relational d-separation is sound and complete for abstract ground graphs up to a specified hop threshold. Proof. By Theorem 1, Corollary 1, and Theorem 3. 5 EXPERIMENTS To complement the theoretical results, we present a series of experiments on synthetic data. We implemented relational d-separation, as well as random generators of schemas, models, and queries. 5.1 ABSTRACT GROUND GRAPH SIZE Abstract ground graphs (AGGs) explicitly represent the intersection among relational variables and extend the canonically specified dependencies of relational models. Consequently, it is important to quantify how large an AGG can be (i.e., how many nodes and edges are created) and to determine which factors influence its size.
We ran 500 trials for each combination of the number of entities (1-4), relationships (ranging from one less than the number of entities to pairwise relationships, with randomly selected cardinalities), attributes for each entity and relationship (Pois(1.0) + 1), and dependencies (1-10). Figure 3: AGG size (average number of nodes and edges, from entity and relationship perspectives) as (left) the number of many cardinalities in the schema increases (dependencies fixed at 10) and (right) the number of dependencies increases. We discovered the following facts: (1) as the number of entities, relationships, attributes, and many cardinalities increases, the AGG grows with respect to both nodes and edges; (2) as the number of dependencies in the model increases, the number of edges increases, but the number of nodes is invariant; and (3) AGGs from relationship perspectives are larger than those from entity perspectives because more relational variables can be defined. Figure 3 depicts how AGG size (measured as the average number of nodes and edges) varies with respect to the number of many cardinalities in the schema and the number of dependencies in the model. Note that for a single entity, AGGs are equivalent to Bayesian networks. 5.2 MINIMAL SEPARATING SET SIZE Because AGGs can become large, one might expect that separating sets¹ would also grow to impractical sizes. Fortunately, relational d-separation produces minimal separating sets that are empirically observed to be small. We ran 100 trials for each setting of the number of entities (1-4), relationships (one less than the number of entities, with randomly selected cardinalities), attributes (total fixed at 10), and dependencies (1-10). For each relational model, we identified one minimal separating set for up to 100 randomly chosen pairs of conditionally independent relational variables.
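Given any separating set, a minimal one can be obtained by greedily discarding members as long as d-separation continues to hold, in the spirit of the minimal-d-separator algorithms of Tian et al. (1998). A self-contained sketch (the DAG encoding, helper names, and example graph are illustrative assumptions, not the paper's code):

```python
from itertools import combinations

def d_separated(dag, xs, ys, zs):
    """Moral-graph d-separation test; `dag` maps each node to its children."""
    need = set(xs) | set(ys) | set(zs)  # close under ancestors
    stack = list(need)
    while stack:
        v = stack.pop()
        for u in dag:
            if v in dag[u] and u not in need:
                need.add(u)
                stack.append(u)
    adj = {v: set() for v in need}      # moralize and drop directions
    for u in need:
        for v in dag[u] & need:
            adj[u].add(v)
            adj[v].add(u)
        for a, b in combinations([p for p in need if u in dag[p]], 2):
            adj[a].add(b)
            adj[b].add(a)
    reach = {x for x in xs if x not in zs}
    stack = list(reach)
    while stack:
        v = stack.pop()
        if v in ys:
            return False
        for w in adj[v] - set(zs) - reach:
            reach.add(w)
            stack.append(w)
    return True

def minimize_sepset(dag, x, y, z):
    """Greedily drop members of a separating set z for (x, y) until no
    single removal preserves d-separation, yielding a minimal set."""
    z = set(z)
    changed = True
    while changed:
        changed = False
        for v in sorted(z):
            if d_separated(dag, {x}, {y}, z - {v}):
                z.remove(v)
                changed = True
    return z

# Chain A -> B -> C -> D: {B, C} separates A from D, but so does {B} or {C}.
chain = {"A": {"B"}, "B": {"C"}, "C": {"D"}, "D": set()}
print(minimize_sepset(chain, "A", "D", {"B", "C"}))
```

The result is minimal in the sense used above: no proper subset obtained by dropping one member still separates, though a smaller separating set may exist elsewhere in the graph.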
To discover a minimal separating set between relational variables X and Y, we modified Algorithm 4 of Tian et al. (1998), starting with all parents of X̄ and Ȳ, augmented with the set of nodes they subsume in the AGG. Note that while the discovered separating sets are minimal (i.e., no proper subset is a separating set), they are not necessarily of minimum size. ¹ If X and Y are d-separated given Z, then Z is a separating set for X and Y.
Figure 4: Minimal separating sets have reasonable sizes, growing only with respect to model density. Figure 4 shows the frequency of separating set sizes as both the number of entities and the number of dependencies vary. The experimental results indicate that separating set size is strongly influenced by model density, primarily because the number of potential d-connecting paths increases as the number of dependencies increases. 5.3 EMPIRICAL VALIDITY As a practical demonstration, we examine how the expectations of the relational d-separation theory match the results of statistical tests on actual data. We parameterize relational models using additive linear equations, the average aggregate for relational variables, and uniformly distributed error terms. If Y has no parents, then Y = ε; otherwise, Y = 0.9 · (1/|par(Y)|) Σ_{X ∈ par(Y)} avg(X) + 0.1ε. To test a query X ⊥ Y | Z, we use linear regression, testing the coefficient of avg(X) in the equation Y = β0 + β1 avg(X) + ... + βi avg(Zi) for each Zi ∈ Z. For 100 trials, we randomly generated a schema and model for varying numbers of entities (1-4), relationships (one less than the number of entities), and attributes for each entity and relationship (Pois(1.0) + 1). We then tested up to 100 true and false relational d-separation queries across 100 skeletons (i.e., instantiated relational databases) with 1,000 instances of each entity. For each query, we measured the average strength of effect (measured as the proportion of remaining variance) and the proportion of trials for which each query was significant (α = 0.01, Bonferroni-adjusted for the number of queries per trial). Figure 5 depicts the distribution of the average strength of effect and the proportion of significant trials across both true and false queries for varying numbers of entities. In the vast majority of cases, relational d-separation is consistent with tests on actual data.
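The regression-based test described above can be illustrated end to end on a simple chain X → Z → Y (a sketch that replaces relational averages with scalar attributes but keeps the 0.9/0.1 weights and uniform noise from the parameterization above; the 2.58 cutoff approximates α = 0.01 for large samples):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Chain X -> Z -> Y with the additive form V = 0.9 * parent + 0.1 * eps
# (uniform error terms, as in the parameterization above).
X = rng.uniform(-1, 1, n)
Z = 0.9 * X + 0.1 * rng.uniform(-1, 1, n)
Y = 0.9 * Z + 0.1 * rng.uniform(-1, 1, n)

def t_stat(y, regressors):
    """OLS t-statistic for the coefficient of the first regressor."""
    A = np.column_stack([np.ones_like(y)] + regressors)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    sigma2 = resid @ resid / (len(y) - A.shape[1])
    cov = sigma2 * np.linalg.inv(A.T @ A)
    return beta[1] / np.sqrt(cov[1, 1])

# X predicts Y marginally, but not once Z (the separating set) is included.
t_marginal = t_stat(Y, [X])
t_conditional = t_stat(Y, [X, Z])
print(round(abs(t_marginal), 1), round(abs(t_conditional), 1))
```

Here the coefficient of X is significant in the marginal regression and not in the conditional one, matching the d-separation facts X ⊥̸ Y and X ⊥ Y | Z for the chain.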
For approximately 17,000 true queries, 0.8% have an average effect size greater than 0.01, 3.7% are significant in more than one trial, and only 0.7% cross both thresholds. Figure 5: The relational d-separation theory closely matches the results of statistical tests on actual data. Aside from Type I error, a small number of cases exhibit an interaction between aggregation and relational structure (i.e., the cardinality of relational variables). Simple linear regression does not account for these interaction effects, suggesting the need for more accurate statistical tests of conditional independence for relational data. 6 SUMMARY AND DIRECTIONS In this paper, we extend the theory of d-separation to models of relational data. We formally define relational d-separation and offer a sound, complete, and computationally efficient approach to deriving conditional independence facts from relational models. We also provide an empirical evaluation of relational d-separation on synthetic data. The results of this paper imply flaws in the design and analysis of some real-world studies. If researchers of social or economic systems choose inappropriate data and model representations, then their analyses may omit important classes of dependencies (i.e., they may conclude causal dependence where conditional independence was not detected). Our work indicates that researchers should carefully consider how to represent their domains in order to accurately reason about conditional independence. Our experiments also suggest that more accurate tests of conditional independence for relational data need to be developed, specifically tests that can address the interaction of relational structure and aggregation.
Additionally, this work focuses on relational models of attributes; future work should consider models of relationship and entity existence. Finally, the theory could also be extended to incorporate functional or deterministic dependencies, as D-separation does for Bayesian networks. Acknowledgements The authors wish to thank Cindy Loiselle for her editing expertise and Katerina Marazopoulou for reviewing the theoretical contributions. This effort is supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center contract number D11PC20152, by the Air Force Research Lab under agreement number FA, and by Science Applications International Corporation (SAIC) and DARPA under contract number P. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright notation hereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, AFRL, SAIC, DARPA, or the U.S. Government.
References
N. Friedman. Inferring cellular networks using probabilistic graphical models. Science, 303, 2004.
D. Geiger and J. Pearl. On the logic of causal models. In Proceedings of the Fourth Annual Conference on Uncertainty in Artificial Intelligence, 1988.
D. Geiger, T. Verma, and J. Pearl. Identifying independence in Bayesian networks. Networks, 20, 1990.
L. Getoor. Learning Statistical Models from Relational Data. PhD thesis, Stanford University.
L. Getoor and B. Taskar, editors. Introduction to Statistical Relational Learning. MIT Press, Cambridge, MA, 2007.
L. Getoor, N. Friedman, D. Koller, A. Pfeffer, and B. Taskar. Probabilistic relational models. In L. Getoor and B. Taskar, editors, Introduction to Statistical Relational Learning. MIT Press, Cambridge, MA, 2007.
D. Heckerman, C. Meek, and D. Koller. Probabilistic entity-relationship models, PRMs, and plate models. In L. Getoor and B. Taskar, editors, Introduction to Statistical Relational Learning. MIT Press, Cambridge, MA, 2007.
E. Segal, B. Taskar, A. Gasch, N. Friedman, and D. Koller. Rich probabilistic models for gene expression. Bioinformatics, 17(suppl 1):S243-S252, 2001.
P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, Cambridge, MA, 2nd edition, 2000.
B. Taskar, E. Segal, and D. Koller. Probabilistic classification and clustering in relational data. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, 2001.
J. Tian, A. Paz, and J. Pearl. Finding minimal d-separators. Technical Report R-254, UCLA Computer Science Department, February 1998.
I. Tsamardinos, L. E. Brown, and C. F. Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1):31-78, October 2006.
T. Verma and J. Pearl. Causal networks: Semantics and expressiveness. In Proceedings of the Fourth Annual Conference on Uncertainty in Artificial Intelligence, 1988.
More informationCAUSAL MODELS: THE MEANINGFUL INFORMATION OF PROBABILITY DISTRIBUTIONS
CAUSAL MODELS: THE MEANINGFUL INFORMATION OF PROBABILITY DISTRIBUTIONS Jan Lemeire, Erik Dirkx ETRO Dept., Vrije Universiteit Brussel Pleinlaan 2, 1050 Brussels, Belgium jan.lemeire@vub.ac.be ABSTRACT
More informationIntegrating Correlated Bayesian Networks Using Maximum Entropy
Applied Mathematical Sciences, Vol. 5, 2011, no. 48, 2361-2371 Integrating Correlated Bayesian Networks Using Maximum Entropy Kenneth D. Jarman Pacific Northwest National Laboratory PO Box 999, MSIN K7-90
More informationNon-impeding Noisy-AND Tree Causal Models Over Multi-valued Variables
Non-impeding Noisy-AND Tree Causal Models Over Multi-valued Variables Yang Xiang School of Computer Science, University of Guelph, Canada Abstract To specify a Bayesian network (BN), a conditional probability
More informationMonotone DAG Faithfulness: A Bad Assumption
Monotone DAG Faithfulness: A Bad Assumption David Maxwell Chickering Christopher Meek Technical Report MSR-TR-2003-16 Microsoft Research Microsoft Corporation One Microsoft Way Redmond, WA 98052 Abstract
More informationTowards Association Rules with Hidden Variables
Towards Association Rules with Hidden Variables Ricardo Silva and Richard Scheines Gatsby Unit, University College London, WCN AR London, UK Machine Learning Department, Carnegie Mellon, Pittsburgh PA,
More informationDirected and Undirected Graphical Models
Directed and Undirected Davide Bacciu Dipartimento di Informatica Università di Pisa bacciu@di.unipi.it Machine Learning: Neural Networks and Advanced Models (AA2) Last Lecture Refresher Lecture Plan Directed
More informationBayesian Networks. Vibhav Gogate The University of Texas at Dallas
Bayesian Networks Vibhav Gogate The University of Texas at Dallas Intro to AI (CS 6364) Many slides over the course adapted from either Dan Klein, Luke Zettlemoyer, Stuart Russell or Andrew Moore 1 Outline
More informationAutomatic Differentiation Equipped Variable Elimination for Sensitivity Analysis on Probabilistic Inference Queries
Automatic Differentiation Equipped Variable Elimination for Sensitivity Analysis on Probabilistic Inference Queries Anonymous Author(s) Affiliation Address email Abstract 1 2 3 4 5 6 7 8 9 10 11 12 Probabilistic
More informationOutline. CSE 573: Artificial Intelligence Autumn Agent. Partial Observability. Markov Decision Process (MDP) 10/31/2012
CSE 573: Artificial Intelligence Autumn 2012 Reasoning about Uncertainty & Hidden Markov Models Daniel Weld Many slides adapted from Dan Klein, Stuart Russell, Andrew Moore & Luke Zettlemoyer 1 Outline
More informationLearning Causal Bayesian Networks from Observations and Experiments: A Decision Theoretic Approach
Learning Causal Bayesian Networks from Observations and Experiments: A Decision Theoretic Approach Stijn Meganck 1, Philippe Leray 2, and Bernard Manderick 1 1 Vrije Universiteit Brussel, Pleinlaan 2,
More informationCS839: Probabilistic Graphical Models. Lecture 2: Directed Graphical Models. Theo Rekatsinas
CS839: Probabilistic Graphical Models Lecture 2: Directed Graphical Models Theo Rekatsinas 1 Questions Questions? Waiting list Questions on other logistics 2 Section 1 1. Intro to Bayes Nets 3 Section
More informationPublished in: Tenth Tbilisi Symposium on Language, Logic and Computation: Gudauri, Georgia, September 2013
UvA-DARE (Digital Academic Repository) Estimating the Impact of Variables in Bayesian Belief Networks van Gosliga, S.P.; Groen, F.C.A. Published in: Tenth Tbilisi Symposium on Language, Logic and Computation:
More informationA Distinction between Causal Effects in Structural and Rubin Causal Models
A istinction between Causal Effects in Structural and Rubin Causal Models ionissi Aliprantis April 28, 2017 Abstract: Unspecified mediators play different roles in the outcome equations of Structural Causal
More informationBasic System and Subsystem Structures in the Dataflow Algebra. A. J. Cowling
Verification Testing Research Group, Department of Computer Science, University of Sheffield, Regent Court, 211, Portobello Street, Sheffield, S1 4DP, United Kingdom Email: A.Cowling @ dcs.shef.ac.uk Telephone:
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Computer Science! Department of Statistical Sciences! rsalakhu@cs.toronto.edu! h0p://www.cs.utoronto.ca/~rsalakhu/ Lecture 7 Approximate
More informationBayesian Networks: Construction, Inference, Learning and Causal Interpretation. Volker Tresp Summer 2014
Bayesian Networks: Construction, Inference, Learning and Causal Interpretation Volker Tresp Summer 2014 1 Introduction So far we were mostly concerned with supervised learning: we predicted one or several
More informationAdvances in Cyclic Structural Causal Models
Advances in Cyclic Structural Causal Models Joris Mooij j.m.mooij@uva.nl June 1st, 2018 Joris Mooij (UvA) Rotterdam 2018 2018-06-01 1 / 41 Part I Introduction to Causality Joris Mooij (UvA) Rotterdam 2018
More informationRECOVERING NORMAL NETWORKS FROM SHORTEST INTER-TAXA DISTANCE INFORMATION
RECOVERING NORMAL NETWORKS FROM SHORTEST INTER-TAXA DISTANCE INFORMATION MAGNUS BORDEWICH, KATHARINA T. HUBER, VINCENT MOULTON, AND CHARLES SEMPLE Abstract. Phylogenetic networks are a type of leaf-labelled,
More informationNormal Fans of Polyhedral Convex Sets
Set-Valued Analysis manuscript No. (will be inserted by the editor) Normal Fans of Polyhedral Convex Sets Structures and Connections Shu Lu Stephen M. Robinson Received: date / Accepted: date Dedicated
More information