Approximate Online Inference for Dynamic Markov Logic Networks

Thomas Geier, Susanne Biundo
Institute of Artificial Intelligence, Ulm University, Ulm, Germany
forename.surname@uni-ulm.de

Abstract: We examine the problem of filtering for dynamic probabilistic systems using Markov Logic Networks. We propose a method to approximately compute the marginal probabilities for the current state variables that is suitable for online inference. Contrary to existing algorithms, our approach does not work on the level of belief propagation, but can be used with every algorithm suitable for inference in Markov Logic Networks, such as MC-SAT. We present an evaluation of its performance on two dynamic domains.

Keywords: Markov logic networks; dynamic probabilistic inference; online inference

I. INTRODUCTION

One of the goals of Artificial Intelligence research is to provide technical systems with the ability to take on tasks in a real-world setting. A popular and very apparent example is robot navigation and acting; but the problem stretches further to any sensory platform, like a PC equipped with a microphone or video camera, or even a PC with only mouse and keyboard input. Any such system contains an explicit or, most of the time, implicit model of its environment. For example, even the saved user preferences of some software product are a model of the product's user.

Most current technical systems rely on deterministic models. But in order to gain flexibility and robustness, the incorporation of uncertainty is sometimes beneficial to accommodate partial observability or nondeterministic behavior. Incorporating statistical knowledge to make best guesses about a user's preferences can enhance situations where the total time of interaction is not long enough to justify letting the user explicitly state any preferences. For example, a ticket vending machine at a train station can categorize a user as a beginner or an expert in order to decide between a very simple or a more efficient user interface on the fly.

One tool for creating such probabilistic models is Markov Logic Networks (MLNs) [1]. They combine first-order predicate logic with probabilistic semantics and allow a more abstract and convenient way of modeling probabilistic systems than working at the propositional level, as is necessary when crafting Bayesian networks directly. Probabilistic weights can be provided by experts, taken from the literature, or retrieved from data sets by parameter learning techniques.

In addition, to be able to model more complex systems it is often necessary to incorporate time, for systems that change dynamically and where information from a past state shall be carried over by means of the probabilistic model. Such problems can range from object tracking to the development of markets or social networks. In two experimental setups, dynamic MLNs have already been used to model and recognize events from pre-processed video data of a parking lot surveillance scene [2] and from GPS data recorded during a game of capture the flag [3]. In both settings, the MLNs have been applied in an offline mode, where the inference is done over a defined time frame, usually the whole experiment, after the experiment is over. For real applications this solution is often insufficient and a method is needed that can run online and in real-time.

Recent publications by Nath and Domingos address the need for efficient online inference methods for MLNs [4], [5]. Their approach of Expanding Frontier Belief Propagation (EFBP) describes a message computation schedule for belief propagation. The method aims to reuse past computation results when making incremental changes to the model. Although belief propagation is usually very fast, it can fail to converge to the correct solution or even enter into oscillation when applied to loopy graphs.
For this reason, we propose an alternative approach to reusing past inference results, which is not limited to belief propagation, but can also be applied to MCMC methods and inference algorithms specific to MLNs, like MC-SAT. This is achieved by constructing a new MLN for each time step, which is then augmented with additional information taken from the marginal probabilities obtained during the computation for the last time step. The disadvantage of not working on the level of message passing is a loss in flexibility. While EFBP can be tuned to weigh accuracy against speed by the use of a parameter, the approach presented in this paper is fixed to compute only approximate results.

The rest of the paper is structured as follows. First we describe MLNs and our approximation method. Then we give an overview of the relevant related work. Finally we provide an evaluation of the method using two dynamic domains. The paper closes with a conclusion and a perspective on future applications.

II. MARKOV LOGIC NETWORKS

In the following paragraphs, we describe MLNs and their probabilistic semantics. After introducing dynamic Markov Logic Networks (DMLNs), we define slice networks as an approximation to DMLNs.

A Markov logic network L = {(f_1, w_1), ..., (f_n, w_n)} for n ∈ ℕ is a set of first-order formulas f_1, ..., f_n that are given weights w_1, ..., w_n ∈ ℝ. Together with a finite set of constants C, they define a probability distribution over all interpretations (or possible worlds). An interpretation maps each grounding of each predicate to a truth value. Let g(f) be the set of groundings of a formula f, obtained by replacing the free variables in f by all combinations of constants from C. Given an interpretation x, n_i(x) := |{g | g ∈ g(f_i) and x ⊨ g}| is the number of groundings of formula f_i that are true under x. The probability distribution P_L defined by the MLN L is then given as

    P_L(X = x) := (1/Z) exp( Σ_i w_i n_i(x) ),    (1)

where i ranges over all formulas in L and Z is the normalization constant. Note that we can consider an interpretation to be an assignment to a multivariate probability distribution where the random variables are the truth values of the elements of the Herbrand base (the atoms formed by the grounding of the predicates). We can thus compute the marginal probability of a ground atom p being true as

    P_L(p) := Σ_x P_L(X = x) I_x(p),    (2)

where the function I_x maps p to 1 if x ⊨ p and to 0 otherwise.
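To make Eqs. (1) and (2) concrete, the following sketch (written in Scala, the language our implementation in Section IV uses) evaluates them by brute-force enumeration of all interpretations of a tiny ground network. The representation, the atom names, and the single ground rule are illustrative assumptions for this sketch only; this is not the inference code used in the experiments.

object TinyGroundMln {
  type Interpretation = Map[String, Boolean]
  // A ground formula carries its weight and a test against an interpretation.
  case class GroundFormula(weight: Double, satisfiedBy: Interpretation => Boolean)

  // exp( sum of weights of satisfied ground formulas ), i.e. Eq. (1) before normalization.
  def unnormalizedScore(fs: Seq[GroundFormula], x: Interpretation): Double =
    math.exp(fs.collect { case f if f.satisfiedBy(x) => f.weight }.sum)

  // P_L(atom = true) as in Eq. (2), summing over all 2^n interpretations.
  def marginal(atoms: Seq[String], fs: Seq[GroundFormula], atom: String): Double = {
    val worlds = atoms.foldLeft(Seq(Map.empty[String, Boolean])) { (acc, a) =>
      acc.flatMap(m => Seq(m + (a -> true), m + (a -> false)))
    }
    val z = worlds.map(unnormalizedScore(fs, _)).sum // normalization constant Z
    worlds.filter(_(atom)).map(unnormalizedScore(fs, _)).sum / z
  }

  def main(args: Array[String]): Unit = {
    // Two ground atoms and the ground rule smokes(A) => cancer(A) with weight 1.0.
    val atoms = Seq("smokes(A)", "cancer(A)")
    val rules = Seq(GroundFormula(1.0, x => !x("smokes(A)") || x("cancer(A)")))
    println(marginal(atoms, rules, "cancer(A)"))
  }
}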
For practical reasons a sorted (or typed) logical language is used for MLNs. In order to model dynamic domains we assign a dedicated time sort T whose constants are elements from the natural numbers, i.e., C_T ⊆ ℕ. We demand that the time parameter appears as the first argument of every time-dependent predicate. A dynamic Markov logic network is an MLN for which such a dedicated time sort T exists. We call a DMLN pure if all predicates are time-dependent.

Table I: Listing of the MLNs used for evaluation. The upper one, an adaptation of the classic smokers example to a dynamic domain, is taken from Kersting et al. [6]. The lower one is inspired by the social force model for pedestrian movement [7].

(a) Dynamic Smokers Domain

// smoking causes cancer
1.0 smokes(t,x) => cancer(t,x)
// friends share smoking habits
1.0 friends(t,x,y) => (smokes(t,x) <=> smokes(t,y))
// friendship persists over time
3.0 friends(t,x,y) <=> friends(t+1,x,y)
// smoking persists over time
3.0 smokes(t,x) <=> smokes(t+1,x)

(b) Social Force Domain

// predicate is functional in Location
at(t, Agent, Location!)
// only move one step at a time
2   at(t+1,a,x) => at(t,a,x-1) v at(t,a,x) v at(t,a,x+1)
// two agents do not occupy the same location
1.5 !(at(t,a1,x) ^ at(t,a2,x))

A. Slice Networks

The main idea of the paper is to reuse old computations when doing inference over a DMLN progressing in time. When employing a belief propagation algorithm, this can be achieved by selectively updating only newer nodes while reusing the messages emitted by older nodes, as is done by EFBP. In contrast, reusing past results when performing inference with MCMC methods is not as simple, because new evidence usually invalidates already obtained samples. Also, MC-SAT does not lend itself very well to tweaking, because it requires that factors are given as weighted logical formulas. Thus one cannot simply add arbitrary factors to an existing network.

We overcome these problems by using the marginal probabilities of ground atoms of a past time step to construct weighted formulas to be included in the network for the next time step. These formulas capture a summary over the removed messages from the older time slice. Since the result is simply a new MLN, we can then run inference with every algorithm that is suited to compute marginal probabilities for MLNs. We are now going to introduce the necessary vocabulary.

If L is a pure DMLN, then L[t] is a DMLN for which the time sort has only the two constants t-1 and t, i.e., C_T = {t-1, t}, and all formulas that contain only atoms of a single time step are fixed to the time t. We call such a temporal fragment a slice network. The Herbrand base of a slice for time step t thus contains only the ground atoms of times t and t-1. The intra-time formulas that relate variables at time t-1 are removed. Figure 1 illustrates which components of an unrolled network are instantiated for a certain slice when the ground MLN is seen as a factor graph. Note that ground formulas are factor nodes and ground atoms are variable nodes.

Figure 1: The figure shows a factor graph representation of the ground slice network for time t in bold, while the complete network over the whole time span (t-2, t-1, t, t+1) is indicated dashed. The box surrounds the ground atoms associated with time t. The square nodes are the factor nodes, which are induced by ground formulas. The circles are variable nodes representing ground atoms. Notice that the slice network at time t contains only the intra-time formula connecting variables a and b for time t, although both variables are also instantiated for time t-1.
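As a small illustration of this construction, the sketch below builds the slice formulas of the dynamic smokers domain of Table I from string templates. The template representation and the placeholders T, T0, and T1 are simplifying assumptions made for this sketch; the actual system operates on first-order formulas directly.

object SliceConstruction {
  // interTime marks templates that connect two consecutive time steps.
  case class Template(weight: Double, body: String, interTime: Boolean)

  val smokers = Seq(
    Template(1.0, "smokes(T,x) => cancer(T,x)", interTime = false),
    Template(1.0, "friends(T,x,y) => (smokes(T,x) <=> smokes(T,y))", interTime = false),
    Template(3.0, "friends(T0,x,y) <=> friends(T1,x,y)", interTime = true),
    Template(3.0, "smokes(T0,x) <=> smokes(T1,x)", interTime = true))

  // L[t]: intra-time templates are fixed to time t, inter-time templates span (t-1, t);
  // intra-time formulas for t-1 are dropped, as required by the definition above.
  def slice(templates: Seq[Template], t: Int): Seq[(Double, String)] =
    templates.map {
      case Template(w, b, false) => (w, b.replace("T", t.toString))
      case Template(w, b, true)  => (w, b.replace("T0", (t - 1).toString).replace("T1", t.toString))
    }

  def main(args: Array[String]): Unit =
    slice(smokers, 3).foreach { case (w, f) => println(s"$w $f") }
}

Running slice(smokers, 3) yields exactly the slice network for time 3 that is given in the textual example below.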

As a textual example, we look at the smokers domain from Table I. To create the slice for time 3, we ground all inter-time formulas to the time frame between 2 and 3 and ground all intra-time formulas to the time 3. The resulting slice network is given as follows:

// intra-time formulas
1.0 smokes(3,x) => cancer(3,x)
1.0 friends(3,x,y) => (smokes(3,x) <=> smokes(3,y))
// inter-time formulas
3.0 friends(2,x,y) <=> friends(3,x,y)
3.0 smokes(2,x) <=> smokes(3,x)

What is still left to describe is the process of transferring information from one slice to the next. This is done by adding formulas that capture the marginal distribution of each ground atom as it was calculated during the last slice; we call this process augmentation. Given an MLN L, a set of ground atoms V, and a function p : V → [0, 1] that maps atoms to their marginal probabilities, we define the augmented MLN by

    L ⊕ (V, p) := L ∪ { (f, ln( p(f) / (1 - p(f)) )) | f ∈ V }.    (3)

The augmentation preserves only some information from the previous slice. To stay exact, it would be necessary to carry over the joint distribution over the random variables in the last time step. We approximate the joint distribution by the marginal distributions over the atoms inside the last time step, assuming marginal independence. This approximation is also done in the factored frontier algorithm for dynamic Bayesian networks [8].

Given a pure DMLN L, we define the sequence of augmented slice networks as the augmented MLNs L_i, with 0 ≤ i, in the following way:

    L_0 := L[0]                          (4)
    L_i := L[i] ⊕ (V_{i-1}, p_{i-1})     (5)

The second definition uses the set V_{i-1} to refer to the ground atoms of time i-1. Also, we have abbreviated the marginal probabilities described by the MLN L_{i-1} with p_{i-1}.

The reasoning why the augmentation is defined as described goes as follows. We assume that we are performing belief propagation on the ground factor graph, where each ground formula corresponds to a factor node and is adjacent to the variable nodes of the ground atoms that appear inside the formula. We want to create a new factor f_v for each variable node v of time t-1 in slice t that summarizes the incoming messages to v originating from the factors removed during the instantiation of slice t-1. Since all factors that were present during slice t-1 are removed in slice t, we must summarize all incoming messages for v. Fortunately, the marginal distribution of v during slice t-1 is exactly the summary over those messages. And this can be obtained even if we do not have access to the messages in the first place, e.g., when running an MCMC algorithm. Thus by adding a factor that emits the marginal distribution of v during the last slice, i.e., p_{t-1}(v), as its constant message to v, we capture the frozen messages from slice t-1 and can put them into the model for slice t. This construction is sound and exact as long as the factor graph contains no cycles spanning over two time steps, because then the messages coming from the older part will not change in the light of information coming from the newer network and can be safely frozen.

We have now defined a series of MLNs, which we call slice MLNs, that each range over two time steps. The marginal probabilities that are defined by them are an approximation to the marginal probabilities defined by the DMLN that ranges over the complete time span. The definition indicates the intended way of inferring those probabilities.
This is done by successively constructing the slice networks, inferring their marginal probabilities, constructing the next slice using the output of the last, and so forth. For computing the marginal probabilities of a slice, every inference algorithm that works on MLNs and computes marginal probabilities can be used.
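The sketch below summarizes this procedure as code, combining the augmentation of Eq. (3) with the recursion of Eqs. (4) and (5). The names sliceAt, inferMarginals, and atomsOfTime are hypothetical stand-ins for the construction of L[t], an arbitrary marginal inference routine for MLNs (e.g. MC-SAT or Gibbs sampling), and the restriction to the ground atoms V_t; none of them refers to a real library.

object SliceFiltering {
  case class WeightedFormula(weight: Double, formula: String)
  type Mln = Seq[WeightedFormula]
  type Marginals = Map[String, Double] // ground atom -> P(atom = true)

  // Eq. (3): one weighted unit clause per ground atom of the previous step,
  // with weight ln(p / (1 - p)); marginals of exactly 0 or 1 would need clamping in practice.
  def augment(slice: Mln, prev: Marginals): Mln =
    slice ++ prev.map { case (atom, p) => WeightedFormula(math.log(p / (1 - p)), atom) }

  // Eqs. (4)-(5): L_0 = L[0]; L_i = L[i] augmented with the marginals of L_{i-1}
  // restricted to the ground atoms of time i-1.
  def filter(sliceAt: Int => Mln,
             inferMarginals: Mln => Marginals,
             atomsOfTime: (Marginals, Int) => Marginals,
             steps: Int): Seq[Marginals] = {
    val first = inferMarginals(sliceAt(0))
    (1 until steps).scanLeft(first) { (prev, t) =>
      inferMarginals(augment(sliceAt(t), atomsOfTime(prev, t - 1)))
    }
  }
}

Note that each call to inferMarginals only ever sees a network spanning two time steps, which is what bounds the effort per time step.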

III. RELATED WORK

Nath and Domingos have made two contributions to online inference for MLNs in 2010. In "Efficient Lifting for Online Probabilistic Inference" [4] they describe a way to update lifted networks in order to reduce the cost of the lifting procedure when dealing with incremental changes. As a dynamic application, they apply their algorithm to the computer vision task of video background segmentation over short snippets (about ten frames) of motion data. In "Efficient Belief Propagation for Utility Maximization and Repeated Inference" [5], they describe an algorithm called Expanding Frontier Belief Propagation, whose purpose is to reuse as much information from an earlier run of belief propagation as possible. This approach is applicable to changes of evidence and network structure, and can thus be useful for online inference. We describe this approach in more detail, as it is related to the idea of slice networks.

EFBP begins by performing normal belief propagation on a Markov network. The final messages after convergence are stored. Then the network gets changed, which in our case means the addition of a new temporal step and the according evidence. All nodes that are directly affected by the change are considered active. Then belief propagation is performed with only the active nodes recomputing their messages. The new messages are constantly compared with the stored messages from the computation before the network update. If a non-active node receives a message that differs by more than a predefined constant from the old message, then it gets activated and participates in belief propagation for the updated network.

The nodes that are activated directly after a network update in EFBP are the nodes that are part of the current slice in our approach. The difference to the slice network approach is that we cannot activate additional nodes, and thus our method is a special case of EFBP where the message threshold is infinity and no nodes are activated. But in contrast to EFBP, slice networks allow using different inference algorithms, whereas EFBP requires belief propagation.

IV. EVALUATION

We have implemented the slice method for MLNs and a set of inference algorithms based on a factor graph representation using the Scala programming language. The Alchemy system (http://alchemy.cs.washington.edu) and PyMLNs (http://www9-old.in.tum.de/people/jain/mlns) have been used as reference implementations. The system, including the experiments, is available for download at our website (http://www.uni-ulm.de/in/ki/forschung/mln/s). The experiments were run on an Intel Core2 CPU with 2.8 GHz and 4 GB of RAM.

We have evaluated the approach using two domains. The listing for both is given in Table I. The first domain is a simple model of pedestrian movement. Agents may wander in a one-dimensional space and they are repelled by other agents. We have simulated this problem for 20 time steps and two agents. Evidence was added as indicated in the figure. We have used loopy belief propagation with a flooding schedule as the inference algorithm. We ran four parallel computations until the marginals had converged below a variance of 10^-5. The probability distributions are graphically visualized in Figure 2. The difference between inference using slice networks (a) and normal inference (b) is very small for the given domain and problem. Whether the error produced by the approximation is acceptable depends on the inference task and the requirements of the application.

Figure 2: Two plots of the probability distribution for the location of agent A only, against time ((a) slice filtering, (b) normal filtering; location on the vertical axis). We have observed the positions of both agents A and B at several time steps, annotated by letters. For example, at time 5, agent A is at location 2 and agent B is at location 3. Darker cells represent a higher probability of agent A occupying the location. We have plotted the slice approximation (a) and inference over the complete unrolled network up to the current time step (b). Notice that the approximation does not differ much from the exact solution.
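For reference, the stopping criterion used in both experiments can be sketched as follows, under the assumption that it checks, for every ground atom, the variance of its estimated marginal across the parallel runs and stops once the largest such variance falls below the threshold (10^-5 above, 0.003 in the second experiment). The representation of the estimates is an assumption of this sketch.

object Convergence {
  def variance(xs: Seq[Double]): Double = {
    val mean = xs.sum / xs.size
    xs.map(x => (x - mean) * (x - mean)).sum / xs.size
  }

  // estimates: one map (ground atom -> estimated marginal) per parallel run
  def converged(estimates: Seq[Map[String, Double]], threshold: Double): Boolean =
    estimates.head.keys.forall(atom => variance(estimates.map(_(atom))) < threshold)
}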
For the second experiment, we have measured the running times of the slice version of Gibbs sampling and MC-SAT against inference on the unrolled network. This was done using the dynamic smokers domain by Kersting et al. [6]. We have chosen a simpler setup than theirs, with only four persons, to reduce the inference times for the experiment. We have simulated the problem for 12 time steps. Again we ran

four parallel computations until the marginal probabilities converged to a maximum variance below 0.003. To help the convergence of the Gibbs sampler, we have reduced the weights on some formulas in contrast to the original model. We have also added random evidence, observing a random truth value for each ground atom with a probability of 0.5, thus observing about one half of the random variables. We compare the computation time of unrolled inference for each increment of time against the incremental inference times for the slice network approach. The CPU time for the unrolled computations increases with the time span over which to infer, while the computation times for the slicing approach stay roughly constant. As expected, the effort for the slice approach is very low compared to redoing inference on the unrolled network at each time step.

Figure 3: The plots list the running time of Gibbs sampling and MC-SAT, both on the unrolled network ("normal") and incrementally using the slicing approach ("slice"), for 12 time steps ((a) Gibbs sampling, (b) MC-SAT; CPU time in seconds on a logarithmic scale against the time step).

V. CONCLUSION AND FUTURE WORK

We have presented an approach for the approximate computation of marginal probabilities for DMLNs that is suitable for online inference. The approximation ensures that at every time step only a limited amount of computation must be performed. The concept of slice MLNs is both a special case of the EFBP algorithm and a generalization of it. The EFBP algorithm is more flexible than using slice networks because it allows trading speed for improved accuracy. On the other hand, EFBP is limited to belief propagation inference, while slice networks allow employing any inference algorithm for marginal probabilities of MLNs.

In the future, we intend to apply probabilistic models based on MLNs to the integration of sensory data with symbolic knowledge. Among the planned applications is the selection of output modalities for multi-modal user interfaces based on the environmental situation, like lighting and noise level. Properties of the user and personal preferences shall also be taken into account. For this application, a fast inference algorithm is particularly important in order to reduce the experienced lag of the resulting system.

ACKNOWLEDGMENT

This work is done within the Transregional Collaborative Research Centre SFB/TRR 62 "Companion-Technology for Cognitive Technical Systems" funded by the German Research Foundation (DFG).

REFERENCES

[1] M. Richardson and P. Domingos, "Markov logic networks," Machine Learning, vol. 62, no. 1-2, pp. 107-136, 2006.
[2] S. Tran and L. Davis, "Event modeling and recognition using Markov logic networks," Computer Vision - ECCV 2008, pp. 610-623, 2008.
[3] A. Sadilek and H. Kautz, "Recognizing multi-agent activities from GPS data," in Proceedings of the 24th AAAI Conference on Artificial Intelligence, 2010, pp. 1134-1139.
[4] A. Nath and P. Domingos, "Efficient lifting for online probabilistic inference," in Proceedings of the 24th AAAI Conference on Artificial Intelligence, 2010, pp. 1194-1198.
[5] A. Nath and P. Domingos, "Efficient belief propagation for utility maximization and repeated inference," in Proceedings of the 24th AAAI Conference on Artificial Intelligence, 2010, pp. 1187-1192.
[6] K. Kersting, B. Ahmadi, and S. Natarajan, "Counting belief propagation," in Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2009, pp. 277-284.
[7] D. Helbing and P. Molnár, "Social force model for pedestrian dynamics," Phys. Rev. E, vol. 51, no. 5, pp. 4282-4286, 1995.
[8] K. Murphy and Y. Weiss, "The factored frontier algorithm for approximate inference in DBNs," in Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, 2001, pp. 378-385.