Knowledge and Action in Semi-Public Environments
Wiebe van der Hoek, Petar Iliev, and Michael Wooldridge
University of Liverpool, United Kingdom

Abstract. We introduce and study the notion of a Public Environment: a system in which a publicly known program is executed in an environment that is partially observable to agents in the system. Although agents do not directly have access to all variables in the system, they may come to know the values of unobserved variables because they know how the program is manipulating these variables. We develop a logic for reasoning about Public Environments, and an axiomatization of the logic.

1 Introduction

Our primary concern in the present paper is the following issue: Suppose that a number of agents are engaged in a commonly known algorithmic activity in some environment. What can be said about (the evolution of) their knowledge if they cannot make a complete and correct observation of their environment?

To investigate this issue, we introduce a computational model for epistemic logic known as Public Environments (pes), and we then investigate the notions of knowledge that arise in pes. pes build upon the well-known interpreted systems model of knowledge [6], in which the knowledge of an agent is characterised via an epistemic indistinguishability relation over system states, whereby two states s and t are said to be indistinguishable to an agent i if the variables visible to i have the same values in s and t. In pes, as in the interpreted systems model of knowledge [6], agents have access to a local set of variables. But, crucially, their knowledge can be based on a refinement of the indistinguishability relation that derives from these variables: agents may observe the occurrence of an action, and from this be able to rule out some states, even though the local information in such states is the same as in the current state.
Moreover, the protocol in a pe is a standard pdl program, and it is commonly known that the program under execution is visible to all agents in the pe. To better understand this idea, suppose there is some pdl program π that is being executed, and that π manipulates the values of the variables in the system. The program π is assumed to be public knowledge. Suppose agent i sees the variables y and z, and this is commonly known. So, it is public knowledge that i knows the value of y and z, at any stage of the program. There is a
third variable, x, which is not visible to i. Now, suppose that the program π is actually the assignment x := y. After this program is executed, it will be common knowledge that i knows the (new) value of x, even though the value of x is not visible to i, because i sees the value of y and knows that the program assigned this value to x. If the program y := x is executed instead, then i will come to learn the (old and current) value of x. Were the assignment y := x + z, then i would learn the value of x as well. Thus agents can learn the values of variables through the execution of certain publicly known programs, even though those variables are not visible to the agents. The kind of questions that we want to investigate using pes typically relate to, for example, how the process of executing a publicly known program in a pe will affect the knowledge of agents who are able to see only some subset of the variables that the program is manipulating. Using pes, it is possible to analyse problems like the Russian Cards [4] and the Dining Cryptographers [3], and indeed many game-like scenarios, by incorporating the protocol (the outcome of the tossing of a coin, the passing of a card from one player to another) explicitly in the object language: we are not restricted to modelling every action by an information update, as in DEL. So, pes address the problem of Dynamic Epistemic Logic with factual change [9]. [5] also studies Dynamic Epistemic Logic with assignments, but whereas our assignments are restricted to program variables, the assignments in [5] concern the assignment of the truth value of an arbitrary formula to a propositional atom.

2 Motivation

Recall our motivating concern. Suppose that a number of agents are engaged in a commonly known algorithmic activity. What can be said about (the evolution of) their knowledge if they cannot make a complete and correct observation of their environment?
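The x := y example from the introduction can be replayed as a small simulation: an agent's epistemic alternatives are the states agreeing with the actual state on its visible variables, and a public assignment updates every alternative and discards those no longer consistent with what the agent sees. The encoding below (state dictionaries and a visibility set) is a simplified illustration of this idea, not the formal machinery developed later in the paper.

```python
from itertools import product

# Toy version of the x := y example: states are valuations of (x, y, z);
# agent i sees only y and z.
VISIBLE = ("y", "z")

def indist(s, t):
    """Agent i cannot tell s from t iff they agree on the visible variables."""
    return all(s[v] == t[v] for v in VISIBLE)

def assign_x_gets_y(s):
    return {**s, "x": s["y"]}

states = [dict(zip("xyz", bits)) for bits in product((0, 1), repeat=3)]
actual = {"x": 0, "y": 1, "z": 0}

# Before execution: i considers possible every state matching its observations.
before = [s for s in states if indist(s, actual)]

# The program x := y is public: i updates each alternative by the assignment
# and keeps those still consistent with what it now sees.
actual2 = assign_x_gets_y(actual)
after = [assign_x_gets_y(s) for s in before
         if indist(assign_x_gets_y(s), actual2)]

print({s["x"] for s in before})  # i is uncertain about x beforehand
print({s["x"] for s in after})   # afterwards every alternative agrees on x
```

After the public assignment, every remaining alternative assigns x the value of y, so agent i knows the new value of x without ever seeing x itself.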
To motivate this question we present two well-known, but seemingly unrelated, scenarios, and show that the solutions to the problems described in these scenarios require answers to particular instances of our question above. We begin with a well-known story, first described in [3]:

Three cryptographers are sitting down to dinner at their favorite three-star restaurant. Their waiter informs them that arrangements have been made with the maître d'hôtel for the bill to be paid anonymously. One of the cryptographers might be paying for the dinner, or it might have been NSA. The three cryptographers respect each other's right to make an anonymous payment, but they wonder if NSA is paying. They resolve their uncertainty fairly by carrying out the following protocol: Each cryptographer flips an unbiased coin behind his menu, between him and the cryptographer on his right, so that only the two of them can see the
outcome. Each cryptographer then states aloud whether the two coins he can see (the one he flipped and the one his left-hand neighbor flipped) fell on the same side or on different sides. If one of the cryptographers is the payer, he states the opposite of what he sees. An odd number of differences uttered at the table indicates that a cryptographer is paying; an even number indicates that NSA is paying (assuming that the dinner was paid for only once). Yet if a cryptographer is paying, neither of the other two learns anything from the utterances about which cryptographer it is.

What are the essential features of the problem and its solution? The obvious answer is that we are looking for a cryptographic protocol that is provably correct with respect to some formal model of the task. The search for such a protocol is not related to our main question. However, we find the proposed solution very relevant to the problem we study. Note that in order to be convinced of the correctness of this solution, we must be able to reason about the evolution of the knowledge of the cryptographers during the execution of a commonly known protocol; what is more, the essential feature of this protocol is that the cryptographers cannot make complete and accurate observations of their environment. Our second example comes from [2]. Before describing the problem, we would like to remind the reader of the main idea of that paper. Roughly, the authors want to formalise the following intuition. Suppose that we have a task that must be performed by a robot. Then we can decide whether the robot can perform the task as follows. First, we try to determine the minimal knowledge required to perform the task, and then we try to determine whether the robot's sensors and reasoning abilities allow it to gain that knowledge. If we have a match, then the robot is potentially able to perform the task. Their formal treatment is based on the following example.
Two horizontal, perpendicular robotic arms must coordinate as follows. The first arm must push a hot object lengthwise across a table until the second arm is able to push it sideways so that it falls into a cooling bin. The length of the table is marked in feet, from 0 through 10. The object is initially placed at position 0 on the table. The second arm is able to push the object if it is anywhere in the region [3, 7]....

We consider two variants of the problem:
1. The arms share a controller. The controller has access to a sensor reporting the position of the object with error no greater than 1, i.e., if the object's current location is q then the reading can be anywhere in [q − 1, q + 1];
2. Same as above, except that the error bound is 4 rather than 1.

It is not hard to see that in the second case there is no protocol that performs the task, whereas in the first case there is. For example, a centralized protocol that deals with variant 1 is the following (where r is the current reading):
If r ≤ 4 then Move(arm 1) else Move(arm 2).

If we apply the authors' intuition to this problem, we may say that the minimal knowledge necessary to perform the task is to know when you are in a certain location (namely the region [3, 7]) within some reasonable error bound. It is obvious that in the first case the robot is able to acquire this knowledge, while in the second it is not. And again, it is obvious that in order to see that this answer is correct, we must say something about (the evolution of) the knowledge of an agent engaged in an algorithmic activity while not being able to make complete and correct observations of its environment. We show that the protocols described in both examples can be modeled in a uniform way.

Let us first model the solution to the dining cryptographers problem proposed by David Chaum. We name the three cryptographers 1, 2 and 3, respectively. Suppose that instead of coins, each pair of cryptographers (i, j), where i, j ∈ {1, 2, 3} and i ≠ j, is assigned a different variable c(i,j) that can take the boolean values 0 and 1. These values model the two sides of the coin shared between i and j, and the current value of the variable c(i,j) is visible only to i and j. Let each cryptographer i be assigned a private variable pi that is not visible to the rest. These three variables can take only the values 0 and 1. If a certain variable is set to 1, then the respective cryptographer is paying. We will assume that at most one of the variables p1, p2, p3 is set to 1. Next, we model the announcement made by each cryptographer in the following way. Let us associate a variable ai with each cryptographer i. Each ai is visible to all the cryptographers, and holds the value c(i,j) ⊕ c(i,k) if pi = 0, or the value 1 − (c(i,j) ⊕ c(i,k)) otherwise, where ⊕ stands for the exclusive or (xor) of the two values, and j and k are the two other cryptographers.
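The scheme just described relies on a parity property: each coin is seen by exactly two cryptographers, so under xor the coins cancel, and the parity of the announcements equals the parity of the paying bits. This can be checked exhaustively; the encoding below (writing the dishonest branch 1 − (c ⊕ c′) as an xor with pi) is our own sketch, not the paper's formal program:

```python
from itertools import product

# Exhaustive check: the coins cancel under xor, so the parity r of the
# announcements equals the parity of the paying bits.
for c12, c13, c23, payer in product((0, 1), (0, 1), (0, 1), (None, 1, 2, 3)):
    p = {i: int(i == payer) for i in (1, 2, 3)}
    # Over {0, 1}, "1 - b" is negation, i.e. xor with 1, so each announcement
    # is the xor of the two visible coins, flipped when the agent is paying.
    a1 = (c12 ^ c13) ^ p[1]
    a2 = (c12 ^ c23) ^ p[2]
    a3 = (c13 ^ c23) ^ p[3]
    r = a1 ^ a2 ^ a3
    assert r == (0 if payer is None else 1)
```

Every coin variable occurs in exactly two announcements, so it contributes twice to the xor and vanishes; what remains is exactly p1 ⊕ p2 ⊕ p3.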
In this way we model the two types of announcement a cryptographer can make, depending on whether he is paying or not. Finally, we introduce a variable r which holds the result of a1 ⊕ a2 ⊕ a3, i.e., the parity of the number of differences uttered at the table. Then r = 1 if and only if the number of differences is odd. Let us summarise. We have a set of variables V = {p1, p2, p3, c(1,2), c(1,3), c(2,3), a1, a2, a3, r}. We associate a subset V(i) ⊆ V with every cryptographer i, where i ∈ {1, 2, 3}. Each set V(i) represents the variables visible to the respective cryptographer. Therefore, we have:

V(1) = {p1, c(1,2), c(1,3), a1, a2, a3, r};
V(2) = {p2, c(1,2), c(2,3), a1, a2, a3, r};
V(3) = {p3, c(1,3), c(2,3), a1, a2, a3, r}.

All variables range over the set B = {0, 1}. Now we can represent the protocol described in [3] as a (non-deterministic) algorithm that changes the values of the variables in the appropriate way. For example:

1. if p1 = 0 then a1 := c(1,2) ⊕ c(1,3) else a1 := 1 − (c(1,2) ⊕ c(1,3));
2. if p2 = 0 then a2 := c(1,2) ⊕ c(2,3) else a2 := 1 − (c(1,2) ⊕ c(2,3));
3. if p3 = 0 then a3 := c(1,3) ⊕ c(2,3) else a3 := 1 − (c(1,3) ⊕ c(2,3));
4. r := a1 ⊕ a2 ⊕ a3;

The first three lines of the program say that if cryptographer i is not paying, i.e., the variable pi is set to 0, then i truthfully announces whether c(i,j) and c(i,k)
are different or not, i.e., ai is set to c(i,j) ⊕ c(i,k). If i is paying, i.e., pi is set to 1, then ai is set to the opposite value of c(i,j) ⊕ c(i,k). The final line sets the value of r to the xor of a1, a2, a3. Note that we assume that no more than one of the variables p1, p2, p3 is set to 1. This is common knowledge among the agents. We do not assume that they have no other knowledge besides the knowledge obtained from the observable variables. For example, they must be convinced that an odd number of differences implies that one of them is paying, i.e., they must know some of the properties of the xor function, and they implicitly know that no more than one of them is paying, that no one is lying, etc. In short, in order to model the dynamics of the relevant knowledge (in this case, the knowledge obtained from observation), we may have to assume a lot of relevant background knowledge. The only requirement we have is: the knowledge gained from observation does not contradict the background knowledge of the agents; i.e., in this case, none of the agents can consider it possible (based on the values of the observable variables only) that more than one of the variables pi has the value 1.

Now we show how to model our second example in this setting. The essential problem here is that the robot's sensor can detect its current position only within some error bound. Therefore, the problem concerns the relation between the position read by the sensor and the real position, i.e., between the current value of some observable variable that is within some error bound of the value of some unobservable variable. So, let x be a variable observed by an agent (intuitively, this variable stores the sensor readings), let y be a variable that is not observable (intuitively, y stores the real position), and let e be another visible variable (intuitively, e has some constant value that represents the error bound).
This means that we assume that the robot knows what its error bound is. We have the following simple program:

1. x := y − k ∪ x := y + k; where k ≤ e.

That is, x is non-deterministically assigned some value that differs from y by at most the error bound e. Now we may ask what a robot observing x can know about the value of y. Having established that the essential features of the scenario from [2] can be modeled in our framework, we will concentrate on the dining cryptographers problem to motivate the assumptions we make. Of course, these assumptions also hold for our second example. Let us address the evolution of the knowledge gained from observation by the three cryptographers during the execution of this program. Each step of the program induces some change in the knowledge of each cryptographer by changing the values of the relevant program variables. We can model the knowledge of all the cryptographers at a given point during the execution of the algorithm using the standard machinery of Kripke models. The underlying set of such a model consists of:
(W) All the assignments of values to the program variables considered possible by at least one of the cryptographers at this particular program step.

The epistemic relation modelling the knowledge of a cryptographer i must be such that:
- if i cannot distinguish between two valuations, then they must coincide on the values assigned to the variables in V(i);
- if a cryptographer i can distinguish two valuations at a particular step of the computation, then s(he) can distinguish their updated images at the next step of the computation.

If M is a model for our description of the protocol, then we would like it to satisfy properties like the following. Let n = ¬(p1 ∨ p2 ∨ p3) (i.e., the NSA paid):

M ⊨ (n → [π] ⋀i=1,2,3 Ki n) ∧ (¬n → [π] ⋀i=1,2,3 Ki ¬n)

saying that if the NSA paid, all cryptographers will know it afterwards, and if the NSA did not pay but a cryptographer did, this will become known as well. Of course, on top of this we need:

M ⊨ pi → [π] ⋀j≠i ¬Kj pi

Note that we do not say that if two valuations coincide on the values of the variables visible to i, then i cannot differentiate between them. This assumption would lead to counterintuitive scenarios. Let us summarise. A group of agents is engaged in an algorithmic activity. These agents can observe only a part of their environment, which is affected by this activity. Based on their knowledge of the algorithm and on the values of the observable variables, they can draw conclusions and update their knowledge at each step of the program. We assume that each variable can be assigned only finitely many values. To make things easier, we further assume that the possible values are just 0 and 1. During its execution, the algorithm can act on only finitely many variables of the agents' environment. The basic algorithmic steps are assignments of the form x := t, where t is a term of the language to be defined shortly, and tests.
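Returning to the sensor example for a moment: the knowledge an observer of x gains about the hidden position y can be computed directly, since y must lie within e of the reading. The sketch below (the function names and the 0..10 table are our own illustrative choices, not notation from [2]) reproduces the difference between error bounds 1 and 4:

```python
# Knowledge of the unobserved position y from a reading x with error bound e:
# y must lie in [x - e, x + e], intersected with the table positions 0..10.
def possible_positions(reading, e, table=range(0, 11)):
    return [y for y in table if abs(reading - y) <= e]

def knows_pushable(reading, e):
    """The controller *knows* arm 2 can push iff every possible y is in [3, 7]."""
    return all(3 <= y <= 7 for y in possible_positions(reading, e))

# With error bound 1 some readings certify the region [3, 7]...
assert any(knows_pushable(r, 1) for r in range(0, 11))
# ...with error bound 4 no reading does, so no protocol can rely on that
# knowledge: [r - 4, r + 4] never fits inside [3, 7].
assert not any(knows_pushable(r, 4) for r in range(0, 11))
```

This is exactly the knowledge-based reading of the two variants: the task is performable precisely when the required knowledge is attainable from the observable variable.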
The agents' knowledge at each particular step of the algorithm is modelled using Kripke models. The dynamics of the knowledge is modelled using suitably defined updates of the Kripke models.

3 Language and semantics

Let Ag = {a1, ..., am} be a set of agents and Var = {x1, ..., xn} a set of variables. We define ϕ ∈ L:

ϕ ::= ϕ0 | Vi x | ¬ϕ | ϕ ∧ ϕ | [τ]ϕ | Ki ϕ | Oi ϕ | □ϕ
where i ∈ Ag. Vi x says that agent i sees the value of x. Oi ϕ denotes that i observes that ϕ holds. Ki is the knowledge operator, and □ will be a universal modal operator: it enables us to express properties like □[τ](Ki(x = 0) ∧ ¬Kj(x = 0)): no matter what the actual valuation is, after execution of τ, agent i knows that the value of x is 0, while j does not know this. To define a Boolean expression ϕ0, we first define terms t. In this paper, terms will have values over {0, 1}; this is a rather arbitrary choice, but what matters here is that the domain is finite. Terms are defined as

t ::= 0 | 1 | x | t + t | t − t | t ⊕ t

VAR(t) denotes the set of variables occurring in t. Boolean expressions over terms are:

ϕ0 ::= t = t | t < t | ¬ϕ0 | ϕ0 ∧ ϕ0

Finally, we define programs:

τ ::= ϕ0? | x := t | τ ∪ τ | τ; τ

where x ∈ Var and ϕ0 is a Boolean expression. A valuation θ : Var → {0, 1} assigns a value to each variable. Let the set of valuations be Θ. We assume that θ is extended to a function θ : Ter → {0, 1} in a standard way, i.e., we assume a standard semantics ⊨B for which θ ⊨B ϕ0 is defined in terms of giving a meaning to +, −, ⊕ and <. We extend this semantics ⊨B to establish the truth of algebraic claims ϕ0, which in particular enables us to reason about such claims: {ϕ1, ϕ2, ...} ⊨B ϕ0 simply means that any valuation satisfying ϕ1, ... also satisfies ϕ0. Each agent i is able to read the value of some variables V(i) ⊆ Var. For two valuations θ1, θ2, we write θ1 ∼i θ2 if for all x ∈ V(i), θ1(x) = θ2(x).

Definition 1 (Epistemic Models). Given Var and Ag, we first define an epistemic S5-model for Ag with universal modality, or epistemic model for short, as M = ⟨W, R, V, f⟩ where
1. W is a non-empty set of states;
2. f : W → Θ assigns a valuation θ to each state;
3. R : Ag → 2^(W × W) assigns an accessibility relation to each agent; we write Ri uv rather than (u, v) ∈ R(i). Each R(i) is supposed to be an equivalence relation, and moreover, we assume that Ri uv implies that f(u) ∼i f(v);
4.
V : Ag → 2^Var keeps track of the set of variables V(i) that agent i can see.

If M = ⟨W, R, V, f⟩, we write w ∈ M for w ∈ W. For u, v ∈ M, we write u ∼i v for f(u) ∼i f(v). A pair (M, v) (with v ∈ M) is called a pointed model. Let EM denote the class of pointed epistemic models. We now define Rτ ⊆ EM × EM. This is a simple variation of pdl ([8]), for which we use assignments and tests as atomic programs, and sequential
composition and choice as composites. In a setting like ours, where we have knowledge, the test will behave as a public announcement ([1]). First, given a valuation θ ∈ Θ, define (x := t)(θ) ∈ Θ as follows:

((x := t)(θ))(y) = θ(y) if y ≠ x, and θ(t) otherwise.  (1)

Definition 2 (Assignment). Given (M, v) = (⟨W, R, V, f⟩, v), we define R(x:=t)(M, v)(M′, v′) iff (M′, v′) = (⟨W′, R′, V′, f′⟩, v′), where
1. W′ = {w′ | w ∈ W}
2. R′i v′u′ iff v′ ∼i u′ and Ri vu
3. V′ = V
4. f′(w′) = (x := t)(f(w)); i.e., if f(w) = θ, then f′(w′) = (x := t)(θ)

Definition 3 (Test). Given a pointed model (M, v), we now define the effect of a test ϕ0?. This time, we first specify the result of applying the test to the whole model M = ⟨W, R, V, f⟩: (ϕ0?)(M) = M′ = ⟨W′, R′, V′, f′⟩ where
1. W′ = {v′ | v ∈ W & M, v ⊨ ϕ0}
2. R′i v′u′ iff v′ ∼i u′ and Ri vu
3. V′ = V
4. f′(v′) = f(v)

We then say that Rϕ0?(M, v)(M′, v′) if (ϕ0?)(M) = M′ and M, v ⊨ ϕ0. Finally, we define the relations for sequential composition and choice ([8]):

Rτ1;τ2(M, v)(M′, v′) iff there is (M′′, v′′) such that Rτ1(M, v)(M′′, v′′) and Rτ2(M′′, v′′)(M′, v′).
Rτ1∪τ2(M, v)(M′, v′) iff either Rτ1(M, v)(M′, v′) or Rτ2(M, v)(M′, v′).

Definition 4 (Public Environment). Given a set of programs T, a Public Environment P = ⟨E, T⟩ consists of a set of pointed epistemic models E ⊆ EM closed under the programs τ ∈ T. Let M = ⟨W, R, V, f⟩ with (M, v) ∈ E. A triple P, M, v is called a state of the public environment, or a state for short. We define:

P, M, v ⊨ ϕ0 iff f(v) ⊨B ϕ0
P, M, v ⊨ Vi x iff x ∈ V(i)
P, M, v ⊨ ¬ϕ iff not P, M, v ⊨ ϕ
P, M, v ⊨ ϕ ∧ ψ iff P, M, v ⊨ ϕ and P, M, v ⊨ ψ
P, M, v ⊨ Ki ϕ iff for all u ∈ W: Ri vu implies P, M, u ⊨ ϕ
P, M, v ⊨ Oi ϕ iff for all w ∈ W: f(w) ∼i f(v) implies P, M, w ⊨ ϕ
P, M, v ⊨ □ϕ iff for all w ∈ W: P, M, w ⊨ ϕ
P, M, v ⊨ [τ]ϕ iff Rτ(M, v)(M′, v′) implies P, M′, v′ ⊨ ϕ
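Definitions 2 and 3 can be prototyped directly: states are valuations, each R_i starts out as pure indistinguishability on V(i), an assignment updates valuations while letting old epistemic distinctions survive, and a test discards states, as with a public announcement. The following sketch uses dicts and index pairs as stand-ins for f, V and R; it is our own illustration, not the paper's formal semantics:

```python
from itertools import product

V = {1: {"x"}, 2: {"y"}}                                  # V(i): what agent i sees
states = [dict(zip("xy", b)) for b in product((0, 1), repeat=2)]

def related(i, s, t):
    """Initial R_i: pure indistinguishability on V(i)."""
    return all(s[v] == t[v] for v in V[i])

def assign(model, var, term):
    """[var := term] (Definition 2): update valuations, keep old distinctions."""
    W, R = model
    W2 = [{**s, var: term(s)} for s in W]
    # R'_i v'u' iff v' ~_i u' in the new model and R_i vu held in the old one
    R2 = {i: {(a, b) for (a, b) in R[i]
              if all(W2[a][v] == W2[b][v] for v in V[i])} for i in R}
    return W2, R2

def apply_test(model, phi):
    """phi? (Definition 3): drop states refuting phi, like an announcement."""
    W, R = model
    keep = {a: n for n, a in enumerate(i for i, s in enumerate(W) if phi(s))}
    W2 = [W[a] for a in keep]
    R2 = {i: {(keep[a], keep[b]) for (a, b) in R[i] if a in keep and b in keep}
          for i in R}
    return W2, R2

R0 = {i: {(a, b) for a in range(len(states)) for b in range(len(states))
          if related(i, states[a], states[b])} for i in V}

W1, R1 = assign((states, R0), "x", lambda s: s["y"])       # execute x := y
assert all(W1[a]["y"] == W1[b]["y"] for (a, b) in R1[1])   # agent 1 now knows y

W2_, R2_ = apply_test((states, R0), lambda s: s["y"] == 1) # execute (y = 1)?
assert len(W2_) == 2                                       # only y = 1 survives
```

Note how the assignment clause mirrors the definition: a pair survives into R′ only if it was in R and the updated valuations still agree on the visible variables, which is exactly why agent 1 comes to know y after x := y.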
We write Mi ϕ for the dual of Ki ϕ, i.e., Mi ϕ = ¬Ki ¬ϕ. Also, Ôi ϕ will denote the dual of Oi ϕ. Note that the operator Oi is definable in terms of valuation descriptions and boxes. Let x1, ..., xm be the variables that agent i can observe (which can be expressed as Ψi = ⋀k≤m Vi xk ∧ ⋀j>m ¬Vi xj). Then Oi ϕ is equivalent to the following schema, where each ck ranges over the set {0, 1}:

Ψi ∧ (x1 = c1 ∧ x2 = c2 ∧ ... ∧ xm = cm) → □(x1 = c1 ∧ x2 = c2 ∧ ... ∧ xm = cm → ϕ)

Example 1. Figure 1 (left) shows a public environment for two variables x and y, where V(1) = {x} and V(2) = {y}. The program that is executed is (x = 1 ∨ y = 1)?; x := 0; y := 1. Note that in the final epistemic model it is common knowledge that x = 0 ∧ y = 1. Note, however, that we cannot identify the three states, despite the fact that the valuations of the variables are the same.

Fig. 1. Executing (x = 1 ∨ y = 1)?; x := 0; y := 1 (left) and x := y (right).

Example 2. Consider the model M from Figure 1 (right). Assume V(1) = {x}, and V(2) = V(3) = ∅. The following table summarises the change in knowledge from (M, 00) (first row) to (M′, 00) (second row) while executing the program x := y:

M, 00:  ¬K1(y = 0)   ¬K2(x = 0)   ¬K3(x = y)   ¬K3(x = 0)
M′, 00: K1(y = 0)    K2(x = 0)    K3(x = y)    ¬K3(x = 0)
Through the assignment x := y, agent 1 learns the value of y (because he reads x), agent 2 learns the value of x (because he knows the value of y), and agent 3, like all other agents, comes to know that x = y.

Example 3. We show under which conditions an agent i can learn the value of a variable. Consider the following program, where x and y are different variables:

α = ((y = 0?; x := t) ∪ (y = 1?; x := u))

The following are valid:

1. (Ki(y = 0) → [α]Ki(x = t)) ∧ (Ki(y = 1) → [α]Ki(x = u))
Knowing the condition for branching implies knowing which program is executed;
2. (Vi x ∧ Ki(t ≠ u)) → ([α]Ki(y = 0) ∨ [α]Ki(y = 1))
If an agent can read a variable x, and the value assigned to x depends on a variable y, then the agent knows retrospectively what y was;
3. (¬Vi y ∧ ¬Vi x ∧ ¬Ki(y = 0) ∧ ¬Ki(y = 1)) → [α](¬Ki(y = 0) ∧ ¬Ki(y = 1))
An agent who can see neither x nor y cannot deduce y's value from α.

Example 4. We want to swap the values of two variables in a public environment where the only spare variable is visible to an observer. Can we swap the variables without revealing their values? Formally, let Var = {x1, x2, x3}, Ag = {1}, and V(1) = {x3}. Informally, the designer of the program wants to ensure that agent 1 never learns the value of x1 or x2. Formally, for i ∈ {1, 2}, we can capture this in the following scheme:

χ = [π](¬K1(xi = 1) ∧ ¬K1(xi = 0))

Consider the following program π:

x3 := x1; x1 := x2; x2 := x3

Clearly, if M is the epistemic model that formalises this, we have M ⊭ χ: the first assignment copies x1 into the visible variable x3. But of course, π above is not the only solution to the problem of swapping variables. Now consider the following program π:

x1 := x1 + x2; x2 := x1 − x2; x1 := x1 − x2;

In this case, with M the epistemic model, we have M ⊨ χ, as desired.

Learning and Recall. The principle of recall (wrt.
a program τ) is

Ki[τ]ϕ → [τ]Ki ϕ  (2)

It is straightforward to verify that assignment and test satisfy (2), and moreover, that it is preserved by sequential composition and choice. However, now consider the converse of (2), which is often referred to as no-learning:

[τ]Ki ϕ → Ki[τ]ϕ  (3)

This principle is not valid, as we can see from Example 2. In that example, we have P, M, 00 ⊨ M1⟨x := y⟩(x = 1), but not P, M, 00 ⊨ ⟨x := y⟩M1(x = 1) (and hence [x := t]Ki ϕ → Ki[x := t]ϕ is not valid). Semantically, no-learning is

∀w, t (Ri vw & Rτ wt ⇒ ∃s (Rτ vs & Ri st))  (4)
We can phrase the failure of (4) in Example 2 as follows: in 00, agent 1 considers the state 01 possible, which the assignment x := y would map to 11. However, the assignment takes the state 00 to 00, and since 1 sees variable x, he no longer considers the state 11 as an alternative. Loosely formulated: through the assignment x := y in 00, agent 1 learns that 01 was not the current state. From the definition of assignment, it is easy to see that the following is valid:

⟨x := t⟩Mi ϕ ↔ (⟨x := t⟩Ôi ϕ ∧ Mi⟨x := t⟩ϕ)  (5)

Also, the test operator fails no-learning: again, in Example 2, we have P, M, 00 ⊨ M1⟨y = 1?⟩⊤, but we do not have P, M, 00 ⊨ ⟨y = 1?⟩M1⊤: by the fact that 00 does not survive the test y = 1?, agent 1 learns that it was not the real state.

Before revisiting the dining cryptographers, we mention some validities.

⊨ [x := t]Ki(x = t), ⊨ [x = t?]Ki(x = t), and ⊨ ⟨x := t⟩Ki(x = t)

Agents see the programs and are aware of their effects. An assignment x := t can always be executed, which is not true for a test x = t?.

⊨ Ki(t = u) → [x := t]Ki(x = u)

Knowing the value of a term implies knowing the value of a variable if it gets assigned that term.

Dining cryptographers revisited. We will now indicate how to model this scenario in such a way that it is possible to reason about lying participants, a situation where more than one of the cryptographers is paying, etc. We introduce just three new variables h1, h2, h3 and modify our previous algorithm slightly.

1. if h1 = 1 then p1 := p1 else p1 := 1 − p1;
2. if h2 = 1 then p2 := p2 else p2 := 1 − p2;
3. if h3 = 1 then p3 := p3 else p3 := 1 − p3;
4. if p1 = 0 then a1 := c(1,2) ⊕ c(1,3) else a1 := 1 − (c(1,2) ⊕ c(1,3));
5. if p2 = 0 then a2 := c(1,2) ⊕ c(2,3) else a2 := 1 − (c(1,2) ⊕ c(2,3));
6. if p3 = 0 then a3 := c(1,3) ⊕ c(2,3) else a3 := 1 − (c(1,3) ⊕ c(2,3));
7. r := a1 ⊕ a2 ⊕ a3

Notice the way we have modelled lying in the first three lines of the program.
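A quick sanity check of the modified program can be run in a few lines. The helper below is our own encoding (lines 1-3 flip the effective paying bit of a liar, written as Python expressions rather than the pdl program); it confirms that two effective payers cancel out under xor, and that a lying payer is indistinguishable from an honest non-payer as far as r is concerned:

```python
from itertools import product

def run(p, h, c12, c13, c23):
    """Return r for paying bits p, honesty bits h, and coin values."""
    eff = {i: p[i] if h[i] else 1 - p[i] for i in (1, 2, 3)}  # lines 1-3
    a1 = (c12 ^ c13) ^ eff[1]                                 # lines 4-6
    a2 = (c12 ^ c23) ^ eff[2]
    a3 = (c13 ^ c23) ^ eff[3]
    return a1 ^ a2 ^ a3                                       # line 7

honest = {1: 1, 2: 1, 3: 1}
for c in product((0, 1), repeat=3):
    assert run({1: 0, 2: 0, 3: 0}, honest, *c) == 0        # nobody pays
    assert run({1: 1, 2: 1, 3: 0}, honest, *c) == 0        # two payers collide
    assert run({1: 1, 2: 0, 3: 0}, {1: 0, 2: 1, 3: 1}, *c) == 0  # payer 1 lies
```

All three configurations in the loop yield r = 0, so from r alone they cannot be told apart, regardless of the coin values.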
If hi = 0 then the agent Ai will, in effect, behave contrary to what his paying variable indicates. In the original treatment of the problem, it is implicitly assumed that all the cryptographers are honest and at most one of them is paying. This (extremely) strong assumption can be made explicit in our framework. Let ϕ stand for the formula:

(h1 ∧ h2 ∧ h3) ∧ {(¬p1 ∧ ¬p2 ∧ ¬p3) ∨ (p1 ∧ ¬p2 ∧ ¬p3) ∨ (¬p1 ∧ p2 ∧ ¬p3) ∨ (¬p1 ∧ ¬p2 ∧ p3)}

If we stipulate that our initial epistemic models satisfy ϕ, then the original assumptions of [3] become common knowledge among the agents. And for the epistemic model M associated with our assumptions, we have the properties we wanted to confirm in Section 2, namely:

M ⊨ (n → [π] ⋀i=1,2,3 Ki n) ∧ (¬n → [π] ⋀i=1,2,3 Ki ¬n)
If we choose, however, to concentrate on the general case, where the motivation and honesty of the participants is unknown, then many of the shortcomings of this protocol can easily be seen. If two participants behave as if they have paid, then we have a collision, and the final result displayed in r will be 0, making this case indistinguishable from the case where no one is paying. Notice the words "behave as if they have paid". This behaviour can result from the fact that both of them are lying, or only one of them is lying, or both have paid. As long as the variable hi is visible only to Ai, these cases cannot be distinguished by the other two agents. Similar observations can be made in the case where only one or all of the cryptographers behave as if they have paid.

4 Axiomatization

Let L0 be the language without modal operators. The usual ([8]) dynamic axioms apply, although not to the full language: exceptions are the axioms Assign and test(τ).

Propositional and Boolean Component
- Prop: ⊢ ϕ if ϕ is a propositional tautology
- Bool: ⊢ ϕ0 if ⊨B ϕ0

Rules of Inference
- Modus Ponens: from ϕ and ϕ → ψ, infer ψ
- Necessitation: from ⊢ ϕ, infer ⊢ □ϕ

Epistemic and Universal Component
- IK: the S5 axioms for knowledge
- UB: the S5 axioms for □
- BO: □ϕ → Oi ϕ
- OK: Oi ϕ → Ki ϕ
- KV: (x = c ∧ Vi x) → Ki(x = c)

Visibility
- VD1: Vi x → [τ]Vi x
- VD2: ¬Vi x → [τ]¬Vi x
- VB1: Vi x → □Vi x
- VB2: ¬Vi x → □¬Vi x
- VK: (Vi x ∧ x = c) → Ki(x = c), for c ∈ {0, 1}

Dynamic Component
- Assign: [x := t]ϕ0 ↔ ϕ0[t/x]
- Func: [x := t]ϕ ↔ ⟨x := t⟩ϕ
- K(τ): [τ](ϕ → ψ) → ([τ]ϕ → [τ]ψ)
- union(τ): [τ ∪ τ′]ϕ ↔ ([τ]ϕ ∧ [τ′]ϕ)
- comp(τ): [τ; τ′]ϕ ↔ [τ][τ′]ϕ
- test(τ): [ϕ?]ϕ0 ↔ (ϕ → ϕ0)

Dynamic and Epistemic
- RL: ⟨x := t⟩Mi ϕ ↔ (⟨x := t⟩Ôi ϕ ∧ Mi⟨x := t⟩ϕ)

Dynamic and Universal
- ⟨τ⟩□ϕ → □⟨τ⟩ϕ
- ⟨x := t⟩□ϕ ↔ □⟨x := t⟩ϕ
- ⟨ϕ0?⟩□ϕ → □(ϕ0 → ⟨ϕ0?⟩ϕ)

Fig. 2. Axioms of kppe.

Axiom Assign is not valid for arbitrary ϕ, as the following example shows. First of all, we need to exclude formulas of the form Vi x: it is obvious that [x := y]Vi x does not imply Vi y. But even if we made an exception for Vi formulas, we would be left with undesired equivalences: take y for t.
If [x := y]k 1 (y = 0) holds and Ass were valid, this would imply that K 1 y = 0, which it should not (it might well be that agent 1 learned the value of y since he sees that x becomes 0).
Note that VK is only valid for the values 0 and 1, and not for arbitrary terms: we do not have ⊨ (Vi x ∧ x = t) → Ki(x = t) (take for instance t = z for a counterexample). Our completeness proof uses Theorem 1, whose proof follows immediately from the following equivalences:

[x := t]Vi x ↔ Vi x
[ϕ0?]Vi x ↔ (ϕ0 → Vi x)
[x := t]ϕ0 ↔ ϕ0[t/x]
[ϕ0?]ψ0 ↔ (ϕ0 → ψ0)
[α](ϕ ∧ ψ) ↔ ([α]ϕ ∧ [α]ψ)
[x := t]¬ϕ ↔ ¬[x := t]ϕ
[ϕ0?]¬ϕ ↔ (ϕ0 → ¬[ϕ0?]ϕ)
[x := t]□ϕ ↔ □[x := t]ϕ
[ϕ0?]□ϕ ↔ (ϕ0 → □[ϕ0?]ϕ)
[x := t]Ki ϕ ↔ (Ki[x := t]ϕ ∨ [x := t]Oi ϕ)
[ϕ0?]Ki ϕ ↔ (ϕ0 → Ki[ϕ0?]ϕ)

Theorem 1. Every formula is equivalent to one without dynamic operators.

Theorem 1 implies that every effect of a program is completely determined by the initial epistemic model. More precisely, it implies that for every formula ϕ there is a provably equivalent epistemic formula (using Vi and □).

Theorem 2. The logic kppe is sound and complete with respect to public environments.

5 Related Work & Conclusions

We have introduced a framework for reasoning about programs and valuations where knowledge is based on a hybrid semantics, using notions from interpreted systems and general S5 axioms. This is only a first step. Possible extensions are manifold. First, we think it is possible to include repetition (∗) as an operator on programs and still obtain a well-behaved logical system, although the technical details can become involved. There are several restrictions in our framework that may be worthwhile relaxing, such as allowing a richer language for tests, or not assuming that it is common knowledge which variables are seen by whom, or what the program under execution is. Those assumptions seem related, and removing them may well be a way to reason about Knowledge-Based Programs ([7]), where the programs are distributed over the agents, and where it would be possible to branch in a program depending on the knowledge of certain agents.

References
1. Baltag, A., Moss, L.S., Solecki, S.: The logic of common knowledge, public announcements, and private suspicions. In: Gilboa, I. (ed.) Proceedings of the 7th Conference on Theoretical Aspects of Rationality and Knowledge (TARK '98) (1998)
2. Brafman, R., Halpern, J.Y., Shoham, Y.: On the knowledge requirements of tasks. Artificial Intelligence 98 (1998)
3. Chaum, D.: The dining cryptographers problem: Unconditional sender and recipient untraceability. Journal of Cryptology 1(1) (1988)
4. van Ditmarsch, H.: The Russian cards problem. Studia Logica 75 (2003)
5. van Ditmarsch, H., van der Hoek, W., Kooi, B.: Dynamic epistemic logic with assignments. In: AAMAS 2005 (2005)
6. Fagin, R., Halpern, J.Y., Moses, Y., Vardi, M.Y.: Reasoning About Knowledge. The MIT Press, Cambridge, MA (1995)
7. Fagin, R., Halpern, J.Y., Moses, Y., Vardi, M.Y.: Knowledge-based programs. Distributed Computing 10(4) (1997)
8. Harel, D., Kozen, D., Tiuryn, J.: Dynamic Logic. The MIT Press, Cambridge, MA (2000)
9. Sietsma, F.: Model checking for dynamic epistemic logic with factual change. Tech. rep., UvA and CWI, Amsterdam (2007), sietsma/papers/mcdelfc.pdf