A logical approach to multilevel security of probabilistic systems


Distrib. Comput. (1998) 11: © Springer-Verlag 1998

James W. Gray, III (1), Paul F. Syverson (2)

(1) Department of Computer Science, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
(2) Center for High Assurance Computer Systems, Naval Research Laboratory, Washington, DC 20375, USA (syverson@itd.nrl.navy.mil)

Supported by grant HKUST 608/94E from the Hong Kong Research Grants Council. Supported by ONR. Correspondence to: P.F. Syverson.

Summary. We set out a modal logic for reasoning about multilevel security of probabilistic systems. This logic contains expressions for time, probability, and knowledge. Making use of the Halpern-Tuttle framework for reasoning about knowledge and probability, we give a semantics for our logic and prove it is sound. We give two syntactic definitions of perfect multilevel security and show that their semantic interpretations are equivalent to earlier, independently motivated characterizations. We also discuss the relations between these characterizations of security and their relative usefulness in security analysis.

Key words: Formal modeling – Verification – Knowledge – Security – Probabilistic systems

1 Introduction

Multilevel security is the aspect of computer security concerned with protecting information that is classified with respect to a multilevel hierarchy (e.g., UNCLASSIFIED, SECRET, TOP SECRET). A probabilistic system is a hardware or software system that makes probabilistic choices (e.g., by consulting a random number generator) during its execution. Such probabilistic choices are useful in a multilevel security context for introducing noise to reduce the rate of (or eliminate) illicit communication between processes at different classification levels. In this paper, we are concerned with definitions of perfect (information-theoretic) multilevel security in the sense that the definitions rule out all illicit communication without relying on any complexity-theoretic assumptions. That is, our model allows the system penetrators to have unlimited computational power; yet, our definitions are sufficient to ensure there can be no illicit communication. The systems we address can be depicted in the form shown in Fig. 1. This general form is intended to represent systems including physical hardware with hard-wired connections to other systems, an operating system kernel with connections to other processes provided by shared memory, and processes executing on a multiprocessor with connections to other systems provided by an interprocess communication (IPC) mechanism. There is a system, called Σ, that provides services to the other systems. For example, in the case of a multiuser relational database, Σ would store and control access to a set of relations. Σ is the system with respect to which we will be reasoning about multilevel security. There is a set of systems (labeled S 1, S 2, ..., S i in the figure), called the covert senders, that have access to secret information. These systems are called covert senders because they may attempt to covertly send secret information, via Σ, to other systems that are not authorized to see the information. It is these attempts with which we are concerned. As is commonly done in the literature, we will often refer to the covert senders as high systems (referring to the situation where the covert senders have access to highly classified information). We will also refer to the set of covert senders collectively as the high environment, denoted H. These systems are part of the environment in the sense that they are in the environment of the central system, Σ. There is a second set of systems (labeled R 1, R 2, ..., R j in the figure), called the covert receivers, that are not authorized to see the secret information that is available to the covert senders.
We will often refer to the covert receivers as low systems, or collectively as the low environment, denoted L. If the covert senders are able to use Σ to communicate information to the covert receivers, we will say that Σ has a covert channel, or equivalently, for our purposes, that Σ is insecure. A few notes are in order. 1. It is important to bear in mind that the threat that we are concerned with is not that the users (i.e., the human users) of the covert sender systems are attempting to send secret information to the covert receivers. We assume that if they wanted to, they could more easily pass notes in the park and entirely bypass Σ. Rather, we are concerned that the covert senders are actually trojan horses (i.e., they appear to be something that the user wants, but actually contain something else that is entirely undesirable to the user) and that these trojan

Fig. 1. The general form of a system (covert senders S 1, S 2, ..., S i and covert receivers R 1, R 2, ..., R j connected through the central system)

horses are attempting to send secret information to the covert receivers. This is a legitimate concern since system developers do not want to incur the cost of verifying every component of a conglomerate system with respect to multilevel security requirements. Ideally, only a small number of components in the system (e.g., in our case only Σ) have security requirements and thereby require verification; while the remaining components can be implemented by off-the-shelf hardware and software that are unverified with respect to security (and therefore may be trojan horses). We assume a worst case scenario, where all of the covert senders and covert receivers are trojan horses. Indeed, we assume that all of the trojan horses are cooperating in an attempt to transmit information from the covert senders to the covert receivers. 2. It is also important to bear in mind that in our intended application, the covert senders will not be able to communicate directly to the covert receivers (i.e., by bypassing Σ). Typically, there are software, hardware or other physical controls to prevent this. For example, nonbypassability is one of the well-known principles of a reference monitor (see [13]), which is one of the typical applications we have in mind. 3. Our model contrasts sharply with much other work on security (e.g., [31], [11]) in that we consider a set of untrusted agents (viz, the covert senders and receivers) that are connected via a trusted agent, whereas these other works consider a set of trusted agents connected via an untrusted agent. This difference in our model reflects the difference in the respective applications. The work of Meadows in [31] and Dolev et al. in [11] is intended for the analysis of a set of legitimate (and trusted) agents that are attempting to establish secure communication over an untrusted network.
In that work, the assumption is that the penetrator is able to subvert the network (i.e., the central component of the system), but not the trusted (lateral) agents. In contrast, our work is intended to be used to analyze a centralized server that serves a set of untrusted entities. Correspondingly, our assumption is that the penetrator may be able to subvert the untrusted (lateral) agents, but not the central server. 4. The fact that we have partitioned the set of systems external to Σ into two sets, high and low, may seem to indicate that we are limiting ourselves to two levels of information (e.g., SECRET and UNCLASSIFIED). However, this is not the case. In a more general setting, information is classified (users are cleared, resp.) according to a finite, partially ordered set (see, e.g., Denning's [9]); that is, there is a finite set of classification levels (clearance levels, resp.) that is ordered by a reflexive, transitive, and anti-symmetric relation, which we call dominates. (In fact this set forms a finite lattice.) A given user is permitted to observe a given piece of information only if the user's clearance dominates the classification of the information. In the case where there are more than two levels, a separate analysis would be performed for each level, x; in each analysis, the set of levels would be partitioned into those that are dominated by x (i.e., the low partition) and the set of levels that are not dominated by x (i.e., the high partition). Thus, we have lost no generality by restricting our attention to two levels. The motivation for reasoning about the probabilistic behavior of systems has appeared in examples and discussions of many authors (cf. [4, 17, 26, 28, 30, 42]). Essentially, the motivation is that it is possible for a probabilistic system to satisfy many existing definitions of security (e.g., Sutherland's Nondeducibility [40], McCullough's Restrictiveness [29], etc.) and still contain probabilistic covert channels.
Our long term goal is to develop a logic that can be used to reason about the multilevel security of a given system Σ. The logical definition of security in this paper delineates ideal security for probabilistic systems. As such it does not apply to real systems, which are too complex to be simultaneously ideally secure and adequately functional. Nonetheless, it is important to establish the ideal in order to know what is possible. Further, we view the present work as a step in the direction of practical verification of multilevel security of real probabilistic systems (presumably based on definitions of security that allow some limited information flow). In prior work ([18]), we gave a logic in which an information-theoretic definition of security, the first author's Probabilistic Noninterference (PNI) ([17]), was expressed as

K L (ϕ) → R L (ϕ)    (1)

where K L (ϕ) is intuitively regarded as "L knows ϕ" and R L (ϕ) is intuitively regarded as "L is permitted to know ϕ". Thus, Formula 1 is intuitively interpreted as "If L knows ϕ then L is permitted to know ϕ", or in other words, what L knows is a subset of what L is permitted to know.

This intuitively-appealing formula was proposed by Glasgow, MacEwen, and Panangaden [14] and further developed by Bieber and Cuppens [1, 2]. In other work [19] we extended their approach to a probabilistic framework, retaining the syntactic form of their definition of security, viz Formula 1. However, the knowledge operator (K L ) proposed by Bieber and Cuppens (and its probabilistic analog proposed by us) is nonstandard and rather unnatural. For example, what a subject knows does not change over time. In particular, in our probabilistic framework, subjects know the probability distribution over all of their future interactions, including all future outputs they will receive. This is in contrast with the intuitive notion of knowledge (as well as the standard formalizations of knowledge such as by Chandy and Misra [6] or Halpern [21]) wherein a subject can acquire knowledge as it interacts with its environment. Another disadvantage of the prior work of Glasgow et al., Bieber and Cuppens, and the present authors is that Formula 1 makes use of a permitted knowledge operator (R L ). Such an operator has no standard semantics and seems, by its very nature, to be application specific. For example, see [8] wherein permitted knowledge is essentially formalized as knowledge that is permitted, as defined in the present application. In the present work, we develop a new formalization of PNI using the framework of Halpern and Tuttle [22]. In this framework, the knowledge operator is given the standard semantics. Also, our new formalization does not make use of a permitted knowledge operator; it is therefore free of the nonstandard operators that were used in our previous formalization. Thus, the present paper can be viewed as superseding our prior work.
In another sense, the present work can be viewed as a novel application of Halpern and Tuttle's framework, since we instantiate their framework with an adversary (see Definition 2.1) fundamentally different from those described in [22]. The remainder of the paper is organized as follows. In Sect. 2 we set out our model of computation. In Sects. 3 and 4, we set out the syntax and semantics of our logic and in Sect. 5, we prove its soundness. In Sect. 6 we state our primary definition of security and prove that it is equivalent to Probabilistic Noninterference. In Sect. 7 we state our verification condition and show that it is equivalent to the Applied Flow Model. Finally, in Sect. 8, we give some conclusions of this work.

2 System model

In this section, we describe our system model. This is the model by which we will (in Sect. 4) give semantics to our logic. First, we describe the general system model, which is taken from Halpern and Tuttle [22]. The framework of Halpern and Tuttle builds on the work of Fagin and Halpern in [12]. It also encompasses earlier work of Ruspini in [35]. After giving the general system model, we tailor the model to our needs by imposing some additional structure on the model and (in Halpern and Tuttle's terminology) choosing the adversaries, resulting in our application-specific model.

2.1 General system model

We have a set of agents, P 1, P 2, ..., P n, each with its own local state. The global state is an n-tuple of the local agents' states.^1 A run of the system is a mapping of times to global states. We assume that time is discrete because we are dealing with security at the digital level of the system. We are not, for example, addressing security issues such as analog channels in hardware. Therefore, we will assume that times are natural numbers. The probabilities of moving among global states are represented in the model by means of labeled computation trees. The nodes of the trees represent global states. For any given node in a tree, the children of that node represent the set of global states that could possibly come next. Each arc from a node to one of its children is labeled with the probability of moving to that state. Thus, from any given node, the sum of the probabilities on its outgoing arcs must be one. We also assume the set of outgoing arcs is finite and that all arcs are labeled with nonzero probabilities. This final assumption can be viewed as a convention that if the probability of moving from state x to state y is zero, then state y is not included as a child of state x. Certain events in a system may be regarded as nonprobabilistic, while still being nondeterministic. The typical example occurs when a user is to choose an input, and in the analysis of the system we do not wish to assign a probability distribution to that choice; in such cases, we regard that choice as nonprobabilistic. All nonprobabilistic choices in the system are lumped into a single choice that is treated as being made by an adversary prior to the start of execution. Thus, after this choice is made, the system's execution is purely probabilistic. In Halpern and Tuttle's words, the nonprobabilistic choices have been "factored out". In the model of computation, each possible choice by the adversary corresponds to a labeled computation tree. In other words, a system is represented as a set of computation trees, each one corresponding to a different choice by the adversary. There is no indication how the adversary's choice is made, just that it is made once and for all, prior to the start of execution.

^1 Halpern and Tuttle also include the state of the environment as part of the global state. In their usage of the term, the environment is intended to capture everything relevant to the state of the system that cannot be deduced from the agents' local states [22, Sect. 2]. This typically includes messages in transit on the communication medium. However, we model such things as part of the covert senders' and receivers' local states; we therefore omit what they call the environment from our model. In contradistinction, we refer to everything external to Σ as "the environment"; viz, the covert senders and receivers constitute the environment.

2.2 Application-specific system model

In this section, we impose some additional structure on the general model described in the previous section. We fix the set of agents, fix our model and intuitions regarding communication, place some (environmental) constraints on the agents, and fix the set of choices available to the adversary.

Agents. As indicated in Fig. 1 and the surrounding discussion, we can limit our model to three agents: (1) the system

under consideration, denoted Σ, (2) the covert senders (or alternatively, the high environment), denoted H, and (3) the covert receivers (or alternatively, the low environment), denoted L. In the remainder of the paper, we will tacitly assume that the global system is comprised of these three agents. Model of communication. Our model of communication is similar to those of Bieber and Cuppens, Millen, and the first author (cf. [2], [32], and [17], respectively). We view Σ's interface as a collection of channels on which inputs and outputs occur. Since we consider the agent H (resp., L) to consist of all processing that is done in the high (resp., low) environment, including any communication mechanism that delivers messages to Σ, we will not need to model messages in transit or, in Halpern and Tuttle's terminology, the state of the environment; rather, these components of the global state will be included as part of H's and L's state. In many systems of interest, the timing of events is of concern. (See Lampson's [25] for an early description of covert communication channels that depend on timing.) In particular, some covert communication channels depend on a clock being shared between the covert senders and receivers. Such channels are typically called timing channels; see Wray's [43] for examples and discussion. To handle such cases, we take the set of times (i.e., the domain of the runs) to be the ticks of the best available shared clock.^2 Events occurring between two ticks are regarded as occurring on the latter tick. This is sufficient for the purposes of our analysis because, as far as the covert senders and receivers are concerned, this is the most accurate information available. Also note that if the timing of certain events (wrt the best available shared clock) is nonprobabilistic, we can consider the various possibilities to be choices that are made by the adversary and factor out that nondeterminism as discussed by Halpern and Tuttle [22].
Since the mechanisms of high-level^3 I/O routines may introduce covert channels (see, e.g., McCullough's [28, Sect. 2.3]), we take a very low-level view of I/O. In particular, we assume one input and one output per channel per unit time (where times are chosen according to the above considerations). That is, for each time we have a vector of inputs (one for each channel) and a vector of outputs (one for each channel). If a given agent produces no new data value at a given time, it may in fact serve as a signal in a covert channel exploitation. Hence, we treat such "no new data" events as inputs. Similarly, we do not consider the possibility that the system can prevent an input from occurring. Rather, the system merely chooses whether to make use of the input or ignore it. Any acknowledgement that an input has been received is considered to be an output. Given these considerations, we fix our model of communication as follows. We assume the following basic sets of symbols, all nonempty:

C: a finite set of input/output channel names, c 1, ..., c k;
I: representing the set of input values;
O: representing the set of output values;
N+: representing the set of positive natural numbers. This set will be used as our set of times.

Since there is one input per channel at each time, we will be talking about the vector of inputs that occurs at a given time. We will denote the set of all vectors of inputs by I[C]. Typical input vectors will be denoted a, a′, a 1, ... ∈ I[C]. Similarly, we will denote the set of all output vectors by O[C] and typical output vectors will be denoted b, b′, b 1, ... ∈ O[C].

^2 A shared clock may be an explicit clock supplied by Σ, e.g., the system clock, or it may be a clock manufactured by the covert senders and receivers for their own purposes; see [43] for examples and discussion.
^3 In this context, high-level means highly abstract rather than highly classified.
Now, to talk about the history of input vectors up to a given time, we introduce notation for traces. We will denote the set of input traces of length k by I C,k. Mathematically, I C,k is a shorthand for the set of functions from C × {1, 2, ..., k} to I. Therefore, for a trace α ∈ I C,k, we will denote the single input on channel c ∈ C at time k′ ≤ k by α(c, k′). We will also need to talk about infinite traces of inputs. For this we use the analogous notation I C,∞, which is shorthand for the set of functions from C × N+ to I. Similarly, we will denote the set of output traces of length k by O C,k and the set of infinite output traces by O C,∞. Naturally, for an output trace β, β(c, k) represents the output on channel c at time k. There will be situations where we want to talk about vectors or traces of inputs or outputs on some subset of the channels, S ⊆ C. In such cases we will use the natural generalizations of the above notations, viz, I[S], I S,k, I S,∞, etc. Environmental constraints. Any given agent will be able to see the inputs and outputs on a subset of the system's I/O channels. We make this precise by restricting vectors and traces to subsets of C. Given an input vector a ∈ I[C] and a set of channels S ⊆ C, we define a↾S ∈ I[S] to be the input vector on channels in S such that (a↾S)(c) = a(c) for all c ∈ S. Similarly, given an input trace α ∈ I C,k and a set of channels S ⊆ C, we define α↾S ∈ I S,k to be the input trace for channels in S such that (α↾S)(c, k′) = α(c, k′) for all c ∈ S and all k′ ≤ k. We assume that the set of low channels, denoted L, is a subset of C. Intuitively, L is the set of channels that the low environment, L, is able to directly see. In particular, L is able to see both the inputs and the outputs that occur on channels in L. In practice, there will be some type of physical or procedural constraints on the agent L to prevent it from directly viewing the inputs and outputs on channels in C − L.
For example, those channels may represent wires connected to workstations that are used for processing secret data. In this case, the secret workstations might be located inside a locked and guarded room. In addition, periodic checks of the wires might be made to ensure that there are no wiretaps on them. In this way, L is prevented from directly viewing the data that passes over the channels in C − L. On the other hand, we place no constraints on the set of channels that H is able to see. In particular, we make the worst-case assumption that H is able to see all inputs and outputs on all channels.
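The restriction operation on vectors and traces can be sketched in a few lines of Python. The dict-based encoding of I[C] and I C,k below is ours, chosen only for illustration; it is not part of the paper's formalism:

```python
# Vectors and traces as dicts (our encoding, for illustration only):
# an input vector a in I[C] maps each channel name to a value; an input
# trace alpha in I^{C,k} maps (channel, time) pairs with 1 <= time <= k.

def restrict_vector(a, S):
    """a restricted to S: (a|S)(c) = a(c) for all c in S."""
    return {c: v for c, v in a.items() if c in S}

def restrict_trace(alpha, S):
    """alpha restricted to S: keeps exactly the entries (c, k') with c in S."""
    return {(c, k): v for (c, k), v in alpha.items() if c in S}

C = {"h", "l"}                            # channel names
L = {"l"}                                 # the low channels
a = {"h": 1, "l": 0}                      # an input vector in I[C]
alpha = {("h", 1): 1, ("l", 1): 0,        # an input trace of length 2
         ("h", 2): 0, ("l", 2): 1}

print(restrict_vector(a, L))              # the low view: {'l': 0}
print(restrict_trace(alpha, C - L))       # the high-only part of the trace
```

The low environment's view of an execution is exactly such a restriction to L; the Secure Environment Assumption below is stated in terms of these restrictions.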

The above considerations are consistent with what we've called the Secure Environment Assumption in previous work [17, 18]. In the present paper, this assumption is made precise in terms of our definition of the adversary, to be given next. The adversary. As discussed above, in Halpern and Tuttle's framework, all nonprobabilistic choices are factored out of the execution of the system by fixing an adversary at the start of execution. To make use of this framework, we must define the set of possible adversaries from which this choice is made. The adversary in our application is the pair of agents, H and L, that are attempting to send data from the high environment across the system Σ to the low environment. To be fully general, we model these agents as mixed strategies (in the game-theoretic sense). That is, at each point in the execution of the system the strategy gives the probability distribution over the set of next possible inputs, conditioned on the history up to the current point. In the next section, we present an example to motivate the need for such generality. Before doing that, we make the adversary precise with the following two definitions.

Definition 2.1 An adversary is a conditional probability function, A(a | α, β) (where a ∈ I[C] and, for some time k, α ∈ I C,k and β ∈ O C,k). Intuitively, the adversary describes the environment's conditional distribution on the next input vector, given the previous history of inputs and outputs. By saying that A(a | α, β) is a conditional probability function, we require that

0 ≤ A(a | α, β) ≤ 1, and Σ_a A(a | α, β) = 1.

In fact, it is trivial to define a conditional probability mass function corresponding to A where a, α, and β are replaced with the values of the corresponding random variables [33]. Such a conditional probability mass function can be defined in terms of the probability measure µ A given in Definition 4.4 below.
Definition 2.2 We say that an adversary A satisfies the Secure Environment Assumption with respect to a set of channels L ⊆ C iff there exists a pair of conditional probability functions H and L such that for all a ∈ I[C], all k ∈ N+, all α ∈ I C,k, and all β ∈ O C,k,

A(a | α, β) = H(a↾(C − L) | α, β) · L(a↾L | α↾L, β↾L)

(where · denotes real multiplication). The Secure Environment Assumption can be intuitively understood as saying that the input on channels in (C − L) at time k is (conditionally) statistically independent of the input on channels in L at time k, and the input on channels in L at time k depends only on previous inputs and outputs on channels in L. For the remainder of this paper, we will assume that all adversaries from which the initial choice is made satisfy the Secure Environment Assumption. Later in this section, we describe how a given adversary A and the description of a particular system, Σ, are used to construct the corresponding computation tree T A. Since there is one tree for each possible adversary, we can think of the set of trees as being indexed by the adversaries. Therefore, we will often write T A, T A′, T Ai, etc. It is clear that for an adversary A that satisfies the Secure Environment Assumption (wrt L), the conditional probability functions H and L are unique. Further, given H and L, there is a unique adversary, A, for which H and L are the probability functions that satisfy the corresponding constraint. There is therefore no ambiguity in writing T H,L, T H′,L′, etc. when we want to refer to the components of the adversary individually. Note that our definition of an adversary is not meant to be as general as the adversary discussed by Halpern and Tuttle. (In fact, Halpern and Tuttle give no structure at all to their adversary.) Rather, our adversary is application-specific; in particular, it is for reasoning about multilevel security of probabilistic systems and is not designed to be used outside that domain.
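To make Definition 2.2 concrete, here is a toy numerical instance in Python. The one-step, empty-history simplification and the specific distributions are our own, not the paper's:

```python
from itertools import product

# Toy instance of the Secure Environment Assumption (Definition 2.2) for a
# single time step with empty history: C = {h, l}, low channels L = {l},
# binary inputs. H_ is H's conditional distribution on the high input;
# L_ is L's conditional distribution on the low input.
H_ = {0: 0.7, 1: 0.3}
L_ = {0: 0.5, 1: 0.5}

# An adversary satisfying the assumption factors as
# A(a) = H_(a restricted to C-L) * L_(a restricted to L):
A = {(hi, lo): H_[hi] * L_[lo] for hi, lo in product((0, 1), (0, 1))}

# A is a conditional probability function: values in [0, 1], summing to one.
assert all(0.0 <= p <= 1.0 for p in A.values())
assert abs(sum(A.values()) - 1.0) < 1e-12

# The factorization makes the high and low inputs conditionally independent.
for (hi, lo), p in A.items():
    assert p == H_[hi] * L_[lo]
print(A)
```

Any joint distribution that did not factor this way, for example one where the low input copies the high input, would violate the assumption: the low environment's next input may depend only on the low history.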
On the other hand, this particular adversary represents a novel application of Halpern and Tuttle's framework. In Halpern and Tuttle's examples, the adversary represents one or both of two things: the initial input to the system; and the schedule according to which certain events (e.g., processors taking steps) occur. In contrast, our adversary does not represent a given input to the system. Rather, it represents a mixed strategy for choosing the inputs to the system. In some sense, we can think of this as a generalization on the first item above; however, our application still fits within the framework set out by Halpern and Tuttle. The state of the system. At any given point, P, in any given computation tree, T A, there should be a well-defined state of the system. For our purposes, the state includes the following information. 1. All inputs and outputs that have occurred on all channels up to the current time. 2. The adversary. In [22], Halpern and Tuttle make the assumption that all points in all trees are unique. They suggest (and we adopt) the following idea to ensure that this is true. The state encodes the adversary. That is, all nodes in tree T A encode A. Note that we do not assume that any given agent knows the adversary; just that it is somehow encoded in the state. We can think of the high part of the adversary, H, as being encoded in the high environment and the low part, L, as being encoded in the low environment. 3. Additional components of the global state represent the internal state of Σ. For example, in describing Σ, it is often convenient to use internal state variables. The state of these variables can be thought of as a vector of values, one value for each state variable. Thus, the internal state, when it exists, will be denoted c, and the history of internal states will be denoted γ. Computation trees.
Now that we have set out the possible states of the system (i.e., the points of computations), we can talk about the construction of the computation trees.

For each reachable point, P, we assume that Σ's probability distribution on outputs is given. For example, this can be given by a conditional probability distribution, O(b, c | α, β, γ), where α, β, and γ give the history (up through some time k) of inputs, outputs, and internal states, respectively, c is a vector of internal state variables (i.e., the internal system state at time k + 1), and b is the vector of outputs produced by the system (at time k + 1). Given O(b, c | α, β, γ) and the adversary A, we can construct the corresponding computation tree by starting with the initial state of the system (i.e., the point at the root of the tree with empty histories of inputs, outputs, etc.) and iteratively extending points as follows. Let P be a point in the tree with internal system history γ, input history α, and output history β. We will make P′ a child of P iff

1. P′ is formed from P by modifying the internal system state to c and extending P's input history (output history, resp.) with a (b, resp.); and
2. both O(b, c | α, β, γ) and A(a | α, β) are positive.

In such cases, we label the arc from P to P′ with O(b, c | α, β, γ) · A(a | α, β); i.e., the system, Σ, and the environment, A, make their choices independently. Runs of the system. A run of the system is an infinite sequence of states along a path in one of the computation trees. When we want to talk about the particular run, ρ, and time, k, at which a point P occurs, we will denote the point by the pair (ρ, k). Further, if we wish to talk about the various components of the run, i.e., the trace of the inputs, α, outputs, β, or other variables, γ, we will denote the run by (α, β, γ) and denote the point, P, by (α, β, γ, k). For a given tree, T, we denote the set of runs (i.e., infinite sequences of states), formed by tracing a path from the root, by runs(T). For security applications we are concerned with information flow into and out of the system rather than with information in the system per se.
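The iterative construction just described can be sketched as follows. The toy adversary and system here (a uniform input source and a one-step delay that echoes the previous input) are hypothetical, and internal state is omitted so that the system reduces to O(b | α, β):

```python
# Sketch of building a labeled computation tree (single channel, binary
# values, no internal state). Points are pairs (alpha, beta) of input and
# output histories, encoded as tuples.

def A(a, alpha, beta):
    """Toy adversary: the next input is uniform, independent of history."""
    return 0.5

def O(b, alpha, beta):
    """Toy system: outputs the previous input (0 at the first step)."""
    prev = alpha[-1] if alpha else 0
    return 1.0 if b == prev else 0.0

def children(point):
    """Labeled arcs out of a point; zero-probability arcs are omitted."""
    alpha, beta = point
    arcs = []
    for a in (0, 1):
        for b in (0, 1):
            p = O(b, alpha, beta) * A(a, alpha, beta)  # independent choices
            if p > 0:
                arcs.append((p, (alpha + (a,), beta + (b,))))
    return arcs

root = ((), ())
arcs = children(root)
# From any node, the probabilities on the outgoing arcs sum to one.
assert abs(sum(p for p, _ in arcs) - 1.0) < 1e-12
print(arcs)
```

Iterating `children` on each new point grows the tree level by level, exactly the extension step above; a different adversary function A would yield a different tree T A.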
Thus, though our system model is adequate to represent internal states and traces thereof, in subsequent sections it will be adequate to represent systems entirely in terms of input and output. In particular, system behavior can be represented by O(b | α, β) rather than O(b, c | α, β, γ).

3 Syntax

In this section we set out our formal language and use it to describe two simple systems. Then we give the axioms and rules of our logic.

3.1 Formation rules

To describe the operation of the system under consideration (viz, Σ), we use a variant of Lamport's Raw Temporal Logic of Actions (RTLA) [24]. (Roughly speaking, Raw Temporal Logic of Actions (RTLA) is the same as Lamport's Temporal Logic of Actions (TLA) without the treatment of stuttering [24]. Since we are not, in this paper, concerned with refinement, we omit the considerations of stuttering and use RTLA.) The primary difference is that we
This is because Σ is the system under consideration (i.e., with respect to which we are reasoning about security). We have no mechanism (and no need) to specify communication between agents not including the system under consideration.
- Primed state variables (e.g., c_in′) are terms. (These represent the value of the variable in the next state.)
- We use standard operators among terms (e.g., + and · for addition and multiplication, respectively), with parentheses for grouping subterms, to form composite terms.
- An atomic formula is an equation or inequality among terms.
- For any formula ϕ, □ϕ is a formula (to be read intuitively as "always ϕ"). We build up composite formulae, in the usual recursive fashion, using ¬, ∧, ∨, and →.
- For any nonmodal formula ϕ (i.e., a formula that does not contain any knowledge or temporal operators), and for any subject S ⊆ C, Pr_S(ϕ) is a real-valued term. Intuitively, Pr_S(ϕ) represents the subjective probability that S assigns to the formula ϕ, that is, the probability of ϕ given the previous history of inputs and outputs on channels in S. We refer to Pr_C(ϕ) (where C is the set of all communication channels) as the objective probability of ϕ, since it represents the probability of ϕ given all available information, i.e., the unbiased probability of ϕ.

To specify and reason about our security properties of interest, we also add a set of modal operators on formulae: K_1, ..., K_n, representing knowledge for each subject (represented by the subscript of the operator). Therefore, we add the following additional formation rule to our syntax.

- For any formula ϕ, and for any subject S ⊆ C, K_S(ϕ) (representing that S knows ϕ) is a formula.

Note that this and previous rules are mutually recursive; so we can express, e.g., that S always knows that x = 5.
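The mutual recursion of these formation rules can be mirrored directly in code. Below is a minimal illustrative sketch; the class names and encoding are ours, not part of the paper's formalism:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Eq:              # atomic formula: an equation between a term and a value
    lhs: str           # a state variable, e.g. "c_in", or a primed "c_in'"
    rhs: object

@dataclass(frozen=True)
class Box:             # "always": for any formula phi, Box(phi) is a formula
    phi: object

@dataclass(frozen=True)
class K:               # K_S(phi): subject S (a set of channels) knows phi
    S: frozenset
    phi: object

# Because the rules are mutually recursive, modalities nest freely:
# "S always knows that x = 5" becomes
always_knows = Box(K(frozenset({"c"}), Eq("x", 5)))
```

Frozen dataclasses compare by structure, which is convenient if one later wants to manipulate sets of formulae.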

3.2 Examples

We now give two simple examples of how to describe systems in our language. Ultimately, we will have sufficient formal machinery to show that one of these systems is secure and the other is not; however, here we simply set them out formally. These descriptions are meant to give the reader an intuitive feel for the meaning of expressions in the language. Precise meanings will be given in Sect. 4. Also, the second of these examples will motivate our choice to model adversaries as strategies.

Example 3.1 The first example is a simple encryption box that uses a one-time pad [10]. It has two channels, high and low. At each tick of the system clock, it inputs a 0 or 1 on the high channel and outputs a 0 or 1 on the low channel. The low output is computed by taking the exclusive or (XOR) (denoted ⊕) of the high input and a randomly generated bit.

Note that we are modeling only the sender's (encrypting) side of a one-time pad system. Thus, issues such as how the random bit string is distributed to, and used by, the receiver's (decrypting) side are out of the scope of this specification.

It is well known that the XOR of a data stream with an independent uniformly distributed bit stream results in an output stream that is uniformly distributed. Therefore, we can describe the encryption box as follows. Let C = {h, l}, I = {0, 1}, and O = {0, 1}. Then, the system is specified by the following formula.

□( Pr_C(l_out′ = 0) = 0.5 ∧ Pr_C(l_out′ = 1) = 0.5 )

In this formula, l_out is a state variable representing the output on the low channel, l. Therefore, l_out′ is the output on l at the next time. Further, Pr_C(l_out′ = 0) denotes the probability that the output on l is a 0 at the next time. Hence, the entire formula says that at all times, the probability of Σ producing a one (1) on the next clock tick is equal to the probability of producing a zero (0), which is equal to 0.5.
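Operationally, the behavior this specification describes can be sketched in a few lines of Python. This is a toy illustration under our own naming, not part of the formalism:

```python
import random

def encryption_box(high_inputs, rng):
    """Sender's side of the one-time pad: at each tick, output on the low
    channel the XOR of the high input with a fresh uniformly random bit."""
    return [h ^ rng.randrange(2) for h in high_inputs]

# Regardless of the high input stream, each low output is 0 or 1 with
# probability 0.5, matching Pr_C(l_out' = 0) = Pr_C(l_out' = 1) = 0.5.
low = encryption_box([1, 1, 0, 1], random.Random(0))
```

Since the pad bit is fresh and independent at every tick, the low observer's view is statistically identical for every possible high input stream.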
Note that we have not specified the probability distribution over inputs, since this constitutes environment behavior rather than system behavior.

Example 3.2 The second example is an insecure version of the simple encryption box. Shannon [37] gives an early description of this system. As in the first example, the system computes the exclusive or of the current high input and a randomly generated bit and outputs that value on the low channel at each time. However, in this system, the randomly generated bit used at any given tick is actually generated and output on the high output channel during the previous tick of the clock. This can be expressed in our formalism as follows. Let C = {h, l}, I = {0, 1}, and O = {0, 1}. The following formula specifies the system.

□( Pr_C(h_out′ = 0) = 0.5 ∧ Pr_C(h_out′ = 1) = 0.5 ∧ l_out′ = h_out ⊕ h_in′ )

Note that in the third conjunct, h_out is unprimed, indicating that the output on l at the next time is the exclusive or of the current output on h with the next input on h.

Now note that if the high agent ignores its output, this system acts exactly as the system from the previous example (and can be used for perfect encryption). In particular, suppose we were to model an adversary as an input string (the input to be provided by the high agent). Then, it is straightforward to prove that for any adversary (i.e., any high input string) fixed prior to the start of execution, the output to low will be uniformly distributed and, in fact, will contain no information about the high input string.

However, the bit that will be used as the one-time pad at time k is available to the high agent at time k − 1. Therefore (due to the algebraic properties of exclusive or, viz, x ⊕ x ⊕ y = y), the high agent can use this information to counteract the encryption. In particular, the high agent can employ a (game-theoretic) strategy to send any information it desires across the system to the low agent.
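This covert channel is easy to simulate. A minimal sketch of the insecure box driven by a strategic high agent, under our own naming:

```python
import random

def covert_channel(message, rng):
    """Insecure box of Example 3.2 driven by a strategic high agent.
    The pad bit for tick k+1 (h_out at tick k) is visible to the high agent
    one tick early, so choosing h_in(k+1) = h_out(k) XOR b_k makes the
    system's output l_out(k+1) = h_out(k) XOR h_in(k+1) equal to b_k."""
    low_outputs = []
    h_out = rng.randrange(2)              # pad bit, revealed to high a tick early
    for b in message:
        h_in = h_out ^ b                  # the high agent's strategic input
        low_outputs.append(h_out ^ h_in)  # system: l_out' = h_out XOR h_in'
        h_out = rng.randrange(2)          # next pad bit
    return low_outputs

# The random pad cancels out: the low agent receives the message verbatim.
assert covert_channel([1, 0, 1, 1, 0], random.Random(0)) == [1, 0, 1, 1, 0]
```

No matter which random bits the box draws, the XOR cancellation makes the channel noiseless, which is precisely why modeling the adversary as a fixed input string would miss this attack.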
For example, suppose the high agent wishes to send a sequence of bits, b_1, b_2, .... We'll denote the high input (resp., output) at time k by h_in(k) (resp., h_out(k)). The appropriate strategy for the high agent is as follows. The high agent chooses its input for time k + 1 as

h_in(k + 1) = h_out(k) ⊕ b_k.

Thus, the output to low at time k + 1, denoted l_out(k + 1), is computed as follows.

l_out(k + 1) = h_out(k) ⊕ h_in(k + 1)     [by the system description]
             = h_out(k) ⊕ h_out(k) ⊕ b_k   [by the high strategy]
             = b_k                          [by the properties of ⊕]

Thus, by employing the correct strategy, the high agent can noiselessly transmit an arbitrary message over Σ to the low agent. This, of course, motivates our choice to model adversaries as strategies, rather than, e.g., input strings.

We now have some sense of the formal language, with the exception of the knowledge operator K_S. As previously mentioned, this operator will be used to formalize the security properties that interest us. We will illustrate that use in a later section. For now we mention that in security analyses it is typical to assume that system users (and penetrators) know how the system works (i.e., its specifications are not secret); we make such assumptions explicit using our knowledge operator: in particular, if the system specification is given by a formula ϕ, we will assume that for every subject S, K_S(ϕ).

3.3 The Logic

We now give the axioms of our logic. In the following, we will use ϕ and ψ to refer to formulae of our language.

Propositional reasoning. All instances of tautologies of propositional logic.

Temporal reasoning. The following are standard axioms for temporal reasoning about discrete systems. The logic they constitute is generally called S4.3Dum. (See Goldblatt [16] for details.) We have labelled the axioms with their historical names. Let ϕ and ψ be formulae of our language.

K (□ϕ ∧ □(ϕ → ψ)) → □ψ

T □ϕ → ϕ
4 □ϕ → □□ϕ
L □(□ϕ → ψ) ∨ □(□ψ → ϕ)
Dum □(□(ϕ → □ϕ) → ϕ) → (◇□ϕ → ϕ)

◇ϕ can be interpreted roughly as saying that at some point ϕ is true. Formally, it is viewed as notational shorthand: for all formulae ϕ, ◇ϕ = ¬□¬ϕ. K guarantees that the temporal operator respects modus ponens. Each of the other axioms captures a feature of time that we desire. 4 gets us transitivity. T guarantees that we don't run out of time points (seriality) and that temporal references include the present. L guarantees that all points in time are connected (linearity). And Dum guarantees that time is discrete. (Between any two points in time there are at most finitely many other points; see Goldblatt [16] for further discussion.)

Real number axioms. Standard field and order axioms for the real numbers (to apply to members of R and function terms with range R). We will not enumerate these axioms. (See any elementary real analysis book for enumeration, e.g., [27] or [34].)

Epistemic reasoning. The (nonredundant) axioms of the Lewis system S5 (cf. Chellas [7] or Goldblatt [16]) apply to the knowledge operators (K_S). As for the temporal axioms, we give the axioms their historical names. Let S be a subject, and let ϕ and ψ be formulae of our language.

K (K_S(ϕ) ∧ K_S(ϕ → ψ)) → K_S(ψ) (Knowledge respects modus ponens.)
T K_S(ϕ) → ϕ (What one knows is true.)
5 ¬K_S(ϕ) → K_S(¬K_S(ϕ)) (If you don't know something, then you know that you don't know it.)

Random variable axioms. The standard requirements for random variables (in the probability-theoretic sense).

PM (Positive Measure) For any formula ϕ and any subject S, Pr_S(ϕ) ≥ 0. (The probability of any event is greater than or equal to zero.)
NM (Normalized Measure) For any channel c and any subject S, Σ_{a∈I} Pr_S(c_in = a) = 1 and Σ_{b∈O} Pr_S(c_out = b) = 1. (The probabilities of all possibilities sum to one.)

Additional axioms. Since our logic contains three different modalities, we need some axioms to describe the interactions among them.
The following are not intended to be complete in any sense; they are merely sufficient for the present purposes.

K□ For any formula ϕ and any subject S, K_S(□ϕ) → □(K_S ϕ). (If S knows something is always true, then S always knows it's true.)
KPr For any formula ϕ, any subject S, and any real number r, K_S(Pr_C(ϕ) = r) → K_S(Pr_S(ϕ) = r). (If S knows the objective probability of ϕ, then S knows its subjective probability of ϕ and the two probabilities are the same.)

The above are all of our axioms. We now give the rules of our logic, both of which are standard.

MP. (Modus Ponens) From ϕ and ϕ → ψ infer ψ.
Nec. (Necessitation) This rule applies to both of our modal operators: □ and K_S. (It is called necessitation because it was originally applied to a necessity operator.)
From ⊢ ϕ infer □ϕ.
From ⊢ ϕ infer K_S(ϕ).

Note that in the above, ⊢ ϕ indicates a derivation of ϕ from the axioms alone, rather than from a set of premises. (Derivations are defined below.) Thus, in the case of the knowledge operator (and analogously for □), Nec says that if ϕ is a theorem (derivable without any premises) then all subjects know ϕ.

We can now define formal derivations.

Definition 3.3 Let Γ be a finite set of formulae of our language. A finite sequence of formulae ϕ_1, ϕ_2, ϕ_3, ..., ϕ_n is called a derivation (of ϕ_n from Γ) iff each ϕ_k (k = 1, ..., n) satisfies one of the following:

- ϕ_k ∈ Γ;
- ϕ_k is an axiom;
- ϕ_k follows from some theorem by Nec;
- for some i, j < k, ϕ_k results from ϕ_i and ϕ_j by MP.

We write Γ ⊢ ϕ to indicate a derivation of ϕ from Γ, and we write ⊢ ϕ to indicate a derivation of ϕ from the axioms alone.

This completes our statement of the formal system.

4 Semantics

In the previous section we presented a syntactic system. So far we have only intuitive meanings to attach to this formalism.
In this section we provide semantics for our system in terms of the Halpern-Tuttle framework and our application-specific model set out in Sect. 2.

4.1 Semantic model

A model M is a tuple of the form

⟨R, +, ·, ≤, W, T, C, I, O, v, κ_1, ..., κ_{|P(C)|}⟩

Here, R and its operations and ordering relation give us the real numbers; W is the set of points (i.e., global states or "worlds"); T is the set of labeled computation trees (with nodes from W); C, I, and O are the sets of channels, possible inputs, and possible outputs, respectively; v is the assignment function, which assigns semantic values to syntactic expressions at each point (values of v at a particular point P will be indicated by the projection v_P); and the κ_i's are knowledge accessibility relations, one for each subject S. Essentially, two points are accessible for a given subject if that subject cannot distinguish between those two points (i.e., the subject does not know which of the points he is in). We describe these accessibility relations precisely

in the next section. In the remainder of this paper we will generally denote the accessibility relation corresponding to subject S by κ_S.

In assigning meaning to our language, it is of fundamental importance to associate a probability space with each labeled computation tree. In particular, for each labeled computation tree T_A we will construct a sample space of runs, R_A, an event space, X_A (i.e., those subsets of R_A to which a probability can be assigned), and a probability measure µ_A that assigns probabilities to members of X_A. Our construction of this probability space is quite natural and standard (see, e.g., Seidel [36] as well as [22] for two instances). We will not go into detail explaining the basic concepts of probability and measure theory here (cf. [20] or [39]).

Definition 4.1 For a labeled computation tree T_A, the associated sample space R_A is the set of all infinite paths starting from the root of T_A.

Definition 4.2 For any sample space R_A, a set e ⊆ R_A is called a generator iff it consists of the set of all traces with some common finite prefix. Intuitively, generators are probability-theoretic events corresponding to finite traces.

Definition 4.3 For any sample space R_A, we define the event space, X_A, to be the (unique) field of sets generated by the set of all generators. That is, X_A is the smallest subset of P(R_A) that contains all of the generators and is closed under countable union and complementation.

Definition 4.4 We define the probability measure, µ_A, on X_A in the standard way. Suppose e is a generator corresponding to the finite prefix given by (ρ, k). Then, µ_A(e) is defined as the product of the transition probabilities from the root of the tree, along the path ρ, up to time k. Further, it is well known that there is a unique extension of µ_A to the entire event space (cf. [20]).

We will be rather abusive in the use of our probability measures µ_A.
In particular, when we have a finite set of points, x, we will write µ_A(x) to denote the probability (as assigned by µ_A) of passing through one of the points in x. Technically, this is wrong, since µ_A is defined for sets of runs, not for sets of points. However, the mapping between the two is extremely natural: the set of runs corresponding to a set of points is the set of all runs that pass through those points. Further, by the construction of our probability spaces, all sets of runs corresponding to finite sets of points are measurable. Therefore, there is no danger in this abuse of notation, and it greatly simplifies our presentation.

4.2 Assignment function

Given the above semantic model, the main technical question we need to address in assigning meaning to formulae in our logic is: for a given subject at a given point in its execution (i.e., at a given node in a given computation tree), what sample space should be used in evaluating the probability that subject assigns to a given formula? As discussed by Halpern and Tuttle, after choosing these sample spaces, assigning meaning to probability formulae is straightforward. Further, assigning meanings to nonprobability formulae will be done in the standard ways, so that too will be straightforward.

We denote the sample space for subject S at point P by S_{S,P}. Our approach in assigning these sample spaces is discussed by Halpern and Tuttle, where they describe it as "correspond[ing] to what decision theorists would call an agent's posterior probability" [22, Sect. 6]. In particular, we choose S_{S,P} to be the set of points within tree(P) that have the same history of inputs and outputs on channels in S as occur on the path to point P. Essentially, this means that S's probability space takes into account all inputs and outputs that S has seen up to the current point; S does not forget anything it has seen. More precisely, we have the following definitions.
Definition 4.5 Let S ⊆ C be a subject and let ρ_1 = (α_1, β_1, γ_1) and ρ_2 = (α_2, β_2, γ_2) be two runs (not necessarily in the same tree). We say that ρ_1 and ρ_2 have the same S-history up to time k if and only if⁶

∀i, 1 ≤ i ≤ k, ∀c ∈ S, α_1(c, i) = α_2(c, i) ∧ β_1(c, i) = β_2(c, i)

Definition 4.6 Let S ⊆ C be a subject and let P_1 = (ρ_1, k_1) and P_2 = (ρ_2, k_2) be two points (not necessarily in the same tree). We say that P_1 and P_2 have the same S-history if and only if the following two conditions hold.

1. k_1 = k_2.
2. ρ_1 and ρ_2 have the same S-history up to time k_1.

Definition 4.7 Since points are unique even across trees, for a given point P, there is no ambiguity in referring to the tree that contains P. In the following, we will use tree(P) to denote that tree.

Definition 4.8 Let S ⊆ C be a subject and P be a point; the sample space for S at point P is given by

S_{S,P} = { P′ | tree(P′) = tree(P) and P′ and P have the same S-history }

Now, for a given point P, we will assign truth values to temporal formulae ϕ at that point. In addition, we assign values to variables, for example the input on a channel, at that point. The assignment function that does both of these is denoted by v_P. To define v_P, we will need to assign truth values to formulae containing primed variables. Therefore we will also define functions v_{(P_1,P_2)} (where P_1 and P_2 are points and we

⁶ In other settings, we might also consider the possibility that a subject S has internal state variables and could use these to make finer distinctions between points. However, in our application, all of the internal processing of the relevant subjects (viz, H and L) is encoded in the adversary and is thus factored out of the computation tree. We therefore do not lose any needed generality in making this definition.
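Definitions 4.5 through 4.8 translate almost verbatim into code. A minimal sketch, with points encoded as (per-channel input trace, per-channel output trace, time) under our own naming; the tree-membership condition of Definition 4.8 is assumed to be handled by the caller supplying only points of P's tree:

```python
def same_S_history(S, p1, p2):
    """Definitions 4.5-4.6: two points have the same S-history iff their
    times are equal and they agree on all inputs and outputs, up to that
    time, on every channel in S."""
    (a1, b1, k1), (a2, b2, k2) = p1, p2
    if k1 != k2:
        return False
    return all(a1[c][:k1] == a2[c][:k1] and b1[c][:k1] == b2[c][:k1]
               for c in S)

def sample_space(S, P, tree_points):
    """Definition 4.8: S_{S,P} is the set of points of P's tree that S
    cannot distinguish from P."""
    return [Q for Q in tree_points if same_S_history(S, P, Q)]

# Two points agreeing on channel l but differing on channel h:
P = ({"h": [0, 1], "l": [0, 0]}, {"h": [1, 1], "l": [0, 1]}, 2)
Q = ({"h": [1, 1], "l": [0, 0]}, {"h": [0, 0], "l": [0, 1]}, 2)
```

With these two points, a subject seeing only channel l cannot distinguish P from Q, while a subject seeing channel h can, which is exactly the sense in which S_{S,P} captures S's partial view.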


First-Order Logic. Chapter Overview Syntax Chapter 10 First-Order Logic 10.1 Overview First-Order Logic is the calculus one usually has in mind when using the word logic. It is expressive enough for all of mathematics, except for those concepts

More information

The Importance of Being Formal. Martin Henz. February 5, Propositional Logic

The Importance of Being Formal. Martin Henz. February 5, Propositional Logic The Importance of Being Formal Martin Henz February 5, 2014 Propositional Logic 1 Motivation In traditional logic, terms represent sets, and therefore, propositions are limited to stating facts on sets

More information

Foundations of Mathematics MATH 220 FALL 2017 Lecture Notes

Foundations of Mathematics MATH 220 FALL 2017 Lecture Notes Foundations of Mathematics MATH 220 FALL 2017 Lecture Notes These notes form a brief summary of what has been covered during the lectures. All the definitions must be memorized and understood. Statements

More information

Stochastic Histories. Chapter Introduction

Stochastic Histories. Chapter Introduction Chapter 8 Stochastic Histories 8.1 Introduction Despite the fact that classical mechanics employs deterministic dynamical laws, random dynamical processes often arise in classical physics, as well as in

More information

Lecture 14 - P v.s. NP 1

Lecture 14 - P v.s. NP 1 CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanford.edu) February 27, 2018 Lecture 14 - P v.s. NP 1 In this lecture we start Unit 3 on NP-hardness and approximation

More information

An Epistemic Characterization of Zero Knowledge

An Epistemic Characterization of Zero Knowledge An Epistemic Characterization of Zero Knowledge Joseph Y. Halpern, Rafael Pass, and Vasumathi Raman Computer Science Department Cornell University Ithaca, NY, 14853, U.S.A. e-mail: {halpern, rafael, vraman}@cs.cornell.edu

More information

COS433/Math 473: Cryptography. Mark Zhandry Princeton University Spring 2017

COS433/Math 473: Cryptography. Mark Zhandry Princeton University Spring 2017 COS433/Math 473: Cryptography Mark Zhandry Princeton University Spring 2017 Previously on COS 433 Takeaway: Crypto is Hard Designing crypto is hard, even experts get it wrong Just because I don t know

More information

Examples: P: it is not the case that P. P Q: P or Q P Q: P implies Q (if P then Q) Typical formula:

Examples: P: it is not the case that P. P Q: P or Q P Q: P implies Q (if P then Q) Typical formula: Logic: The Big Picture Logic is a tool for formalizing reasoning. There are lots of different logics: probabilistic logic: for reasoning about probability temporal logic: for reasoning about time (and

More information

First-Degree Entailment

First-Degree Entailment March 5, 2013 Relevance Logics Relevance logics are non-classical logics that try to avoid the paradoxes of material and strict implication: p (q p) p (p q) (p q) (q r) (p p) q p (q q) p (q q) Counterintuitive?

More information

A Mathematical Theory of Communication

A Mathematical Theory of Communication A Mathematical Theory of Communication Ben Eggers Abstract This paper defines information-theoretic entropy and proves some elementary results about it. Notably, we prove that given a few basic assumptions

More information

Logic. Introduction to Artificial Intelligence CS/ECE 348 Lecture 11 September 27, 2001

Logic. Introduction to Artificial Intelligence CS/ECE 348 Lecture 11 September 27, 2001 Logic Introduction to Artificial Intelligence CS/ECE 348 Lecture 11 September 27, 2001 Last Lecture Games Cont. α-β pruning Outline Games with chance, e.g. Backgammon Logical Agents and thewumpus World

More information

First Order Logic: Syntax and Semantics

First Order Logic: Syntax and Semantics CS1081 First Order Logic: Syntax and Semantics COMP30412 Sean Bechhofer sean.bechhofer@manchester.ac.uk Problems Propositional logic isn t very expressive As an example, consider p = Scotland won on Saturday

More information

Modal Logic. UIT2206: The Importance of Being Formal. Martin Henz. March 19, 2014

Modal Logic. UIT2206: The Importance of Being Formal. Martin Henz. March 19, 2014 Modal Logic UIT2206: The Importance of Being Formal Martin Henz March 19, 2014 1 Motivation The source of meaning of formulas in the previous chapters were models. Once a particular model is chosen, say

More information

Propositional Logic Review

Propositional Logic Review Propositional Logic Review UC Berkeley, Philosophy 142, Spring 2016 John MacFarlane The task of describing a logical system comes in three parts: Grammar Describing what counts as a formula Semantics Defining

More information

Propositional Logic: Part II - Syntax & Proofs 0-0

Propositional Logic: Part II - Syntax & Proofs 0-0 Propositional Logic: Part II - Syntax & Proofs 0-0 Outline Syntax of Propositional Formulas Motivating Proofs Syntactic Entailment and Proofs Proof Rules for Natural Deduction Axioms, theories and theorems

More information

Prefixed Tableaus and Nested Sequents

Prefixed Tableaus and Nested Sequents Prefixed Tableaus and Nested Sequents Melvin Fitting Dept. Mathematics and Computer Science Lehman College (CUNY), 250 Bedford Park Boulevard West Bronx, NY 10468-1589 e-mail: melvin.fitting@lehman.cuny.edu

More information

Propositional Logic Arguments (5A) Young W. Lim 11/29/16

Propositional Logic Arguments (5A) Young W. Lim 11/29/16 Propositional Logic (5A) Young W. Lim Copyright (c) 2016 Young W. Lim. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version

More information

Doxastic Logic. Michael Caie

Doxastic Logic. Michael Caie Doxastic Logic Michael Caie There are at least three natural ways of interpreting the object of study of doxastic logic. On one construal, doxastic logic studies certain general features of the doxastic

More information

A logical reconstruction of SPKI

A logical reconstruction of SPKI A logical reconstruction of SPKI Joseph Y. Halpern Cornell University Dept. of Computer Science Ithaca, NY 14853 halpern@cs.cornell.edu http://www.cs.cornell.edu/home/halpern Ron van der Meyden University

More information

Completeness Results for Memory Logics

Completeness Results for Memory Logics Completeness Results for Memory Logics Carlos Areces Santiago Figueira Sergio Mera Abstract Memory logics are a family of modal logics in which standard relational structures are augmented with data structures

More information

Nondeterministic finite automata

Nondeterministic finite automata Lecture 3 Nondeterministic finite automata This lecture is focused on the nondeterministic finite automata (NFA) model and its relationship to the DFA model. Nondeterminism is an important concept in the

More information

Logical Inference. Artificial Intelligence. Topic 12. Reading: Russell and Norvig, Chapter 7, Section 5

Logical Inference. Artificial Intelligence. Topic 12. Reading: Russell and Norvig, Chapter 7, Section 5 rtificial Intelligence Topic 12 Logical Inference Reading: Russell and Norvig, Chapter 7, Section 5 c Cara MacNish. Includes material c S. Russell & P. Norvig 1995,2003 with permission. CITS4211 Logical

More information

CONSTRUCTION OF THE REAL NUMBERS.

CONSTRUCTION OF THE REAL NUMBERS. CONSTRUCTION OF THE REAL NUMBERS. IAN KIMING 1. Motivation. It will not come as a big surprise to anyone when I say that we need the real numbers in mathematics. More to the point, we need to be able to

More information

Anonymity and Information Hiding in Multiagent Systems

Anonymity and Information Hiding in Multiagent Systems Anonymity and Information Hiding in Multiagent Systems Joseph Y. Halpern Kevin R. O Neill Department of Computer Science Cornell University {halpern, oneill}@cs.cornell.edu April 19, 2004 Abstract We provide

More information

7. Propositional Logic. Wolfram Burgard and Bernhard Nebel

7. Propositional Logic. Wolfram Burgard and Bernhard Nebel Foundations of AI 7. Propositional Logic Rational Thinking, Logic, Resolution Wolfram Burgard and Bernhard Nebel Contents Agents that think rationally The wumpus world Propositional logic: syntax and semantics

More information

Information Flow on Directed Acyclic Graphs

Information Flow on Directed Acyclic Graphs Information Flow on Directed Acyclic Graphs Michael Donders, Sara Miner More, and Pavel Naumov Department of Mathematics and Computer Science McDaniel College, Westminster, Maryland 21157, USA {msd002,smore,pnaumov}@mcdaniel.edu

More information

Bidirectional Decision Procedures for the Intuitionistic Propositional Modal Logic IS4

Bidirectional Decision Procedures for the Intuitionistic Propositional Modal Logic IS4 Bidirectional ecision Procedures for the Intuitionistic Propositional Modal Logic IS4 Samuli Heilala and Brigitte Pientka School of Computer Science, McGill University, Montreal, Canada {sheila1,bpientka}@cs.mcgill.ca

More information

Canonical models for normal logics (Completeness via canonicity)

Canonical models for normal logics (Completeness via canonicity) 499 Modal and Temporal Logic Canonical models for normal logics (Completeness via canonicity) Marek Sergot Department of Computing Imperial College, London Autumn 2008 Further reading: B.F. Chellas, Modal

More information

Chapter 4: Computation tree logic

Chapter 4: Computation tree logic INFOF412 Formal verification of computer systems Chapter 4: Computation tree logic Mickael Randour Formal Methods and Verification group Computer Science Department, ULB March 2017 1 CTL: a specification

More information

Reading 11 : Relations and Functions

Reading 11 : Relations and Functions CS/Math 240: Introduction to Discrete Mathematics Fall 2015 Reading 11 : Relations and Functions Instructor: Beck Hasti and Gautam Prakriya In reading 3, we described a correspondence between predicates

More information

Bisimulation for conditional modalities

Bisimulation for conditional modalities Bisimulation for conditional modalities Alexandru Baltag and Giovanni Ciná Institute for Logic, Language and Computation, University of Amsterdam March 21, 2016 Abstract We give a general definition of

More information

Logics of Rational Agency Lecture 3

Logics of Rational Agency Lecture 3 Logics of Rational Agency Lecture 3 Eric Pacuit Tilburg Institute for Logic and Philosophy of Science Tilburg Univeristy ai.stanford.edu/~epacuit July 29, 2009 Eric Pacuit: LORI, Lecture 3 1 Plan for the

More information

Foundations of Artificial Intelligence

Foundations of Artificial Intelligence Foundations of Artificial Intelligence 7. Propositional Logic Rational Thinking, Logic, Resolution Wolfram Burgard, Maren Bennewitz, and Marco Ragni Albert-Ludwigs-Universität Freiburg Contents 1 Agents

More information

On the Complexity of the Reflected Logic of Proofs

On the Complexity of the Reflected Logic of Proofs On the Complexity of the Reflected Logic of Proofs Nikolai V. Krupski Department of Math. Logic and the Theory of Algorithms, Faculty of Mechanics and Mathematics, Moscow State University, Moscow 119899,

More information

Foundations of Artificial Intelligence

Foundations of Artificial Intelligence Foundations of Artificial Intelligence 7. Propositional Logic Rational Thinking, Logic, Resolution Joschka Boedecker and Wolfram Burgard and Bernhard Nebel Albert-Ludwigs-Universität Freiburg May 17, 2016

More information

CHRISTIAN-ALBRECHTS-UNIVERSITÄT KIEL

CHRISTIAN-ALBRECHTS-UNIVERSITÄT KIEL INSTITUT FÜR INFORMATIK UND PRAKTISCHE MATHEMATIK A Constraint-Based Algorithm for Contract-Signing Protocols Detlef Kähler, Ralf Küsters Bericht Nr. 0503 April 2005 CHRISTIAN-ALBRECHTS-UNIVERSITÄT KIEL

More information

PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY

PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY PERFECT SECRECY AND ADVERSARIAL INDISTINGUISHABILITY BURTON ROSENBERG UNIVERSITY OF MIAMI Contents 1. Perfect Secrecy 1 1.1. A Perfectly Secret Cipher 2 1.2. Odds Ratio and Bias 3 1.3. Conditions for Perfect

More information

Outline. Overview. Syntax Semantics. Introduction Hilbert Calculus Natural Deduction. 1 Introduction. 2 Language: Syntax and Semantics

Outline. Overview. Syntax Semantics. Introduction Hilbert Calculus Natural Deduction. 1 Introduction. 2 Language: Syntax and Semantics Introduction Arnd Poetzsch-Heffter Software Technology Group Fachbereich Informatik Technische Universität Kaiserslautern Sommersemester 2010 Arnd Poetzsch-Heffter ( Software Technology Group Fachbereich

More information

Propositional Logic Arguments (5A) Young W. Lim 11/30/16

Propositional Logic Arguments (5A) Young W. Lim 11/30/16 Propositional Logic (5A) Young W. Lim Copyright (c) 2016 Young W. Lim. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version

More information

Today. Next week. Today (cont d) Motivation - Why Modal Logic? Introduction. Ariel Jarovsky and Eyal Altshuler 8/11/07, 15/11/07

Today. Next week. Today (cont d) Motivation - Why Modal Logic? Introduction. Ariel Jarovsky and Eyal Altshuler 8/11/07, 15/11/07 Today Introduction Motivation- Why Modal logic? Modal logic- Syntax riel Jarovsky and Eyal ltshuler 8/11/07, 15/11/07 Modal logic- Semantics (Possible Worlds Semantics): Theory Examples Today (cont d)

More information

Notes on Modal Logic

Notes on Modal Logic Notes on Modal Logic Notes for Philosophy 151 Eric Pacuit January 25, 2009 These short notes are intended to supplement the lectures and text ntroduce some of the basic concepts of Modal Logic. The primary

More information

Majority Logic. Introduction

Majority Logic. Introduction Majority Logic Eric Pacuit and Samer Salame Department of Computer Science Graduate Center, City University of New York 365 5th Avenue, New York 10016 epacuit@cs.gc.cuny.edu, ssalame@gc.cuny.edu Abstract

More information

Pairing Transitive Closure and Reduction to Efficiently Reason about Partially Ordered Events

Pairing Transitive Closure and Reduction to Efficiently Reason about Partially Ordered Events Pairing Transitive Closure and Reduction to Efficiently Reason about Partially Ordered Events Massimo Franceschet Angelo Montanari Dipartimento di Matematica e Informatica, Università di Udine Via delle

More information

Lecture Notes on Software Model Checking

Lecture Notes on Software Model Checking 15-414: Bug Catching: Automated Program Verification Lecture Notes on Software Model Checking Matt Fredrikson André Platzer Carnegie Mellon University Lecture 19 1 Introduction So far we ve focused on

More information

Modal Logics. Most applications of modal logic require a refined version of basic modal logic.

Modal Logics. Most applications of modal logic require a refined version of basic modal logic. Modal Logics Most applications of modal logic require a refined version of basic modal logic. Definition. A set L of formulas of basic modal logic is called a (normal) modal logic if the following closure

More information

Introduction to Metalogic

Introduction to Metalogic Introduction to Metalogic Hans Halvorson September 21, 2016 Logical grammar Definition. A propositional signature Σ is a collection of items, which we call propositional constants. Sometimes these propositional

More information

A Little Deductive Logic

A Little Deductive Logic A Little Deductive Logic In propositional or sentential deductive logic, we begin by specifying that we will use capital letters (like A, B, C, D, and so on) to stand in for sentences, and we assume that

More information

Mathematical Logic Prof. Arindama Singh Department of Mathematics Indian Institute of Technology, Madras. Lecture - 15 Propositional Calculus (PC)

Mathematical Logic Prof. Arindama Singh Department of Mathematics Indian Institute of Technology, Madras. Lecture - 15 Propositional Calculus (PC) Mathematical Logic Prof. Arindama Singh Department of Mathematics Indian Institute of Technology, Madras Lecture - 15 Propositional Calculus (PC) So, now if you look back, you can see that there are three

More information

Dynamic Noninterference Analysis Using Context Sensitive Static Analyses. Gurvan Le Guernic July 14, 2007

Dynamic Noninterference Analysis Using Context Sensitive Static Analyses. Gurvan Le Guernic July 14, 2007 Dynamic Noninterference Analysis Using Context Sensitive Static Analyses Gurvan Le Guernic July 14, 2007 1 Abstract This report proposes a dynamic noninterference analysis for sequential programs. This

More information

1. Propositional Calculus

1. Propositional Calculus 1. Propositional Calculus Some notes for Math 601, Fall 2010 based on Elliott Mendelson, Introduction to Mathematical Logic, Fifth edition, 2010, Chapman & Hall. 2. Syntax ( grammar ). 1.1, p. 1. Given:

More information

Towards a Practical Secure Concurrent Language

Towards a Practical Secure Concurrent Language Towards a Practical Secure Concurrent Language Stefan Muller and Stephen Chong TR-05-12 Computer Science Group Harvard University Cambridge, Massachusetts Towards a Practical Secure Concurrent Language

More information

Tutorial on Mathematical Induction

Tutorial on Mathematical Induction Tutorial on Mathematical Induction Roy Overbeek VU University Amsterdam Department of Computer Science r.overbeek@student.vu.nl April 22, 2014 1 Dominoes: from case-by-case to induction Suppose that you

More information