A Probabilistic Language based upon Sampling Functions


Sungwoo Park, Frank Pfenning
Computer Science Department, Carnegie Mellon University

Sebastian Thrun
Computer Science Department, Stanford University

ABSTRACT

As probabilistic computations play an increasing role in solving various problems, researchers have designed probabilistic languages that treat probability distributions as primitive datatypes. Most probabilistic languages, however, focus only on discrete distributions and have limited expressive power. In this paper, we present a probabilistic language, called λ○, which uniformly supports all kinds of probability distributions: discrete distributions, continuous distributions, and even those belonging to neither group. Its mathematical basis is sampling functions, i.e., mappings from the unit interval (0.0, 1.0] to probability domains. We also briefly describe the implementation of λ○ as an extension of Objective CAML and demonstrate its practicality with three applications in robotics: robot localization, people tracking, and robotic mapping. All experiments have been carried out with real robots.

Categories and Subject Descriptors: D.3.2 [Language Classifications]: Specialized application languages

General Terms: Languages, Experimentation

Keywords: Probabilistic language, Probability distribution, Sampling function, Robotics

POPL'05, January 12-14, 2005, Long Beach, California, USA. Copyright 2005 ACM.

1. INTRODUCTION

As probabilistic computations play an increasing role in solving various problems, researchers have designed probabilistic languages to facilitate their modeling [11, 7, 30, 23, 26, 15, 21]. A probabilistic language treats probability distributions as primitive datatypes and abstracts from their representation schemes. As a result, it enables programmers to concentrate on how to formulate probabilistic computations at the level of probability distributions rather than representation schemes. The translation of such a formulation in a probabilistic language usually produces concise and elegant code.

A typical probabilistic language supports at least discrete distributions, for which there exists a representation scheme sufficient for all practical purposes: a set of pairs consisting of a value in the probability domain and its probability. We can use such a probabilistic language for problems involving only discrete distributions. For those involving non-discrete distributions, however, we usually use a conventional language for the sake of efficiency, assuming a specific kind of probability distribution (e.g., Gaussian distributions) or choosing a specific representation scheme (e.g., a set of weighted samples). For this reason, there has been little effort to develop probabilistic languages whose expressive power is beyond discrete distributions.

Our work aims to develop a probabilistic language supporting all kinds of probability distributions: discrete distributions, continuous distributions, and even those belonging to neither group.
Furthermore we want to draw no distinction between different kinds of probability distributions, both syntactically and semantically, so that we can achieve a uniform framework for probabilistic computation. Such a probabilistic language can have a significant practical impact, since once formulated at the level of probability distributions, any probabilistic computation can be directly translated into code.

The main idea in our work is that we can specify any probability distribution by answering "How can we generate samples from it?", or equivalently, by providing a sampling function for it. A sampling function is defined as a mapping from the unit interval (0.0, 1.0] to a probability domain D. Given a random number drawn from a uniform distribution over (0.0, 1.0], it returns a sample in D, and thus specifies a unique probability distribution. For our purpose, we use a generalized notion of sampling function which maps (0.0, 1.0]^∞ to D × (0.0, 1.0]^∞, where (0.0, 1.0]^∞ denotes an infinite product of (0.0, 1.0]. Operationally it takes as input an infinite sequence of random numbers drawn independently from a uniform distribution over (0.0, 1.0], consumes zero or more random numbers, and returns a sample with the remaining sequence.

We present a probabilistic language, called λ○, whose mathematical basis is sampling functions. We exploit the fact that sampling functions form a state monad, and base the syntax of λ○ upon the monadic metalanguage [17] in the formulation of Pfenning and Davies [24].
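To make the state-monad reading concrete, here is a minimal OCaml sketch of generalized sampling functions, written against a lazy stream of uniform random numbers. The type and combinator names (stream, return, bind, s) are illustrative assumptions of this sketch, not names taken from the paper's implementation.

    (* An infinite sequence of random numbers in (0.0, 1.0], represented lazily. *)
    type stream = Cons of float * (unit -> stream)

    (* A generalized sampling function: it consumes a prefix of the sequence and
       returns a sample together with the remaining sequence. *)
    type 'a sf = stream -> 'a * stream

    (* The state-monad structure on sampling functions. *)
    let return (x : 'a) : 'a sf = fun s -> (x, s)

    let bind (m : 'a sf) (f : 'a -> 'b sf) : 'b sf =
      fun s -> let (x, s') = m s in f x s'

    (* The primitive that consumes exactly one random number, corresponding to
       the sampling expression S of the language. *)
    let s : float sf = fun (Cons (r, rest)) -> (r, rest ())

Under this reading, a sampling function for a uniform distribution over (a, b] is simply bind s (fun x -> return (a +. x *. (b -. a))), which mirrors the uniform encoding given in Section 4.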

A syntactic distinction is drawn between regular values and probabilistic computations through the use of two syntactic categories: terms for regular values and expressions for probabilistic computations. It enables us to treat probability distributions as first-class values, and λ○ arises as a conservative extension of a conventional language. Examples show that λ○ provides a unified representation scheme for probability distributions, enjoys rich expressiveness, and allows high versatility in encoding probability distributions.

An important aspect of our work is to demonstrate the practicality of λ○ by applying it to real problems. As the main testbed, we choose robotics [29]. It offers a variety of real problems that necessitate probabilistic computations over continuous distributions. We implement λ○ as an extension of Objective CAML and use it for three applications in robotics: robot localization [29], people tracking [20], and robotic mapping [31]. We use real robots for all experiments.

λ○ does not support precise reasoning about probability distributions, in that it does not permit a precise implementation of queries on probability distributions (such as expectation). This is in fact a feature of probability distributions: precise reasoning is impossible in general. In other words, lack of support for precise reasoning is the price we pay for the rich expressiveness of λ○. As a practical solution, we use the Monte Carlo method to support approximate reasoning. As such, λ○ is a good choice for those problems in which all kinds of probability distributions are used or precise reasoning is unnecessary or impossible.

This paper is organized as follows. Section 2 gives a motivating example for λ○. Section 3 presents the type system and the operational semantics of λ○. Section 4 shows how to encode various probability distributions in λ○, and Section 5 shows how to formally prove the correctness of encodings, based upon the operational semantics. Section 6 demonstrates the use of the Monte Carlo method for approximate reasoning and briefly describes our implementation of λ○. Section 7 presents three applications of λ○ in robotics. Section 8 discusses related work and Section 9 concludes. We refer readers to [22] for figures from the experiments in Section 7.

Notation. If a variable x ranges over the domain of a probability distribution P, then P(x) means, depending on the context, either the probability distribution itself (as in "probability distribution P(x)") or the probability of a particular value x (as in "probability P(x)"). If we do not need a specific name for a probability distribution, we use Prob. Similarly P(x | y) means either the conditional probability P itself or the probability of x conditioned on y. We write P_y or P(. | y) for the probability distribution conditioned on y. U(0.0, 1.0] denotes a uniform distribution over the unit interval (0.0, 1.0].

2. A MOTIVATING EXAMPLE

A Bayes filter [9] is a popular solution to a wide range of state estimation problems. It estimates the state s of a system from a sequence of actions and measurements, where an action a makes a change to the state and a measurement m gives information on the state. At its core, a Bayes filter computes a probability distribution Bel(s) of the state according to the following update equations:

    Bel(s) <- ∫ A(s | a, s') Bel(s') ds'        (1)
    Bel(s) <- η P(m | s) Bel(s)                 (2)

A(s | a, s') is the probability that the system transitions to state s after taking action a in another state s', P(m | s) is the probability of measurement m in state s, and η is a normalizing constant ensuring ∫ Bel(s) ds = 1.0.
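Once Bel is approximated by weighted samples (the particle-filter view mentioned in the next paragraph), the two update equations read directly as an algorithm. The OCaml sketch below is only meant to make that reading concrete; the names (particle, predict, correct) and the weighted-sample representation are assumptions of this sketch, not part of λ○ or of the paper's code.

    (* Bel(s) approximated by a set of weighted samples (particles). *)
    type 's particle = { state : 's; weight : float }

    (* Update (1): push every particle through a sampler for A(s | a, s'). *)
    let predict (action_model : 'a -> 's -> 's) (a : 'a) (bel : 's particle list) =
      List.map (fun p -> { p with state = action_model a p.state }) bel

    (* Update (2): reweight by the measurement likelihood P(m | s) and renormalize,
       which plays the role of the constant eta (assumes the total weight is non-zero). *)
    let correct (likelihood : 'm -> 's -> float) (m : 'm) (bel : 's particle list) =
      let reweighted =
        List.map (fun p -> { p with weight = p.weight *. likelihood m p.state }) bel in
      let total = List.fold_left (fun acc p -> acc +. p.weight) 0.0 reweighted in
      List.map (fun p -> { p with weight = p.weight /. total }) reweighted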
The update equations are formulated at the level of probability distributions in the sense that they do not assume a particular representation scheme. Unfortunately the update equations are difficult to implement for arbitrary probability distributions. When it comes to implementation, therefore, we usually simplify the update equations by making additional assumptions on the system or choosing a specific representation scheme. For instance, with the assumption that Bel is a Gaussian distribution, we obtain a variant of the Bayes filter called a Kalman filter [32]. If we approximate Bel with a set of samples, we obtain another variant called a particle filter [3]. Even these variants of the Bayes filter are, however, not trivial to implement in conventional languages, not to mention the difficulty of understanding the code. For instance, a Kalman filter requires various matrix operations including matrix inversion. A particle filter needs to manipulate weights associated with individual samples, which often results in complicated code.

An alternative approach is to use an existing probabilistic language after discretizing all probability distributions. This idea is appealing in theory but impractical for two reasons. First, given a probability distribution, we cannot easily choose an appropriate subset of its support upon which we perform discretization. Even when such a subset is fixed in advance, the process of discretization may require a considerable amount of programming; see [4] for an example. Second, there are some probability distributions that cannot be discretized in any meaningful way. An example is probability distributions over probability distributions, which do occur in real applications (see Section 7). Another example is probability distributions over function spaces.

If we had a probabilistic language that supports all kinds of probability distributions without drawing a syntactic or semantic distinction, we could implement the update equations with much less effort. We present such a probabilistic language λ○ in the next section.

3. PROBABILISTIC LANGUAGE λ○

In this section, we develop our probabilistic language λ○. We begin by explaining why we choose sampling functions as the mathematical basis of λ○.

3.1 Mathematical basis

The expressive power of a probabilistic language is determined to a large extent by its mathematical basis, i.e., which mathematical objects are used to specify probability distributions. Since we intend to support all kinds of probability distributions without drawing a syntactic or semantic distinction, we cannot choose what is applicable only to a specific kind of probability distribution (e.g., probability mass functions, probability density functions, or cumulative distribution functions). Probability measures are a possibility because they are synonymous with probability distributions. They are, however, not a practical choice: a probability measure on a domain D maps not D but the set of events on D to [0.0, 1.0], and may be difficult to represent if D is an infinite domain. Sampling functions overcome the problem with probability measures: they are applicable to all kinds of probability distributions, and are also easy to represent because a global random number generator supplants the use of infinite sequences of random numbers.

For this reason, we choose sampling functions as the mathematical basis of λ○.¹

It is noteworthy that sampling functions form a state monad [16, 17] whose set of states is (0.0, 1.0]^∞. Moreover sampling functions are operationally equivalent to probabilistic computations because they describe procedures for generating samples. These two observations imply that if we use a monadic syntax for probabilistic computations, it becomes straightforward to interpret probabilistic computations in terms of sampling functions. Hence we use a monadic syntax for probabilistic computations in λ○.

¹ Note, however, that not every sampling function specifies a probability distribution. For instance, no probability distribution is specified by a sampling function mapping rational numbers to 0 and irrational numbers to 1. Thus we restrict ourselves to those sampling functions that determine probability distributions (i.e., measurable sampling functions).

3.2 Syntax and type system

As the linguistic framework of λ○, we use the monadic metalanguage of Pfenning and Davies [24]. It is a reformulation of Moggi's monadic metalanguage λ_ml [17], following Martin-Löf's methodology of distinguishing judgments from propositions [14]. It augments the lambda calculus, consisting of terms, with a separate syntactic category, consisting of expressions in a monadic syntax. In the case of λ○, terms denote regular values and expressions denote probabilistic computations. We say that a term evaluates to a value and an expression computes to a sample. The abstract syntax for λ○ is as follows:

    type               A, B ::= A -> A | A × A | ○A | real
    term               M, N ::= x | λx:A. M | M M | (M, M) | fst M | snd M | fix x:A. M | prob E | r
    expression         E, F ::= M | sample x from M in E | S
    value/sample       V, W ::= λx:A. M | (V, V) | prob E | r
    real number        r
    sampling sequence  s ::= r1 r2 ... ri ...    where ri ∈ (0.0, 1.0]
    typing context     Γ ::= . | Γ, x : A

We use x for variables. λx:A. M is a lambda abstraction, and M M is an application term. (M, M) is a product term, and fst M and snd M are projection terms; we include these terms in order to support joint distributions. fix x:A. M is a fixed point construct for recursive terms. A probability term prob E encapsulates an expression E; it is a first-class value denoting a probability distribution. Real numbers r are implemented as floating point numbers, since the overhead of exact real arithmetic is not justified in the domain where we work with samples and approximations anyway.

There are three kinds of expressions: terms M, bind expressions sample x from M in E, and the sampling expression S. As an expression, M denotes a probabilistic computation that returns the result of evaluating M. sample x from M in E sequences two probabilistic computations (if M evaluates to a probability term). S consumes a random number from a sampling sequence, which is an infinite sequence of random numbers drawn independently from U(0.0, 1.0].

The type system employs a term typing judgment Γ ⊢ M : A and an expression typing judgment Γ ⊢ E ÷ A (Figure 1). Γ ⊢ M : A means that M evaluates to a value of type A under typing context Γ, and Γ ⊢ E ÷ A that E computes to a sample of type A under typing context Γ.

    Var:       if x : A ∈ Γ, then Γ ⊢ x : A
    Lam:       if Γ, x : A ⊢ M : B, then Γ ⊢ λx:A. M : A -> B
    App:       if Γ ⊢ M1 : A -> B and Γ ⊢ M2 : A, then Γ ⊢ M1 M2 : B
    Prod:      if Γ ⊢ M1 : A1 and Γ ⊢ M2 : A2, then Γ ⊢ (M1, M2) : A1 × A2
    Fst:       if Γ ⊢ M : A1 × A2, then Γ ⊢ fst M : A1
    Snd:       if Γ ⊢ M : A1 × A2, then Γ ⊢ snd M : A2
    Fix:       if Γ, x : A ⊢ M : A, then Γ ⊢ fix x:A. M : A
    Prob:      if Γ ⊢ E ÷ A, then Γ ⊢ prob E : ○A
    Real:      Γ ⊢ r : real
    Term:      if Γ ⊢ M : A, then Γ ⊢ M ÷ A
    Bind:      if Γ ⊢ M : ○A and Γ, x : A ⊢ E ÷ B, then Γ ⊢ sample x from M in E ÷ B
    Sampling:  Γ ⊢ S ÷ real

    Figure 1: Typing rules of λ○
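For readers who prefer ML notation, the abstract syntax above can be transcribed almost mechanically into datatype declarations. The following OCaml sketch is only an illustration of the grammar; it is not the concrete syntax of the actual λ○ extension of Objective CAML described in Section 6.3.

    (* Types: A -> A, A * A, the probability type over A, and real. *)
    type tp =
      | Arrow of tp * tp
      | Prod of tp * tp
      | Circle of tp
      | Real

    (* Terms denote deterministic values; Prob injects an expression. *)
    type tm =
      | Var of string
      | Lam of string * tp * tm
      | App of tm * tm
      | Pair of tm * tm
      | Fst of tm
      | Snd of tm
      | Fix of string * tp * tm
      | Prob of exp
      | Const of float

    (* Expressions denote probabilistic computations. *)
    and exp =
      | Term of tm                      (* a deterministic computation       *)
      | Sample of string * tm * exp     (* sample x from M in E              *)
      | S                               (* consume one uniform random number *)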
The rule Prob is the introduction rule for the type constructor ○; it shows that type ○A denotes probability distributions over type A. The rule Bind is the elimination rule for the type constructor ○. The rule Term means that every term converts to a probabilistic computation that involves no probabilistic choice. All the remaining rules are standard.

3.3 Operational semantics

Since λ○ draws a syntactic distinction between regular values and probabilistic computations, its operational semantics needs two judgments: one for term evaluations and another for expression computations. A term evaluation is always deterministic and the corresponding judgment involves only terms. In contrast, an expression computation may consume random numbers and the corresponding judgment involves not only expressions but also sampling sequences. Since an expression computation may invoke term evaluations (e.g., to evaluate M in sample x from M in E), we first present the judgment for term evaluations and then use it for the judgment for expression computations. Both judgments use capture-avoiding substitutions [N/x]M and [N/x]E, defined in a standard way.

For term evaluations, we introduce a judgment M -> N in a call-by-value discipline. We could have equally chosen call-by-name or call-by-need, but λ○ is intended to be embedded in Objective CAML and hence we choose call-by-value for pragmatic reasons. We use evaluation contexts, which are terms with a hole [] indicating where a term reduction may occur, and write M ->R N for term reductions:

    evaluation context  κ ::= [] | κ M | (λx:A. M) κ | (κ, M) | (V, κ) | fst κ | snd κ

    (λx:A. M) V    ->R  [V/x]M
    fst (V1, V2)   ->R  V1
    snd (V1, V2)   ->R  V2
    fix x:A. M     ->R  [fix x:A. M/x]M

    if M ->R N, then κ[M] -> κ[N]

We write M ->* V for a term evaluation, where ->* denotes the reflexive and transitive closure of ->. A term evaluation is always deterministic.

For expression computations, we introduce a judgment E @ s -> F @ s', which means that the computation of E with sampling sequence s reduces to the computation of F with sampling sequence s'. It uses computation contexts, which are expressions with either a term hole []_term or an expression hole []_exp.

[]_term expects a term and []_exp expects an expression. We write E @ s ->R F @ s' for expression reductions:

    computation context  ι ::= []_exp | []_term | sample x from []_term in E | sample x from prob ι in E

    sample x from prob V in E @ s   ->R  [V/x]E @ s
    S @ r s                         ->R  r @ s

    if M -> N, then ι[M] @ s -> ι[N] @ s
    if E @ s ->R F @ s', then ι[E] @ s -> ι[F] @ s'

We write E @ s ->* V @ s' for an expression computation, where ->* denotes the reflexive and transitive closure of ->. Note that an expression computation itself is deterministic; it is only when we vary sampling sequences that an expression exhibits probabilistic behavior. An expression computation E @ s ->* V @ s' means that E takes a sampling sequence s, consumes a finite prefix of s in order, and returns a sample V with the remaining sequence s':

Proposition 3.1. If E @ s ->* V @ s', then s = r1 r2 ... rn s' (n >= 0) where E @ s ->* E1 @ r2 ... rn s' ->* ... ->* E_{n-1} @ rn s' ->* V @ s' for a sequence of expressions E1, ..., E_{n-1}.

Thus an expression computation coincides with the operational description of a sampling function when applied to a sampling sequence, which implies that an expression represents a sampling function.

The type safety of λ○ consists of two properties: type preservation and progress. The proof of type preservation requires a substitution lemma, and the proof of progress requires a canonical forms lemma.

Theorem 3.2 (Type preservation). If M -> N and ⊢ M : A, then ⊢ N : A. If E @ s -> F @ s' and ⊢ E ÷ A, then ⊢ F ÷ A.

Theorem 3.3 (Progress). If ⊢ M : A, then either M is a value (i.e., M = V), or there exists N such that M -> N. If ⊢ E ÷ A, then either E is a sample (i.e., E = V), or for any sampling sequence s, there exist F and s' such that E @ s -> F @ s'.

The syntactic distinction between terms and expressions in λ○ is optional in the sense that the grammar does not need to distinguish expressions as a separate non-terminal. On the other hand, the semantic distinction, both statically (in the form of two typing judgments) and dynamically (in the form of evaluation and computation judgments), appears to be essential for a clean formulation of our probabilistic language.

λ○ is a conservative extension of a conventional language because terms constitute a conventional language of their own. In essence, term evaluations are always deterministic and we need only terms when writing deterministic programs. As a separate syntactic category, expressions also provide a framework for probabilistic computation that abstracts from the definition of terms. For instance, the addition of a new term construct does not change the definition of expression computations. When programming in λ○, therefore, the syntactic distinction between terms and expressions aids us in deciding which of deterministic evaluations and probabilistic computations we should focus on.

In the next section, we show how to encode various probability distributions and further investigate properties of λ○.
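Proposition 3.1 is easiest to appreciate by running a tiny definitional interpreter. The OCaml sketch below covers only a fragment of the expression language (values, S, and sample-from binding on probability values); it is a self-contained illustration under assumed constructor names, not the λ○ implementation.

    (* A fragment of the expression language: enough to run simple encodings. *)
    type value =
      | VReal of float
      | VProb of exp                         (* prob E: a suspended computation *)

    and exp =
      | Ret of value                         (* a term that is already a value  *)
      | S                                    (* the sampling expression         *)
      | Sample of value * (value -> exp)     (* sample x from (prob E) in F     *)

    (* A sampling sequence: an infinite stream of numbers in (0.0, 1.0]. *)
    type stream = Cons of float * (unit -> stream)

    (* compute e s consumes a finite prefix of s and returns the sample together
       with the remaining sequence, as described by Proposition 3.1. *)
    let rec compute (e : exp) (s : stream) : value * stream =
      match e with
      | Ret v -> (v, s)
      | S -> let (Cons (r, rest)) = s in (VReal r, rest ())
      | Sample (VProb e1, body) ->
          let (v, s') = compute e1 s in
          compute (body v) s'
      | Sample (_, _) -> failwith "sample requires a probability value"

    (* Example: the uniform (a, b] encoding of Section 4, written with the
       constructors above. *)
    let uniform a b =
      VProb (Sample (VProb S, fun v ->
        match v with
        | VReal x -> Ret (VReal (a +. x *. (b -. a)))
        | _ -> failwith "expected a real sample"))

    (* A sampling sequence built from OCaml's global RNG (Random.float returns a
       number in [0, 1), which we use as a stand-in for (0, 1]). *)
    let rec random_stream () : stream = Cons (Random.float 1.0, random_stream)

For instance, compute S (random_stream ()) consumes exactly one random number, matching the reduction rule for S.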
4. EXAMPLES

When encoding a probability distribution in λ○, we naturally concentrate on a method of generating samples, rather than trying to calculate the probability assigned to each event. If the probability distribution itself is defined in terms of a process of generating samples, we simply translate the definition. If, however, the probability distribution is defined in terms of a probability measure or an equivalent, we may not always derive a sampling function in a mechanical manner. Instead we have to exploit its unique properties to devise a sampling function.

Below we show examples of encoding various probability distributions in λ○. These examples demonstrate three properties of λ○: a unified representation scheme for probability distributions, rich expressiveness, and high versatility in encoding probability distributions. The sampling methods used in the examples are all found in simulation theory [2].

We assume primitive types int and bool, arithmetic and comparison operators, and a conditional term construct if M then N1 else N2. We also assume standard let-binding, recursive let rec-binding, and pattern matching when it is convenient for the examples. While we do not discuss here type inference or polymorphism, the implementation handles these in the manner familiar from ML. We use the following syntactic sugar for expressions:

    unprob M               =  sample x from M in x
    eif M then E1 else E2  =  unprob (if M then prob E1 else prob E2)

unprob M chooses a sample from the probability distribution denoted by M and returns it. eif M then E1 else E2 branches to either E1 or E2 depending on the result of evaluating M.

Unified representation scheme

λ○ provides a unified representation scheme for probability distributions. While its type system distinguishes between different probability domains, its operational semantics does not distinguish between different kinds of probability distributions, such as discrete, continuous, or neither. We show an example for each case.

We encode a Bernoulli distribution over type bool with parameter p as follows:

    let bernoulli = λp:real. prob sample x from prob S in x <= p

bernoulli can be thought of as a binary choice construct. It is expressive enough to specify any discrete distribution with finite support. In fact, bernoulli 0.5 suffices to specify all such probability distributions, since it is capable of simulating a binary choice construct [5].

As an example of a continuous distribution, we encode a uniform distribution over a real interval (a, b] by exploiting the definition of the sampling expression:

    let uniform = λa:real. λb:real. prob sample x from prob S in a + x * (b - a)

We also encode a combination of a point-mass distribution and a uniform distribution over the same domain, which is neither a discrete distribution nor a continuous distribution:

    let point_uniform = prob sample x from prob S in
                             if x < 0.5 then 0.0 else x

Rich expressiveness

We now demonstrate the expressive power of λ○ with a number of examples.

We encode a binomial distribution with parameters p and n0 by exploiting probability terms:

    let binomial = λp:real. λn0:int.
        let bernoulli_p = bernoulli p in
        let rec binomial_p = λn:int.
            if n = 0 then prob 0
            else prob sample x from binomial_p (n - 1) in
                      sample b from bernoulli_p in
                      if b then 1 + x else x
        in binomial_p n0

Here binomial_p takes an integer n as input and returns a binomial distribution with parameters p and n.

If a probability distribution is defined in terms of a recursive process of generating samples, we can translate the definition into a recursive term. For instance, we encode a geometric distribution with parameter p as follows:

    let geometric_rec = λp:real.
        let bernoulli_p = bernoulli p in
        let rec geometric = prob sample b from bernoulli_p in
                                 eif b then 0
                                 else sample x from geometric in 1 + x
        in geometric

Note that a geometric distribution has infinite support.

We encode an exponential distribution by using the inverse of its cumulative distribution function as a sampling function, which is known as the inverse transform method:

    let exponential_1.0 = prob sample x from prob S in - log x

The rejection method, which generates a sample from a probability distribution by repeatedly generating samples from other probability distributions until they satisfy a certain condition, can be implemented with a recursive term. For instance, we encode a Gaussian distribution with mean m and variance σ² by the rejection method with respect to exponential distributions:

    let bernoulli_0.5 = bernoulli 0.5
    let gaussian_rejection = λm:real. λσ:real.
        let rec gaussian = prob sample y1 from exponential_1.0 in
                                sample y2 from exponential_1.0 in
                                eif y2 >= (y1 - 1.0)² / 2.0 then
                                    sample b from bernoulli_0.5 in
                                    if b then m + σ * y1 else m - σ * y1
                                else unprob gaussian
        in gaussian

We encode the joint distribution between two independent probability distributions using a product term. If M_P denotes P(x) and M_Q denotes Q(y), the following term denotes the joint distribution Prob(x, y) = P(x)Q(y):

    prob sample x from M_P in
         sample y from M_Q in
         (x, y)

For the joint distribution between two interdependent probability distributions, we use a conditional probability, which we represent as a lambda abstraction taking a regular value and returning a probability distribution. If M_P denotes P(x) and M_Q denotes a conditional probability Q(y | x), the following term denotes the joint distribution Prob(x, y) = P(x)Q(y | x):

    prob sample x from M_P in
         sample y from M_Q x in
         (x, y)

We compute the integration Prob(y) = ∫ Q(y | x) P(x) dx in a similar way:

    prob sample x from M_P in
         sample y from M_Q x in
         y

Due to the lack of semantic constraints on sampling functions, we can specify probability distributions over unusual domains such as infinite data structures (e.g., trees), function spaces, cyclic spaces (e.g., angular values), and even probability distributions themselves. For instance, we add two probability distributions over angular values in a straightforward way:

    let add_angle = λa1:○real. λa2:○real.
        prob sample s1 from a1 in
             sample s2 from a2 in
             (s1 + s2) mod (2.0 * π)

With the modulo operation mod, we take into account the fact that an angle θ is identified with θ + 2π.
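Because the λ○ runtime ultimately draws from a global random number generator (Section 6.3), the encodings above have direct analogues in plain OCaml if a distribution over 'a is represented as a sampling thunk unit -> 'a. The sketch below is such a translation under that assumed representation; it mirrors the encodings but is not the paper's code.

    (* A distribution over 'a, represented as a sampling thunk. *)
    type 'a dist = unit -> 'a

    (* The sampling expression S: one uniform draw (Random.float gives [0, 1)). *)
    let s : float dist = fun () -> Random.float 1.0

    let bernoulli (p : float) : bool dist = fun () -> s () <= p

    let uniform (a : float) (b : float) : float dist =
      fun () -> a +. s () *. (b -. a)

    (* Neither discrete nor continuous: half point mass at 0, half uniform. *)
    let point_uniform : float dist =
      fun () -> let x = s () in if x < 0.5 then 0.0 else x

    (* Binomial via repeated Bernoulli trials, as in the recursive encoding. *)
    let binomial (p : float) (n0 : int) : int dist =
      let bernoulli_p = bernoulli p in
      let rec binomial_p n () =
        if n = 0 then 0
        else (if bernoulli_p () then 1 else 0) + binomial_p (n - 1) ()
      in
      binomial_p n0

    (* Geometric: count failures before the first success (infinite support). *)
    let geometric (p : float) : int dist =
      let bernoulli_p = bernoulli p in
      let rec geo () = if bernoulli_p () then 0 else 1 + geo () in
      geo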
As a simple application, we implement a belief network [27]. We assume that P_alarm|burglary denotes the probability distribution that the alarm goes off when a burglary happens; other variables of the form P_.|. are interpreted in a similar way.

    let alarm = λ(burglary, earthquake) : bool × bool.
        if burglary then P_alarm|burglary
        else if earthquake then P_alarm|¬burglary,earthquake
        else P_alarm|¬burglary,¬earthquake

    let john_calls = λalarm:bool.
        if alarm then P_John_calls|alarm else P_John_calls|¬alarm

    let mary_calls = λalarm:bool.
        if alarm then P_Mary_calls|alarm else P_Mary_calls|¬alarm

The conditional probabilities alarm, john_calls, and mary_calls do not answer any query on the belief network; they only describe its structure. In order to answer a specific query, we have to implement a corresponding probability distribution.

For instance, in order to answer "what is the probability p_Mary_calls|John_calls that Mary calls when John calls?", we use Q_Mary_calls|John_calls below, which essentially implements logic sampling [8]:

    let rec Q_Mary_calls|John_calls = prob
        sample b from P_burglary in
        sample e from P_earthquake in
        sample a from alarm (b, e) in
        sample j from john_calls a in
        sample m from mary_calls a in
        eif j then m
        else unprob Q_Mary_calls|John_calls

P_burglary denotes the probability distribution that a burglary happens, and P_earthquake denotes the probability distribution that an earthquake happens. Then the mean of Q_Mary_calls|John_calls gives p_Mary_calls|John_calls. We will see how to calculate p_Mary_calls|John_calls in Section 6.

We can also implement most of the common operations on probability distributions. An exception is the Bayes operation (the second update equation of the Bayes filter uses it). Applied to P and Q, it results in a probability distribution R such that R(x) = η P(x)Q(x), where η is a normalization constant ensuring ∫ R(x) dx = 1.0; if P(x)Q(x) is zero for every x, then the Bayes operation is undefined. Since it is difficult to achieve a general implementation of the Bayes operation, we usually make an additional assumption on P and Q to achieve a specialized implementation. For instance, if we have a function p and a constant c such that p(x) = k P(x) <= c for a certain constant k, we can implement the Bayes operation by the rejection method:

    let bayes_rejection = λp:A -> real. λc:real. λQ:○A.
        let rec bayes = prob sample x from Q in
                             sample u from prob S in
                             eif u < (p x) / c then x
                             else unprob bayes
        in bayes

We will see another implementation in Section 6.

High versatility

λ○ allows high versatility in encoding probability distributions: given a probability distribution, we can exploit its unique properties and encode it in many different ways. For instance, exponential_1.0 uses a logarithm function to encode an exponential distribution, but there is also an ingenious method (due to von Neumann) that requires only addition and subtraction operations:

    let exponential_von_Neumann_1.0 =
        let rec search = λk:real. λu:real. λu1:real.
            prob sample u' from prob S in
                 eif u < u' then k + u1
                 else sample u'' from prob S in
                      eif u'' <= u' then unprob (search k u'' u1)
                      else sample u''' from prob S in
                           unprob (search (k + 1.0) u''' u''')
        in prob sample u from prob S in
                unprob (search 0.0 u u)

The recursive term gaussian_rejection consumes at least three random numbers. We can encode a Gaussian distribution with only two random numbers:

    let gaussian_Box_Muller = λm:real. λσ:real.
        prob sample u from prob S in
             sample v from prob S in
             m + σ * sqrt (-2.0 * log u) * cos (2.0 * π * v)

We can also approximate a Gaussian distribution by exploiting the central limit theorem:

    let gaussian_central = λm:real. λσ:real.
        prob sample x1 from prob S in
             sample x2 from prob S in
             ...
             sample x12 from prob S in
             m + σ * (x1 + x2 + ... + x12 - 6.0)

The three examples above serve as evidence of the high versatility of λ○: the more we know about a probability distribution, the better we can encode it.

All the examples in this section just rely on our intuition about sampling functions and do not actually prove the correctness of the encodings. For instance, we still do not know if bernoulli indeed encodes a Bernoulli distribution, or equivalently, if the expression in it generates True with probability p. In the next section, we investigate how to formally prove the correctness of encodings.
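As a sanity check on the Box-Muller encoding, the same two-draw recipe can be written as an ordinary OCaml function against the global random number generator. This is an illustrative translation with assumed names, not part of λ○.

    (* Box-Muller: two uniform draws give one Gaussian sample with mean m and
       standard deviation sigma. Random.float returns a number in [0, 1); we
       guard against u = 0 before taking the logarithm. *)
    let gaussian_box_muller (m : float) (sigma : float) () : float =
      let rec positive_uniform () =
        let u = Random.float 1.0 in
        if u > 0.0 then u else positive_uniform ()
      in
      let u = positive_uniform () in
      let v = Random.float 1.0 in
      let pi = 4.0 *. atan 1.0 in
      m +. sigma *. sqrt (-2.0 *. log u) *. cos (2.0 *. pi *. v)

Averaging many calls of gaussian_box_muller 0.0 1.0 () should approach 0.0, which is exactly the kind of Monte Carlo check that Section 6 makes systematic.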
5. PROVING THE CORRECTNESS OF ENCODINGS

When programming in λ○, we often ask "What probability distribution characterizes outcomes of computing a given expression?" The operational semantics of λ○ does not directly answer this question because an expression computation returns only a single sample from a certain, yet unknown, probability distribution. Therefore we need a different methodology for interpreting expressions directly in terms of probability distributions.

We take a simple approach that appeals to our intuition about the meaning of expressions. We write E ∼ Prob if outcomes of computing E are distributed according to Prob. To determine Prob from E, we supply an infinite sequence of independent random variables from U(0.0, 1.0] and analyze the result of computing E in terms of these random variables. If E ∼ Prob, then E denotes a probabilistic computation of generating samples from Prob and we regard Prob as the denotation of prob E.

We illustrate the above approach with a few examples. In each example, R_i means the i-th random variable and R̄_i means the infinite sequence of random variables beginning from R_i (i.e., R_i R_{i+1} ...). A random variable is regarded as a value because it represents real numbers in (0.0, 1.0].

As a trivial example, consider prob S. The computation of S proceeds as follows:

    S @ R̄_1  ->  R_1 @ R̄_2

Since the outcome is a random variable from U(0.0, 1.0], we have S ∼ U(0.0, 1.0].

As an example of a discrete distribution, consider bernoulli p. The expression in it computes as follows:

    sample x from prob S in x <= p @ R̄_1
    ->  sample x from prob R_1 in x <= p @ R̄_2
    ->  R_1 <= p @ R̄_2
    ->  True @ R̄_2 if R_1 <= p; False @ R̄_2 otherwise.

Since R_1 is a random variable from U(0.0, 1.0], the probability of R_1 <= p is p. Thus the outcome is True with probability p and False with probability 1.0 - p, and bernoulli p denotes a Bernoulli distribution with parameter p.

As an example of a continuous distribution, consider uniform a b. The expression in it computes as follows:

    sample x from prob S in a + x * (b - a) @ R̄_1
    ->  a + R_1 * (b - a) @ R̄_2

Since we have a + R_1 * (b - a) ∈ (a_0, b_0] iff R_1 ∈ ((a_0 - a)/(b - a), (b_0 - a)/(b - a)], the probability that the outcome lies in (a_0, b_0] is (b_0 - a)/(b - a) - (a_0 - a)/(b - a) = (b_0 - a_0)/(b - a), where we assume (a_0, b_0] ⊆ (a, b]. Thus uniform a b denotes a uniform distribution over (a, b].

The following proposition shows that binomial p n denotes a binomial distribution with parameters p and n, which we write as Binomial_{p,n}:

Proposition 5.1. If binomial p n ->* prob E_{p,n}, then E_{p,n} ∼ Binomial_{p,n}.

Proof. By induction on n.

Base case n = 0. We have E_{p,n} = 0. Since Binomial_{p,n} is a point-mass distribution centered on 0, we have E_{p,n} ∼ Binomial_{p,n}.

Inductive case n > 0. The computation of E_{p,n} proceeds as follows:

    sample x from binomial_p (n - 1) in
    sample b from bernoulli_p in
    if b then 1 + x else x                              @ R̄_1
    ->*  sample x from prob x_{p,n-1} in
         sample b from bernoulli_p in
         if b then 1 + x else x                         @ R̄_i
    ->*  sample b from prob b_p in
         if b then 1 + x_{p,n-1} else x_{p,n-1}         @ R̄_{i+1}
    ->*  1 + x_{p,n-1} @ R̄_{i+1} if b_p = True; x_{p,n-1} @ R̄_{i+1} otherwise.

By the induction hypothesis, binomial_p (n - 1) generates a sample x_{p,n-1} from Binomial_{p,n-1} after consuming R_1 ... R_{i-1} for some i (which is actually n). Since R_i is an independent random variable, bernoulli_p generates a sample b_p that is independent of x_{p,n-1}. Then we obtain an outcome k with the probability of "b_p = True and x_{p,n-1} = k - 1" or "b_p = False and x_{p,n-1} = k", which is equal to p Binomial_{p,n-1}(k - 1) + (1.0 - p) Binomial_{p,n-1}(k) = Binomial_{p,n}(k). Thus we have E_{p,n} ∼ Binomial_{p,n}.

As a final example, we show that geometric_rec p denotes a geometric distribution with parameter p. Suppose geometric_rec p ->* prob E and E ∼ Prob. The computation of E proceeds as follows:

    E @ R̄_1
    ->*  sample b from prob b_p in
         eif b then 0 else sample x from geometric in 1 + x   @ R̄_2
    ->*  0 @ R̄_2 if b_p = True; sample x from prob E in 1 + x @ R̄_2 otherwise.

The first case happens with probability p and we get Prob(0) = p. In the second case, we compute the same expression E with sampling sequence R̄_2. Since all random variables are independent, R̄_2 can be thought of as a fresh sequence of random variables. Therefore the computation of E with sampling sequence R̄_2 returns samples from the same probability distribution Prob and we get Prob(1 + k) = (1.0 - p) Prob(k). Solving the two equations, we get Prob(k) = p (1.0 - p)^k, which is the probability mass function for a geometric distribution with parameter p.

The above approach can be thought of as an adaptation of the method established in simulation theory [2]. An alternative approach would be to develop a denotational semantics. For instance, if we ignore fixed point constructs, it is straightforward to translate expressions into probability measures because probability measures form a monad [6, 26] and expressions already follow a monadic syntax.² In practice, however, the translation does not immediately reveal the probability measure corresponding to a given expression and we have to go through essentially the same analysis as in the above approach. Ultimately we have to invert a sampling function represented by a given expression (because an event is assigned a probability proportional to the size of its inverse image under the sampling function), but this is difficult to do in a mechanical way in the presence of various operators.
Therefore it seems reasonable to analyze each expression individually, as demonstrated in this section.

6. APPROXIMATE COMPUTATION IN λ○

We have explored both how to encode probability distributions in λ○ and how to interpret λ○ in terms of probability distributions. In this section, we discuss another important aspect of probabilistic languages: reasoning about probability distributions.

The expressive power of a probabilistic language is an important factor affecting its practicality. Another important factor is its support for reasoning about probability distributions to determine their properties. In other words, it is important not only to be able to encode various probability distributions but also to be able to determine their properties such as means, variances, and probabilities of specific events.

Unfortunately λ○ does not support precise reasoning about probability distributions. That is, it does not permit a precise implementation of queries on probability distributions. Intuitively we must be able to calculate probabilities of specific events, but this is essentially inverting sampling functions.

Given that we cannot hope for precise reasoning in λ○, we choose to support approximate reasoning by the Monte Carlo method [13]. It approximately answers a query on a probability distribution by generating a large number of samples and then analyzing them. For instance, in the belief network example in Section 4, we can approximate p_Mary_calls|John_calls by generating a large number of samples and counting the number of True's. Although the Monte Carlo method gives only an approximate answer, its accuracy improves with the number of samples. Moreover it can be applied to all kinds of probability distributions and is therefore particularly suitable for λ○.

² In the presence of fixed point constructs, expressions should be translated into a domain-theoretic structure because of recursive equations. While the work by Jones [10] suggests that such a structure could be constructed from a domain-theoretic model of real numbers, we have not investigated this direction.
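Returning to the counting idea above, the following OCaml sketch estimates the probability of True for a boolean distribution represented as a sampling thunk. The unit -> 'a representation and the function name are assumptions of this sketch, not the λ○ primitives introduced below.

    (* Monte Carlo estimate of the probability that a boolean distribution,
       represented as a sampling thunk, yields true. *)
    let estimate_probability ?(n = 10_000) (dist : unit -> bool) : float =
      let rec count i trues =
        if i = n then trues
        else count (i + 1) (if dist () then trues + 1 else trues)
      in
      float_of_int (count 0 0) /. float_of_int n

With more samples the estimate tightens, which is exactly the accuracy versus running time trade-off controlled by the runtime variable n in the rules below.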

In this section, we apply the Monte Carlo method to an implementation of the expectation query. We also show how to exploit the Monte Carlo method in implementing the Bayes operation. Then we briefly describe our implementation of λ○.

6.1 Expectation query

Among common queries on probability distributions, the most important is the expectation query. The expectation of a function f with respect to a probability distribution P is the mean of f over P, which we write as ∫ f dP. Other queries may be derived as special cases of the expectation query. For instance, the mean of a probability distribution over real numbers is the expectation of an identity function. The Monte Carlo method states that we can approximate ∫ f dP with a set of samples V_1, ..., V_n from P:

    lim_{n -> ∞} (f(V_1) + ... + f(V_n)) / n = ∫ f dP

We introduce a term construct expectation which exploits the above equation:

    term M ::= ... | expectation M_f M_P

    Exp:    if Γ ⊢ M_f : A -> real and Γ ⊢ M_P : ○A, then Γ ⊢ expectation M_f M_P : real

    Exp_R:  if M_f ->* f, M_P ->* prob E_P, E_P @ s_i ->* V_i @ s_i', and f V_i ->* v_i for 1 <= i <= n,
            then expectation M_f M_P ->R (Σ_{1<=i<=n} v_i) / n

The rule Exp_R says that if M_f evaluates to a lambda abstraction denoting f and M_P evaluates to a probability term denoting P, then expectation M_f M_P reduces to an approximation of ∫ f dP. A runtime variable n specifies the number of samples to be generated from P. The runtime system initializes sampling sequence s_i to generate sample V_i.

A problem with the above definition is that although expectation is a term construct, its reduction is probabilistic because of the sampling sequences s_i in the rule Exp_R. This violates the principle that a term evaluation is always deterministic, and now the same term may evaluate to different values if it contains expectation. In practice, however, this is acceptable because λ○ is intended to be embedded in Objective CAML, in which side effects are already allowed for terms. Besides, mathematically the expectation of a function with respect to a probability distribution is always unique (if it exists).³

Now we can calculate p_Mary_calls|John_calls as

    expectation (λx:bool. if x then 1.0 else 0.0) Q_Mary_calls|John_calls

³ We define expectation as a term construct only for pragmatic reasons. For instance, the examples in Section 7 become much more complicated if expectation is defined as an expression construct.

6.2 Bayes operation

The previous implementation of the Bayes operation assumes that we have a function p and a constant c such that p(x) = k P(x) <= c for a certain constant k. It is, however, often difficult to find the optimal value of c (i.e., the maximum value of p(x)) and we have to take a conservative estimate of c. The Monte Carlo method, in conjunction with importance sampling [13], allows us to dispense with c by approximating Q with a set of samples and the result of the Bayes operation with a set of weighted samples. We introduce a term construct bayes for the Bayes operation and an expression construct importance for importance sampling:

    term M        ::= ... | bayes M_p M_Q
    expression E  ::= ... | importance {(V_i, w_i) | 1 <= i <= n}

In the spirit of data abstraction, importance represents only an internal data structure and is not directly available to the programmer.

    Bayes:    if Γ ⊢ M_p : A -> real and Γ ⊢ M_Q : ○A, then Γ ⊢ bayes M_p M_Q : ○A
    Imp:      if Γ ⊢ V_i : A and Γ ⊢ w_i : real for 1 <= i <= n, then Γ ⊢ importance {(V_i, w_i) | 1 <= i <= n} ÷ A

    Bayes_R:  if M_p ->* p, M_Q ->* prob E_Q, E_Q @ s_i ->* V_i @ s_i', and p V_i ->* w_i for 1 <= i <= n,
              then bayes M_p M_Q ->R prob importance {(V_i, w_i) | 1 <= i <= n}

    Imp_R:    importance {(V_i, w_i) | 1 <= i <= n} @ r s ->R V_k @ s,
              where S = Σ_{i=1}^{n} w_i and k is chosen such that Σ_{i=1}^{k-1} w_i / S < r <= Σ_{i=1}^{k} w_i / S

The rule Bayes_R approximates Q with n samples V_1, ..., V_n, where n is a runtime variable as in the rule Exp_R.
Then it applies p to each sample V_i to calculate its weight w_i and creates a set {(V_i, w_i) | 1 <= i <= n} of weighted samples as an argument to importance. The rule Imp_R implements importance sampling: we use a random number r to probabilistically select a sample V_k by taking into account the weights associated with all the samples. As with expectation, we decide to define bayes as a term construct despite the fact that its reduction is probabilistic. The decision also conforms to our intuition that mathematically the result of the Bayes operation between two probability distributions is always unique.

6.3 Implementation of λ○

We have implemented λ○ by extending the syntax of Objective CAML. The runtime system uses a global random number generator for all sampling sequences. Hence it generates fresh random numbers whenever it needs to compute sampling expressions, without explicitly initializing sampling sequences. The runtime system also allows the programmer to change the runtime variable n in the rules Exp_R and Bayes_R, both of which invoke expression computations during term evaluations. Thus the programmer can control the accuracy in approximating probability distributions.
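To see what Bayes_R and Imp_R amount to operationally, here is a hedged OCaml sketch of the same two-step scheme over sampling thunks: draw n samples from Q, weight them by p, and return a thunk that resamples according to the weights. The thunk representation and the names are assumptions of this sketch, not the λ○ runtime.

    (* bayes p q: approximate the distribution proportional to p(x) * Q(x) by
       importance sampling, mirroring the rules Bayes_R and Imp_R
       (assumes the total weight is positive). *)
    let bayes ?(n = 1000) (p : 'a -> float) (q : unit -> 'a) : unit -> 'a =
      (* Bayes_R: draw n samples from Q and attach weights p(V_i). *)
      let samples = Array.init n (fun _ -> q ()) in
      let weights = Array.map p samples in
      let total = Array.fold_left ( +. ) 0.0 weights in
      (* Imp_R: pick V_k with probability w_k / total, using one random number. *)
      fun () ->
        let r = Random.float total in
        let rec pick k acc =
          let acc' = acc +. weights.(k) in
          if r <= acc' || k = n - 1 then samples.(k) else pick (k + 1) acc'
        in
        pick 0 0.0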

7. APPLICATIONS

In this section, we present three applications of λ○ in robotics: robot localization, people tracking, and robotic mapping. The goal is to estimate the state of a robot from sensor readings, where the definition of state differs in each case. In order to cope with uncertainty in sensor readings (due to limitations of sensors and noise from the environment), we estimate the state with a probability distribution. We use a Bayes filter as a framework for updating the probability distribution.

There are two kinds of sensor readings: action and measurement. As in a Bayes filter, an action induces a state change whereas a measurement gives information on the state. An action is represented as an odometry reading, which returns the pose (i.e., position and orientation) of the robot relative to its initial pose. A measurement includes range readings, which return distances to objects at certain angles. We first consider robot localization, since it directly implements update equations (1) and (2) in Section 2.

7.1 Robot localization

Robot localization [29] is the problem of estimating the pose of a robot when a map of the environment is available. If the initial pose is given, the problem becomes pose tracking, which keeps track of the robot pose by compensating for errors in sensor readings. If the initial pose is not given, the problem becomes global localization, which begins with multiple hypotheses on the robot pose (and is therefore more difficult than pose tracking).

We consider robot localization under the assumption that the environment is static. This assumption allows us to use a Bayes filter over the robot pose. Specifically the state in the Bayes filter is the robot pose s = (x, y, θ), and we estimate s with a probability distribution Bel(s) over three-dimensional real space. We compute Bel(s) according to update equations (1) and (2) with the following interpretation: A(s | a, s') is the probability that the robot moves to pose s after taking action a in another pose s'; A is called an action model. P(m | s) is the probability that measurement m is taken at pose s; P is called a perception model.

Given an action a and a pose s', we can generate a new pose s from A(. | a, s') by adding noise to a and applying it to s'. Given a measurement m and a pose s, we can also compute κ P(m | s) where κ is an unknown constant: the map determines a unique measurement m_s for the pose s, and the difference between m and m_s is proportional to P(m | s). Then, if M_A denotes the conditional probability A and M_P m returns a function f(s) = κ P(m | s), we can implement update equations (1) and (2) as follows:

    let Bel_new = prob sample s' from Bel in
                       sample s from M_A (a, s') in
                       s                              (1)

    let Bel_new = bayes (M_P m) Bel                   (2)

Now we can implement pose tracking or global localization by specifying an initial probability distribution of the robot pose. In the case of pose tracking, it is usually a point-mass distribution or a Gaussian distribution; in the case of global localization, it is usually a uniform distribution over the open space in the map.
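Putting the pieces together, the two λ○ lines above correspond to the following OCaml sketch over sampling thunks. The pose and action types are hypothetical placeholders, and the bayes function is assumed to be the importance-sampling sketch given at the end of Section 6; this is an illustration, not CARMEN or the paper's robot code.

    (* Hypothetical pose and action types, for illustration only. *)
    type pose = { x : float; y : float; theta : float }
    type action = { dx : float; dy : float; dtheta : float }

    (* Bel(s) as a sampling thunk over poses. *)
    type bel = unit -> pose

    (* Update (1): sample an old pose from Bel, then a new pose from the action
       model A(. | a, s'), i.e., a noisy application of the odometry reading. *)
    let motion_update (action_model : action -> pose -> pose) (a : action) (bel : bel) : bel =
      fun () -> action_model a (bel ())

    (* Update (2): the Bayes operation with the perception model as the weight;
       bayes is the importance-sampling sketch from Section 6. *)
    let measurement_update (likelihood : 'm -> pose -> float) (m : 'm) (bel : bel) : bel =
      bayes (likelihood m) bel

    (* One full filter step for an (action, measurement) pair. *)
    let filter_step action_model likelihood (a, m) bel =
      measurement_update likelihood m (motion_update action_model a bel)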
7.2 People tracking

People tracking [20] is an extension of robot localization in that it estimates not only the robot pose but also the positions of people (or unmapped objects). As in robot localization, the robot can take an action to change its pose. Unlike in robot localization, however, the robot must categorize sensor readings in a measurement by deciding whether they are caused by objects in the map or by people. Those sensor readings that correspond with objects in the map are used to update the robot pose; the rest of the sensor readings are used to update the positions of people.

A simple approach is to maintain a probability distribution Bel(s, u) of robot pose s and positions u of people. While it works well for pose tracking, this approach is not a general solution for global localization. The reason is that sensor readings from people are correctly interpreted only with a correct hypothesis on the robot pose, but during global localization, there may be multiple incorrect hypotheses that lead to incorrect interpretations of those sensor readings. This means that during global localization, there exists a dependence between the robot pose and the positions of people, which is not captured by Bel(s, u). Hence we maintain a probability distribution Bel(s, P_s(u)) of robot pose s and probability distribution P_s(u) of positions u of people conditioned on robot pose s. P_s(u) captures the dependence between the robot pose and the positions of people. Bel(s, P_s(u)) can be thought of as a probability distribution over probability distributions.

As in robot localization, we update Bel(s, P_s(u)) with a Bayes filter. The difference from robot localization is that the state is a pair of s and P_s(u) and that the action model takes as input both an action a and a measurement m. We use update equations (3) and (4) in Figure 2 (which are obtained by replacing s by s, P_s(u) and a by a, m in update equations (1) and (2)). The action model A(s, P_s(u) | a, m, s', P_s'(u')) requires us to generate s, P_s(u) from s', P_s'(u') utilizing action a and measurement m. We generate first s and next P_s(u) according to equation (5) in Figure 2. We write the first Prob in equation (5) as A_robot(s | a, m, s', P_s'(u')). The second Prob in equation (5) indicates that we have to generate P_s(u) from P_s'(u') utilizing action a and measurement m, which is exactly a situation where we can use another Bayes filter. For this inner Bayes filter, we use update equations (6) and (7) in Figure 2. We write Prob in equation (6) as A_people(u | a, u', s, s'); we simplify Prob in equation (7) to Prob(m | u, s) because m does not depend on s' given s, and write it as P_people(m | u, s).

Figure 3 shows the implementation of people tracking in λ○. M_Arobot and M_Apeople denote the conditional probabilities A_robot and A_people, respectively. M_Ppeople m s returns a function f(u) = κ P_people(m | u, s) for a constant κ. In implementing update equation (4), we exploit the fact that P(m | s, P_s(u)) is the expectation of a function g(u) = P_people(m | u, s) with respect to P_s(u):

    P(m | s, P_s(u)) = ∫ P_people(m | u, s) P_s(u) du

We can further simplify the models used in the update equations. For instance, we can use A_robot(s | a, s') instead of A_robot(s | a, m, s', P_s'(u')) as in robot localization. In our implementation, we use A_people(u | u') on the assumption that the positions of people are not affected by the robot pose.

7.3 Robotic mapping

Robotic mapping [31] is the problem of building a map (or a spatial model) of the environment from sensor readings. Since measurements are a sequence of accurate local snapshots of the environment, a robot must simultaneously localize itself as it explores the environment so that it can correct and align the local snapshots to construct a global map. For this reason, robotic mapping is also referred to as simultaneous localization and mapping (or SLAM). It is one of the most difficult problems in robotics, and is under active research.

We assume that the environment consists of an unknown number of stationary landmarks. Then the goal is to estimate the positions of landmarks as well as the robot pose.

    Bel(s, P_s(u)) <- ∫ A(s, P_s(u) | a, m, s', P_s'(u')) Bel(s', P_s'(u')) d(s', P_s'(u'))                      (3)
    Bel(s, P_s(u)) <- η P(m | s, P_s(u)) Bel(s, P_s(u))                                                          (4)

    A(s, P_s(u) | a, m, s', P_s'(u')) = Prob(s | a, m, s', P_s'(u')) Prob(P_s(u) | a, m, s', P_s'(u'), s)
                                      = A_robot(s | a, m, s', P_s'(u')) Prob(P_s(u) | a, m, s', P_s'(u'), s)     (5)

    P_s(u) <- ∫ Prob(u | a, u', s, s') P_s'(u') du' = ∫ A_people(u | a, u', s, s') P_s'(u') du'                  (6)
    P_s(u) <- η' Prob(m | u, s, s') P_s(u) = η' P_people(m | u, s) P_s(u)                                        (7)

    Figure 2: Equations used in people tracking. (3) and (4) for the Bayes filter computing Bel(s, P_s(u)). (5) for decomposing the action model. (6) and (7) for the inner Bayes filter computing P_s(u).

    let Bel_new = prob
        sample (s', P_s'(u')) from Bel in
        sample s from M_Arobot (a, m, s', P_s'(u')) in                        (5)
        let P_s(u) = prob sample u' from P_s'(u') in
                          sample u from M_Apeople (a, u', s, s') in           (6)
                          u in
        let P_s(u) = bayes (M_Ppeople m s) P_s(u) in                          (7)
        (s, P_s(u))                                                           (3)

    let Bel_new = bayes (λ(s, P_s(u)). expectation (M_Ppeople m s) P_s(u)) Bel    (4)

    Figure 3: Implementation of people tracking in λ○.

The key observation is that we can think of landmarks as people who never move in an empty environment. It means that the problem is a special case of people tracking and we can use all the equations in Figure 2. Below we use the subscript landmark instead of people for the sake of clarity.

As in people tracking, we maintain a probability distribution Bel(s, P_s(u)) of robot pose s and probability distribution P_s(u) of positions u of landmarks conditioned on robot pose s. Since landmarks are stationary and A_landmark(u | a, u', s, s') is non-zero if and only if u = u', we can skip update equation (6) in implementing update equation (3). A_robot in equation (5) can use P_landmark(m | u', s) to test the likelihood of each new robot pose s with respect to old positions u' of landmarks, as in FastSLAM 2.0 [19]:

    A_robot(s | a, m, s', P_s'(u'))                                                  (8)
      = ∫ Prob(s | a, m, s', u') P_s'(u') du'
      = ∫ [Prob(s' | a, u') / Prob(m, s' | a, u')] Prob(m, s | s', a, u') P_s'(u') du'
      = η' ∫ Prob(m, s | s', a, u') P_s'(u') du'      where η' = Prob(s' | a, u') / Prob(m, s' | a, u')
      = η' ∫ Prob(s | s', a) Prob(m | s, u') P_s'(u') du'
      = η' A_robot(s | a, s') ∫ P_landmark(m | u', s) P_s'(u') du'

Given a and s', we implement the above equation with a Bayes operation on A_robot(. | a, s'). Figure 4 shows the implementation of robotic mapping in λ○. M_Arobot and M_Plandmark are interpreted in the same way as in people tracking. Since landmarks are stationary, we no longer need M_Alandmark. Numbers on the right-hand side show the corresponding equations in Figure 2.

7.4 Experimental results

We have implemented the above three systems in λ○. To test the robot localizer and the people tracker, we use a mobile robot Nomad XR4000 in Wean Hall at Carnegie Mellon University. We use CARMEN [18] for controlling the robot and collecting sensor readings. To test the mapper, we use a data set collected with an outdoor vehicle in Victoria Park, Sydney [1]. All the systems run on a Pentium III 500 MHz with 384 MBytes of memory.

We test the robot localizer for global localization with 8 runs in Wean Hall (each run takes a different path). For an initial probability distribution of the robot pose, we use a uniform distribution over the open space in the map. In a test experiment, it succeeds in localizing the robot on 5 runs and fails on 3 runs. As a comparison, the CARMEN robot localizer, which uses particle filters, succeeds on 3 runs and fails on 5 runs.
The people tracker uses the implementation in Figure 3 during global localization, but once it succeeds in localizing the robot and starts pose tracking, it maintains an independent probability distribution for each person in sight (because there is no longer a dependence between the robot pose and the positions of people).

We test the mapper with a data set in which the vehicle moves approximately meters (according to the odometry readings) in seconds. Since the vehicle is driving over uneven terrain, raw odometry readings are noisy and do not reflect the true path of the vehicle, in particular when the vehicle follows a loop. The mapper successfully closes the loop, building a map of the landmarks around the path. The experiment takes seconds.

Our finding is that the benefit of implementing probabilistic computations in λ○, such as readability and conciseness of code, outweighs its disadvantage in speed. As a comparison (although not particularly meaningful), our robot


Lecture Notes on Inductive Definitions Lecture Notes on Inductive Definitions 15-312: Foundations of Programming Languages Frank Pfenning Lecture 2 August 28, 2003 These supplementary notes review the notion of an inductive definition and give

More information

Discovery of Definition Patterns by Compressing Dictionary Sentences

Discovery of Definition Patterns by Compressing Dictionary Sentences Discovery of Defition Patterns by Compressg Dictionary Sentences Masatoshi Tsuchiya, Sadao Kurohashi, Satoshi Sato tsuchiya@pe.kuee.kyoto-u.ac.jp, kuro@kc.t.u-tokyo.ac.jp, sato@i.kyoto-u.ac.jp Graduate

More information

Lecture Notes on Heyting Arithmetic

Lecture Notes on Heyting Arithmetic Lecture Notes on Heyting Arithmetic 15-317: Constructive Logic Frank Pfenning Lecture 8 September 21, 2017 1 Introduction In this lecture we discuss the data type of natural numbers. They serve as a prototype

More information

Subtyping and Intersection Types Revisited

Subtyping and Intersection Types Revisited Subtyping and Intersection Types Revisited Frank Pfenning Carnegie Mellon University International Conference on Functional Programming (ICFP 07) Freiburg, Germany, October 1-3, 2007 Joint work with Rowan

More information

Notes on Inductive Sets and Induction

Notes on Inductive Sets and Induction Notes on Inductive Sets and Induction Finite Automata Theory and Formal Languages TMV027/DIT21 Ana Bove, March 15th 2018 Contents 1 Induction over the Natural Numbers 2 1.1 Mathematical (Simple) Induction........................

More information

Blame and Coercion: Together Again for the First Time

Blame and Coercion: Together Again for the First Time Blame and Coercion: Together Again for the First Time Supplementary Material Jeremy Siek Indiana University, USA jsiek@indiana.edu Peter Thiemann Universität Freiburg, Germany thiemann@informatik.uni-freiburg.de

More information

The Locally Nameless Representation

The Locally Nameless Representation Noname manuscript No. (will be inserted by the editor) The Locally Nameless Representation Arthur Charguéraud Received: date / Accepted: date Abstract This paper provides an introduction to the locally

More information

Computability Crib Sheet

Computability Crib Sheet Computer Science and Engineering, UCSD Winter 10 CSE 200: Computability and Complexity Instructor: Mihir Bellare Computability Crib Sheet January 3, 2010 Computability Crib Sheet This is a quick reference

More information

L11. EKF SLAM: PART I. NA568 Mobile Robotics: Methods & Algorithms

L11. EKF SLAM: PART I. NA568 Mobile Robotics: Methods & Algorithms L11. EKF SLAM: PART I NA568 Mobile Robotics: Methods & Algorithms Today s Topic EKF Feature-Based SLAM State Representation Process / Observation Models Landmark Initialization Robot-Landmark Correlation

More information

Lecture Notes on Data Abstraction

Lecture Notes on Data Abstraction Lecture Notes on Data Abstraction 15-814: Types and Programming Languages Frank Pfenning Lecture 14 October 23, 2018 1 Introduction Since we have moved from the pure λ-calculus to functional programming

More information

MATHEMATICAL IN COMPUTER SCIENCE

MATHEMATICAL IN COMPUTER SCIENCE MATHEMATICAL IN COMPUTER SCIENCE Hieu Vu, Ph.D. American University of Nigeria 98 LamidoZubairu Way Yola by Pass, P.M.B. 2250, Yola, Adamawa State, Nigeria Abstract Mathematics has been known for provg

More information

1.2 Valid Logical Equivalences as Tautologies

1.2 Valid Logical Equivalences as Tautologies 1.2. VALID LOGICAL EUIVALENCES AS TAUTOLOGIES 15 1.2 Valid Logical Equivalences as Tautologies 1.2.1 The Idea, and Defition, of Logical Equivalence In lay terms, two statements are logically equivalent

More information

First-Order Logic. Chapter Overview Syntax

First-Order Logic. Chapter Overview Syntax Chapter 10 First-Order Logic 10.1 Overview First-Order Logic is the calculus one usually has in mind when using the word logic. It is expressive enough for all of mathematics, except for those concepts

More information

Consequence Relations and Natural Deduction

Consequence Relations and Natural Deduction Consequence Relations and Natural Deduction Joshua D. Guttman Worcester Polytechnic Institute September 9, 2010 Contents 1 Consequence Relations 1 2 A Derivation System for Natural Deduction 3 3 Derivations

More information

Thermodynamics [ENGR 251] [Lyes KADEM 2007]

Thermodynamics [ENGR 251] [Lyes KADEM 2007] CHAPTER V The first law of thermodynamics is a representation of the conservation of energy. It is a necessary, but not a sufficient, condition for a process to occur. Indeed, no restriction is imposed

More information

A Monadic Analysis of Information Flow Security with Mutable State

A Monadic Analysis of Information Flow Security with Mutable State A Monadic Analysis of Information Flow Security with Mutable State Karl Crary Aleksey Kliger Frank Pfenning July 2003 CMU-CS-03-164 School of Computer Science Carnegie Mellon University Pittsburgh, PA

More information

Markov localization uses an explicit, discrete representation for the probability of all position in the state space.

Markov localization uses an explicit, discrete representation for the probability of all position in the state space. Markov Kalman Filter Localization Markov localization localization starting from any unknown position recovers from ambiguous situation. However, to update the probability of all positions within the whole

More information

DNA Watson-Crick complementarity in computer science

DNA Watson-Crick complementarity in computer science DNA Watson-Crick computer science Shnosuke Seki (Supervisor) Dr. Lila Kari Department Computer Science, University Western Ontario Ph.D. thesis defense public lecture August 9, 2010 Outle 1 2 Pseudo-primitivity

More information

Introduction to Mobile Robotics Bayes Filter Particle Filter and Monte Carlo Localization

Introduction to Mobile Robotics Bayes Filter Particle Filter and Monte Carlo Localization Introduction to Mobile Robotics Bayes Filter Particle Filter and Monte Carlo Localization Wolfram Burgard, Cyrill Stachniss, Maren Bennewitz, Kai Arras 1 Motivation Recall: Discrete filter Discretize the

More information

Static Program Analysis

Static Program Analysis Static Program Analysis Xiangyu Zhang The slides are compiled from Alex Aiken s Michael D. Ernst s Sorin Lerner s A Scary Outline Type-based analysis Data-flow analysis Abstract interpretation Theorem

More information

Particle Filters; Simultaneous Localization and Mapping (Intelligent Autonomous Robotics) Subramanian Ramamoorthy School of Informatics

Particle Filters; Simultaneous Localization and Mapping (Intelligent Autonomous Robotics) Subramanian Ramamoorthy School of Informatics Particle Filters; Simultaneous Localization and Mapping (Intelligent Autonomous Robotics) Subramanian Ramamoorthy School of Informatics Recap: State Estimation using Kalman Filter Project state and error

More information

A version of for which ZFC can not predict a single bit Robert M. Solovay May 16, Introduction In [2], Chaitin introd

A version of for which ZFC can not predict a single bit Robert M. Solovay May 16, Introduction In [2], Chaitin introd CDMTCS Research Report Series A Version of for which ZFC can not Predict a Single Bit Robert M. Solovay University of California at Berkeley CDMTCS-104 May 1999 Centre for Discrete Mathematics and Theoretical

More information

Type Systems Winter Semester 2006

Type Systems Winter Semester 2006 Type Systems Winter Semester 2006 Week 7 November 29 November 29, 2006 - version 1.0 Plan PREVIOUSLY: 1. type safety as progress and preservation 2. typed arithmetic expressions 3. simply typed lambda

More information

Higher Order Containers

Higher Order Containers Higher Order Containers Thorsten Altenkirch 1, Paul Levy 2, and Sam Staton 3 1 University of Nottingham 2 University of Birmingham 3 University of Cambridge Abstract. Containers are a semantic way to talk

More information

The Complexity of Forecast Testing

The Complexity of Forecast Testing The Complexity of Forecast Testing LANCE FORTNOW and RAKESH V. VOHRA Northwestern University Consider a weather forecaster predicting the probability of rain for the next day. We consider tests that given

More information

Lecture Notes on Sequent Calculus

Lecture Notes on Sequent Calculus Lecture Notes on Sequent Calculus 15-816: Modal Logic Frank Pfenning Lecture 8 February 9, 2010 1 Introduction In this lecture we present the sequent calculus and its theory. The sequent calculus was originally

More information

Classical Program Logics: Hoare Logic, Weakest Liberal Preconditions

Classical Program Logics: Hoare Logic, Weakest Liberal Preconditions Chapter 1 Classical Program Logics: Hoare Logic, Weakest Liberal Preconditions 1.1 The IMP Language IMP is a programming language with an extensible syntax that was developed in the late 1960s. We will

More information

Preliminary Results on Social Learning with Partial Observations

Preliminary Results on Social Learning with Partial Observations Preliminary Results on Social Learning with Partial Observations Ilan Lobel, Daron Acemoglu, Munther Dahleh and Asuman Ozdaglar ABSTRACT We study a model of social learning with partial observations from

More information

Lecture Notes on Classical Linear Logic

Lecture Notes on Classical Linear Logic Lecture Notes on Classical Linear Logic 15-816: Linear Logic Frank Pfenning Lecture 25 April 23, 2012 Originally, linear logic was conceived by Girard [Gir87] as a classical system, with one-sided sequents,

More information

Programming Languages Fall 2013

Programming Languages Fall 2013 Programming Languages Fall 2013 Lecture 11: Subtyping Prof Liang Huang huang@qccscunyedu Big Picture Part I: Fundamentals Functional Programming and Basic Haskell Proof by Induction and Structural Induction

More information

Formal Epistemology: Lecture Notes. Horacio Arló-Costa Carnegie Mellon University

Formal Epistemology: Lecture Notes. Horacio Arló-Costa Carnegie Mellon University Formal Epistemology: Lecture Notes Horacio Arló-Costa Carnegie Mellon University hcosta@andrew.cmu.edu Bayesian Epistemology Radical probabilism doesn t insists that probabilities be based on certainties;

More information

Outline. Introduction Delay-Locked Loops. DLL Applications Phase-Locked Loops PLL Applications

Outline. Introduction Delay-Locked Loops. DLL Applications Phase-Locked Loops PLL Applications Introduction Delay-Locked Loops DLL overview CMOS, refreshg your memory Buildg blocks: VCDL PD LF DLL analysis: Lear Nonlear Lock acquisition Charge sharg DLL Applications Phase-Locked Loops PLL Applications

More information

Mechanizing Optimization and Statistics

Mechanizing Optimization and Statistics Mechanizing Optimization and Statistics Ashish Agarwal Yale University IBM Programming Languages Day Watson Research Center July 29, 2010 Ashish Agarwal () 1 / 34 Acknowledgments Optimization: Ignacio

More information

Modeling and state estimation Examples State estimation Probabilities Bayes filter Particle filter. Modeling. CSC752 Autonomous Robotic Systems

Modeling and state estimation Examples State estimation Probabilities Bayes filter Particle filter. Modeling. CSC752 Autonomous Robotic Systems Modeling CSC752 Autonomous Robotic Systems Ubbo Visser Department of Computer Science University of Miami February 21, 2017 Outline 1 Modeling and state estimation 2 Examples 3 State estimation 4 Probabilities

More information

Homotopy Type Theory

Homotopy Type Theory Homotopy Type Theory Jeremy Avigad Department of Philosophy and Department of Mathematical Sciences Carnegie Mellon University February 2016 Homotopy Type Theory HoTT relies on a novel homotopy-theoretic

More information

Propositional Logic Arguments (5A) Young W. Lim 11/8/16

Propositional Logic Arguments (5A) Young W. Lim 11/8/16 Propositional Logic (5A) Young W. Lim Copyright (c) 2016 Young W. Lim. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version

More information

Markov Networks. l Like Bayes Nets. l Graphical model that describes joint probability distribution using tables (AKA potentials)

Markov Networks. l Like Bayes Nets. l Graphical model that describes joint probability distribution using tables (AKA potentials) Markov Networks l Like Bayes Nets l Graphical model that describes joint probability distribution using tables (AKA potentials) l Nodes are random variables l Labels are outcomes over the variables Markov

More information

COMPUTER SCIENCE TRIPOS

COMPUTER SCIENCE TRIPOS CST.2016.6.1 COMPUTER SCIENCE TRIPOS Part IB Thursday 2 June 2016 1.30 to 4.30 COMPUTER SCIENCE Paper 6 Answer five questions. Submit the answers in five separate bundles, each with its own cover sheet.

More information

Lecture Notes: Axiomatic Semantics and Hoare-style Verification

Lecture Notes: Axiomatic Semantics and Hoare-style Verification Lecture Notes: Axiomatic Semantics and Hoare-style Verification 17-355/17-665/17-819O: Program Analysis (Spring 2018) Claire Le Goues and Jonathan Aldrich clegoues@cs.cmu.edu, aldrich@cs.cmu.edu It has

More information

Notes from Yesterday s Discussion. Big Picture. CIS 500 Software Foundations Fall November 1. Some lessons.

Notes from Yesterday s  Discussion. Big Picture. CIS 500 Software Foundations Fall November 1. Some lessons. CIS 500 Software Foundations Fall 2006 Notes from Yesterday s Email Discussion November 1 Some lessons This is generally a crunch-time in the semester Slow down a little and give people a chance to catch

More information

AUTOMOTIVE ENVIRONMENT SENSORS

AUTOMOTIVE ENVIRONMENT SENSORS AUTOMOTIVE ENVIRONMENT SENSORS Lecture 5. Localization BME KÖZLEKEDÉSMÉRNÖKI ÉS JÁRMŰMÉRNÖKI KAR 32708-2/2017/INTFIN SZÁMÚ EMMI ÁLTAL TÁMOGATOTT TANANYAG Related concepts Concepts related to vehicles moving

More information

CSE 311: Foundations of Computing I Autumn 2014 Practice Final: Section X. Closed book, closed notes, no cell phones, no calculators.

CSE 311: Foundations of Computing I Autumn 2014 Practice Final: Section X. Closed book, closed notes, no cell phones, no calculators. CSE 311: Foundations of Computing I Autumn 014 Practice Final: Section X YY ZZ Name: UW ID: Instructions: Closed book, closed notes, no cell phones, no calculators. You have 110 minutes to complete the

More information

Efficient Heuristics for Two-Echelon Spare Parts Inventory Systems with An Aggregate Mean Waiting Time Constraint Per Local Warehouse

Efficient Heuristics for Two-Echelon Spare Parts Inventory Systems with An Aggregate Mean Waiting Time Constraint Per Local Warehouse Efficient Heuristics for Two-Echelon Spare Parts Inventory Systems with An Aggregate Mean Waitg Time Constrat Per Local Warehouse Hartanto Wong a, Bram Kranenburg b, Geert-Jan van Houtum b*, Dirk Cattrysse

More information

Applied Logic. Lecture 1 - Propositional logic. Marcin Szczuka. Institute of Informatics, The University of Warsaw

Applied Logic. Lecture 1 - Propositional logic. Marcin Szczuka. Institute of Informatics, The University of Warsaw Applied Logic Lecture 1 - Propositional logic Marcin Szczuka Institute of Informatics, The University of Warsaw Monographic lecture, Spring semester 2017/2018 Marcin Szczuka (MIMUW) Applied Logic 2018

More information

1. Introduction. Joshua Dunfield 31 May 2000 Thesis: Semi-Automatic Verification of Purely Functional Programs

1. Introduction. Joshua Dunfield 31 May 2000 Thesis: Semi-Automatic Verification of Purely Functional Programs Joshua Dunfield (jcd@cs.pdx.edu) 31 May 2000 Thesis: Semi-Automatic Verification of Purely Functional Programs 1. Introduction Sce the 1960s, computer scientists have been terested the verification of

More information

Interface for module Ptset

Interface for module Ptset Interface for module Ptset 1 1 Interface for module Ptset 1. Sets of tegers implemented as Patricia trees. The followg signature is exactly Set.S with type elt = t, with the same specifications. This is

More information

Preference, Choice and Utility

Preference, Choice and Utility Preference, Choice and Utility Eric Pacuit January 2, 205 Relations Suppose that X is a non-empty set. The set X X is the cross-product of X with itself. That is, it is the set of all pairs of elements

More information

PROBABILISTIC REASONING SYSTEMS

PROBABILISTIC REASONING SYSTEMS PROBABILISTIC REASONING SYSTEMS In which we explain how to build reasoning systems that use network models to reason with uncertainty according to the laws of probability theory. Outline Knowledge in uncertain

More information

Markov Networks. l Like Bayes Nets. l Graph model that describes joint probability distribution using tables (AKA potentials)

Markov Networks. l Like Bayes Nets. l Graph model that describes joint probability distribution using tables (AKA potentials) Markov Networks l Like Bayes Nets l Graph model that describes joint probability distribution using tables (AKA potentials) l Nodes are random variables l Labels are outcomes over the variables Markov

More information

Discrete Mathematics for CS Fall 2003 Wagner Lecture 3. Strong induction

Discrete Mathematics for CS Fall 2003 Wagner Lecture 3. Strong induction CS 70 Discrete Mathematics for CS Fall 2003 Wagner Lecture 3 This lecture covers further variants of induction, including strong induction and the closely related wellordering axiom. We then apply these

More information

Supplementary Notes on Inductive Definitions

Supplementary Notes on Inductive Definitions Supplementary Notes on Inductive Definitions 15-312: Foundations of Programming Languages Frank Pfenning Lecture 2 August 29, 2002 These supplementary notes review the notion of an inductive definition

More information

Random variables. DS GA 1002 Probability and Statistics for Data Science.

Random variables. DS GA 1002 Probability and Statistics for Data Science. Random variables DS GA 1002 Probability and Statistics for Data Science http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall17 Carlos Fernandez-Granda Motivation Random variables model numerical quantities

More information

High-Level Small-Step Operational Semantics for Transactions (Technical Companion)

High-Level Small-Step Operational Semantics for Transactions (Technical Companion) High-Level Small-Step Operational Semantics for Transactions (Technical Companion) Katherine F. Moore, Dan Grossman July 15, 2007 Abstract This document is the technical companion to our POPL 08 submission

More information

Accelerating Data Regeneration for Distributed Storage Systems with Heterogeneous Link Capacities

Accelerating Data Regeneration for Distributed Storage Systems with Heterogeneous Link Capacities Acceleratg Data Regeneration for Distributed Storage Systems with Heterogeneous Lk Capacities Yan Wang, Xunrui Y, Dongsheng Wei, X Wang, Member, IEEE, Yucheng He, Member, IEEE arxiv:6030563v [csdc] 6 Mar

More information

Introduction to Artificial Intelligence (AI)

Introduction to Artificial Intelligence (AI) Introduction to Artificial Intelligence (AI) Computer Science cpsc502, Lecture 9 Oct, 11, 2011 Slide credit Approx. Inference : S. Thrun, P, Norvig, D. Klein CPSC 502, Lecture 9 Slide 1 Today Oct 11 Bayesian

More information

Analog Circuit Verification by Statistical Model Checking

Analog Circuit Verification by Statistical Model Checking 16 th Asian Pacific Design Automation ti Conference Analog Circuit Verification by Statistical Model Checking Author Ying-Chih Wang (Presenter) Anvesh Komuravelli Paolo Zuliani Edmund M. Clarke Date 2011/1/26

More information

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler

Complexity Theory VU , SS The Polynomial Hierarchy. Reinhard Pichler Complexity Theory Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität Wien 15 May, 2018 Reinhard

More information

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181.

Outline. Complexity Theory EXACT TSP. The Class DP. Definition. Problem EXACT TSP. Complexity of EXACT TSP. Proposition VU 181. Complexity Theory Complexity Theory Outline Complexity Theory VU 181.142, SS 2018 6. The Polynomial Hierarchy Reinhard Pichler Institut für Informationssysteme Arbeitsbereich DBAI Technische Universität

More information

Value Range Propagation

Value Range Propagation Value Range Propagation LLVM Adam Wiggs Patryk Zadarnowski {awiggs,patrykz}@cse.unsw.edu.au 17 June 2003 University of New South Wales Sydney MOTIVATION The Problem: X Layout of basic blocks LLVM is bra-dead.

More information

Robotics. Mobile Robotics. Marc Toussaint U Stuttgart

Robotics. Mobile Robotics. Marc Toussaint U Stuttgart Robotics Mobile Robotics State estimation, Bayes filter, odometry, particle filter, Kalman filter, SLAM, joint Bayes filter, EKF SLAM, particle SLAM, graph-based SLAM Marc Toussaint U Stuttgart DARPA Grand

More information

Logical specification and coinduction

Logical specification and coinduction Logical specification and coinduction Robert J. Simmons Carnegie Mellon University, Pittsburgh PA 15213, USA, rjsimmon@cs.cmu.edu, WWW home page: http://www.cs.cmu.edu/ rjsimmon Abstract. The notion of

More information

Mathematical Formulation of Our Example

Mathematical Formulation of Our Example Mathematical Formulation of Our Example We define two binary random variables: open and, where is light on or light off. Our question is: What is? Computer Vision 1 Combining Evidence Suppose our robot

More information

Type Soundness for Path Polymorphism

Type Soundness for Path Polymorphism Type Soundness for Path Polymorphism Andrés Ezequiel Viso 1,2 joint work with Eduardo Bonelli 1,3 and Mauricio Ayala-Rincón 4 1 CONICET, Argentina 2 Departamento de Computación, FCEyN, UBA, Argentina 3

More information

Modal Logic as a Basis for Distributed Computation

Modal Logic as a Basis for Distributed Computation Modal Logic as a Basis for Distributed Computation Jonathan Moody 1 jwmoody@cs.cmu.edu October 2003 CMU-CS-03-194 School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 1 This material

More information

Dynamic Noninterference Analysis Using Context Sensitive Static Analyses. Gurvan Le Guernic July 14, 2007

Dynamic Noninterference Analysis Using Context Sensitive Static Analyses. Gurvan Le Guernic July 14, 2007 Dynamic Noninterference Analysis Using Context Sensitive Static Analyses Gurvan Le Guernic July 14, 2007 1 Abstract This report proposes a dynamic noninterference analysis for sequential programs. This

More information

Kleene realizability and negative translations

Kleene realizability and negative translations Q E I U G I C Kleene realizability and negative translations Alexandre Miquel O P. D E. L Ō A U D E L A R April 21th, IMERL Plan 1 Kleene realizability 2 Gödel-Gentzen negative translation 3 Lafont-Reus-Streicher

More information

Automata Theory and Formal Grammars: Lecture 1

Automata Theory and Formal Grammars: Lecture 1 Automata Theory and Formal Grammars: Lecture 1 Sets, Languages, Logic Automata Theory and Formal Grammars: Lecture 1 p.1/72 Sets, Languages, Logic Today Course Overview Administrivia Sets Theory (Review?)

More information

2D Image Processing (Extended) Kalman and particle filter

2D Image Processing (Extended) Kalman and particle filter 2D Image Processing (Extended) Kalman and particle filter Prof. Didier Stricker Dr. Gabriele Bleser Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz

More information

CS 6110 Lecture 28 Subtype Polymorphism 3 April 2013 Lecturer: Andrew Myers

CS 6110 Lecture 28 Subtype Polymorphism 3 April 2013 Lecturer: Andrew Myers CS 6110 Lecture 28 Subtype Polymorphism 3 April 2013 Lecturer: Andrew Myers 1 Introduction In this lecture, we make an attempt to extend the typed λ-calculus for it to support more advanced data structures

More information

Models of Computation,

Models of Computation, Models of Computation, 2010 1 Induction We use a lot of inductive techniques in this course, both to give definitions and to prove facts about our semantics So, it s worth taking a little while to set

More information

Complete Partial Orders, PCF, and Control

Complete Partial Orders, PCF, and Control Complete Partial Orders, PCF, and Control Andrew R. Plummer TIE Report Draft January 2010 Abstract We develop the theory of directed complete partial orders and complete partial orders. We review the syntax

More information

Ordinary Interactive Small-Step Algorithms, III

Ordinary Interactive Small-Step Algorithms, III Ordinary Interactive Small-Step Algorithms, III ANDREAS BLASS University of Michigan and YURI GUREVICH Microsoft Research This is the third in a series of three papers extending the proof of the Abstract

More information

Programming Languages and Types

Programming Languages and Types Programming Languages and Types Klaus Ostermann based on slides by Benjamin C. Pierce Subtyping Motivation With our usual typing rule for applications the term is not well typed. ` t 1 : T 11!T 12 ` t

More information

CONVERGENCE OF FOURIER SERIES

CONVERGENCE OF FOURIER SERIES CONVERGENCE OF FOURIER SERIES SOPHIA UE Abstract. The subject of Fourier analysis starts as physicist and mathematician Joseph Fourier s conviction that an arbitrary function f could be given as a series.

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

EKF and SLAM. McGill COMP 765 Sept 18 th, 2017

EKF and SLAM. McGill COMP 765 Sept 18 th, 2017 EKF and SLAM McGill COMP 765 Sept 18 th, 2017 Outline News and information Instructions for paper presentations Continue on Kalman filter: EKF and extension to mapping Example of a real mapping system:

More information

Test Generation for Designs with Multiple Clocks

Test Generation for Designs with Multiple Clocks 39.1 Test Generation for Designs with Multiple Clocks Xijiang Lin and Rob Thompson Mentor Graphics Corp. 8005 SW Boeckman Rd. Wilsonville, OR 97070 Abstract To improve the system performance, designs with

More information

UNIVERSITY OF NOTTINGHAM. Discussion Papers in Economics CONSISTENT FIRM CHOICE AND THE THEORY OF SUPPLY

UNIVERSITY OF NOTTINGHAM. Discussion Papers in Economics CONSISTENT FIRM CHOICE AND THE THEORY OF SUPPLY UNIVERSITY OF NOTTINGHAM Discussion Papers in Economics Discussion Paper No. 0/06 CONSISTENT FIRM CHOICE AND THE THEORY OF SUPPLY by Indraneel Dasgupta July 00 DP 0/06 ISSN 1360-438 UNIVERSITY OF NOTTINGHAM

More information