CS 6112 (Fall 2011) Foundations of Concurrency

29 November 2011
Scribe: Jean-Baptiste Jeannin

1 Readings

The readings for today were:

- Eventually Consistent Transactions, by Sebastian Burckhardt, Manuel Fähndrich, Daan Leijen and Mooly Sagiv, available at http://research.microsoft.com/pubs/155638/msr-tr-2011-117.pdf
- Transactional Boosting: A Methodology for Highly-Concurrent Transactional Objects, by Maurice Herlihy and Eric Koskinen, available at http://www.cl.cam.ac.uk/~ejk39/papers/boosting-ppopp08.pdf
- Coarse-Grained Transactions, by Eric Koskinen, Matthew Parkinson and Maurice Herlihy, available at http://www.cl.cam.ac.uk/~ejk39/papers/cgt.pdf

Most of the discussion was based on this last paper.

2 Skew heaps

2.1 Definitions

Let us start with an example: parallelizing insertions in skew heaps. A skew heap (http://en.wikipedia.org/wiki/Skew_heap) is a heap data structure implemented as a binary tree. It is defined recursively:

- a heap with only one element is a skew heap;
- the result of skew-merging two skew heaps is also a skew heap.

Let us just describe how to insert one element e into a skew heap h with root p, whose subtrees are l on the left and r on the right; the general merge of two heaps can be found on Wikipedia:

        p
       / \
      l   r

- Inserting e into an empty heap just results in the heap containing only that element.
- If e > p, the resulting heap has root p, its left subtree is the result of recursively inserting e into r, and its right subtree is l. Note how we swapped r and l in the process:

              p
             / \
    insert(e,r) l

- If e < p, the resulting heap has e as its root, its left subtree is the result of inserting p into r, and its right subtree is l:

              e
             / \
    insert(p,r) l
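These two cases translate directly into a recursive insert. Here is a minimal sketch in Java of the version described in class (the class and method names are ours, and the general merge is omitted):

// Minimal skew-heap insertion sketch, following the two cases above
// (the in-class version, not Wikipedia's). Names are ours.
class SkewHeap {
    static class Node {
        int key;
        Node left, right;
        Node(int key, Node left, Node right) {
            this.key = key; this.left = left; this.right = right;
        }
    }

    // Insert e into the heap rooted at h, returning the new root.
    static Node insert(int e, Node h) {
        if (h == null)                  // empty heap: just the new element
            return new Node(e, null, null);
        if (e > h.key)                  // keep root p, recurse into old right, swap subtrees
            return new Node(h.key, insert(e, h.right), h.left);
        else                            // new root e, push old root into old right, swap
            return new Node(e, insert(h.key, h.right), h.left);
    }
}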

Remark: What we said in class does not seem to match the Wikipedia article. For Wikipedia, the resulting heap has root e, with the original heap h as its left subtree and an empty right subtree:

      e
     /
    p
   / \
  l   r

Now suppose we have the following heap and two concurrent threads, A adding 2 to it and B adding 6 to it:

      0
     / \
    1   3
   / \
  4   5

Several solutions are possible.

2.2 Global lock

The first solution is to use a global lock on the heap. This is not so nice because it is slow: you have to wait for one insertion to complete before the next one can start.
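For comparison, the global-lock solution is a one-liner around the sequential code. A sketch, reusing the hypothetical SkewHeap class from above:

// Global-lock solution: one monitor serializes all insertions.
class LockedSkewHeap {
    private SkewHeap.Node root = null;

    // Every thread contends on the same lock, so insertions never overlap.
    synchronized void insert(int e) {
        root = SkewHeap.insert(e, root);
    }
}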

2.3 Local locks

The second solution is to use a fine-grained locking procedure: use one lock per node, and acquire and release the locks as we traverse the tree. Here is an example of an execution using such a procedure:

Thread A                            Thread B
Acquire lock for node 0
Swap children of 0

      0
     / \
    3   1
       / \
      4   5

Release lock for node 0
Acquire lock for node 1             Acquire lock for node 0
Insert 2 into 1                     Swap left and right children of 0

      0
     / \
    1   3
   / \
  2   4
  |
  5

                                    Insert 6 into 3

      0
     / \
    1   3
   / \   \
  2   4   6
  |
  5

This performs relatively well, but it is hard to write and get right.
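The lock-per-node discipline in this trace is hand-over-hand locking (lock coupling): acquire the child's lock before releasing the parent's, so no other thread can overtake you on the path. Below is a rough iterative sketch of this idea for the in-class insertion rule; the mutable Node, the rootLock, and the overall structure are our assumptions, not code from the lecture:

import java.util.concurrent.locks.ReentrantLock;

// Hand-over-hand (lock coupling) skew-heap insertion sketch: one lock per
// node, held only while that node is being updated.
class ConcurrentSkewHeap {
    static class Node {
        int key;
        Node left, right;
        final ReentrantLock lock = new ReentrantLock();
        Node(int key) { this.key = key; }
    }

    private Node root;
    private final ReentrantLock rootLock = new ReentrantLock();

    void insert(int e) {
        rootLock.lock();
        if (root == null) { root = new Node(e); rootLock.unlock(); return; }
        Node cur = root;
        cur.lock.lock();                 // lock the root node before
        rootLock.unlock();               // giving up the root pointer lock
        while (true) {
            if (e < cur.key) {           // e becomes this node's key;
                int t = cur.key;         // the old key sinks further down
                cur.key = e;
                e = t;
            }
            // Both cases of the rule swap the subtrees and insert the
            // remaining element into what used to be the right subtree.
            Node tmp = cur.left;
            cur.left = cur.right;
            cur.right = tmp;
            if (cur.left == null) {      // old right subtree was empty: attach leaf
                cur.left = new Node(e);
                cur.lock.unlock();
                return;
            }
            Node child = cur.left;
            child.lock.lock();           // hand-over-hand: take the child,
            cur.lock.unlock();           // then release the parent
            cur = child;
        }
    }
}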

2.4 Transactions

We could also use transactions, but in the previous example, here is what would happen:

Thread A         Thread B
Read 0           Read 0
Write 0{L, R}    Write 0{L, R}

We will get a conflict when committing, and one of the threads will have to be rolled back. This hinders concurrency, even though it is efficient for the programmer.

Here is one more example: imagine trying to insert a few single-digit numbers and 6000 into a huge heap. The insert of a single-digit number has a short log (it only touches a few nodes near the root), and will always try to commit first. The insert of 6000, on the other hand, will always get rolled back. If we try to run it in parallel with several insertions of single-digit numbers, it will only actually happen at the very end.

3 Coarse-Grained Transactions

We now focus on Maurice Herlihy and Eric Koskinen's work, and especially their POPL '10 paper Coarse-Grained Transactions. The basic idea is to have a transaction mechanism that is convenient for programmers but does not generate the big conflicts that normal transactions do.

Example 1

    global SkipList Set;

    T1 : atomic {
        Set.add(5);
        Set.remove(3);
    }

    T2 : atomic {
        if (Set.contains(6)) {
            Set.remove(7);
        }
    }

In this example, using a fine-grained synchronization technique would produce a conflict, because adding 5 and removing 7 will read or write the same memory locations. However, at a coarse-grained level, these operations commute with each other as soon as their arguments are distinct.

We will consider two execution semantics:

- Pessimistic execution semantics (section 3.1 of the paper) prevent conflicts by checking whether there will be a conflict before doing anything. They are called pessimistic because they act as if there will always be a conflict.
- Optimistic execution semantics (section 3.2 of the paper) detect conflicts afterwards: they copy the initial state, and roll back if there is a conflict when committing. They are called optimistic because they act hoping that there will not be a conflict.
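To make the commutativity claim concrete, one can check it at the abstract level of the set: running add(5) and remove(7) in either order from the same start state yields the same set and the same return values. A small self-contained check (TreeSet stands in for the skip list; this example is ours, not the paper's):

import java.util.TreeSet;

// Checks that Set.add(5) and Set.remove(7) commute on the abstract state:
// both orders give the same final set and the same return values.
public class CommuteCheck {
    public static void main(String[] args) {
        TreeSet<Integer> s1 = new TreeSet<>(java.util.List.of(3, 6, 7));
        TreeSet<Integer> s2 = new TreeSet<>(s1);

        boolean a1 = s1.add(5), r1 = s1.remove(7);   // add(5) then remove(7)
        boolean r2 = s2.remove(7), a2 = s2.add(5);   // remove(7) then add(5)

        System.out.println(s1.equals(s2) && a1 == a2 && r1 == r2); // prints true
    }
}

A memory-level conflict detector cannot see this: inside the skip list, the two operations may well touch the same nodes.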

3.1 Syntax

The language is given by:

    c ::= c; c | e := e | if b then c else c
    s ::= s; s | c | beg; t; end
    t ::= t; t | c | o.m

beg; t; end is the syntax for a transaction; o.m represents a call of method m on a shared object o. Note that the syntax purposely makes nested transactions impossible.

The semantics is a labeled transition system, with transitions

    \langle T, \sigma_{sh} \rangle \xrightarrow{\alpha}_A \langle T', \sigma_{sh}' \rangle

where:

- T is a set of transactions;
- \sigma_{sh} is a global state, i.e., a finite map from object ids to objects;
- \alpha \in \mathbb{N} \times (o.m \cup \{beg, end, cmt\}) is a label, pairing a transaction identifier with an action; beg is short for begin, and cmt is short for commit;
- the subscript A just means atomic.

Each transaction is a tuple \langle \tau, p \rangle, where \tau is a transaction identifier and p is the program text of the transaction. The abridged atomic semantics is given by:

    \langle \{\langle \tau, c; c' \rangle\} \uplus T, \sigma_{sh} \rangle
      \longrightarrow_A
    \langle \{\langle \tau, c' \rangle\} \uplus T, [[c]]\sigma_{sh} \rangle

    \frac{\tau \text{ fresh} \quad
          \langle \{\langle \tau, t \rangle\} \uplus T, \sigma_{sh} \rangle
            \xrightarrow{\alpha_1} \cdots \xrightarrow{\alpha_n}
          \langle \{\langle \tau, skip \rangle\} \uplus T, \sigma_{sh}' \rangle \quad
          \alpha_1 = (\tau, beg) \quad
          \alpha_n \in \{(\tau, end), (\tau, cmt)\}}
         {\langle \{\langle \cdot, beg; t; end \rangle\} \uplus T, \sigma_{sh} \rangle
            \longrightarrow_A
          \langle \{\langle \cdot, skip \rangle\} \uplus T, \sigma_{sh}' \rangle}

3.2 Pessimistic semantics (section 3.1 of the paper)

A transaction is now a tuple \langle \tau, s, M, \sigma_\tau \rangle, where M is a sequence of object methods and \sigma_\tau is a local state. All the arrows are indexed with a P, for pessimistic.

For a local command s (an assignment or a conditional):

    \langle \{\langle \tau, s; s', M, \sigma_\tau \rangle\} \uplus T, \sigma_{sh} \rangle
      \longrightarrow_P
    \langle \{\langle \tau, s', M, [[s]]\sigma_\tau \rangle\} \uplus T, \sigma_{sh} \rangle

Beginning a transaction:

    \frac{\tau \text{ fresh}}
         {\langle \{\langle \cdot, beg; s, M, \sigma_\tau \rangle\} \uplus T, \sigma_{sh} \rangle
            \xrightarrow{(\tau, beg)}_P
          \langle \{\langle \tau, s, [\,], \sigma_\tau \rangle\} \uplus T, \sigma_{sh} \rangle}

Method call:

    \frac{\{o.m\} \triangleleft meths(T)}
         {\langle \{\langle \tau, x := o.m; s, M, \sigma_\tau \rangle\} \uplus T, \sigma_{sh} \rangle
            \xrightarrow{(\tau, o.m)}_P
          \langle \{\langle \tau, s, M :: \langle x := o.m; s, \sigma_\tau, o.m \rangle,
                    \sigma_\tau[x \mapsto rv([[o]]\sigma_{sh}.m)] \rangle\} \uplus T,
                  \sigma_{sh}[o \mapsto [[o]]\sigma_{sh}.m] \rangle}

The difficult part is that invoking o.m has a global effect: each invocation of a method on o returns both a new object and a return value (accessed through rv). Under transactional semantics, the programmer is guaranteed that the actual execution will be equivalent to some serial execution. To get this, we need to know that it was safe to call o.m at that point; this is the reason for introducing the mover concept. We write \{o.m\} \triangleleft meths(T) for "o.m is a left mover of the methods of T".

Definition 1. o.m \triangleleft p.n if and only if

    \{\sigma'' \mid \sigma \xrightarrow{p.n} \sigma' \xrightarrow{o.m} \sigma''\}
      \subseteq
    \{\sigma'' \mid \sigma \xrightarrow{o.m} \sigma' \xrightarrow{p.n} \sigma''\}

i.e., if the set of states obtained by running p.n followed by o.m is included in the set of states obtained by running o.m followed by p.n (left commutativity).

One problem with this semantics is that we can get deadlocks, with two transactions each having sent messages that are not movers with respect to the other's. A solution is to run an o.m that has an inverse, so that it can be rolled back, and then make some progress from there.
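A rough executable picture of the pessimistic discipline, specialized to a shared integer set: a call may proceed only if it commutes with every method already performed by another live transaction, and is refused otherwise (the caller would block or retry). The class names and the keys-differ commutativity test are our simplifications of the mover check, not the paper's formalism:

import java.util.*;

// Pessimistic coarse-grained execution for a shared integer set (sketch).
// Calls on distinct keys always commute; same-key calls are conservatively
// treated as conflicting.
class PessimisticSet {
    private final Set<Integer> state = new HashSet<>();
    // keys touched by each live (uncommitted) transaction
    private final Map<Integer, List<Integer>> liveKeys = new HashMap<>();

    synchronized void begin(int txn) { liveKeys.put(txn, new ArrayList<>()); }

    // Try to run add(key) on behalf of txn. Returning false means the call
    // is not a mover w.r.t. some live transaction; the caller must retry.
    synchronized boolean tryAdd(int txn, int key) {
        for (Map.Entry<Integer, List<Integer>> e : liveKeys.entrySet())
            if (e.getKey() != txn && e.getValue().contains(key))
                return false;
        liveKeys.get(txn).add(key);
        state.add(key);
        return true;
    }

    // Commuting operations need no undo: committing just forgets the record.
    synchronized void commit(int txn) { liveKeys.remove(txn); }
}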

3.3 Optimistic semantics (section 3.2 of the paper)

A transaction is now a tuple \langle \tau, s, \sigma_\tau, \tilde\sigma_\tau, l_\tau \rangle, where l_\tau is a log: a list of the messages that have already been sent. All the arrows are indexed with an O, for optimistic.

Beginning a transaction:

    \frac{\tau \text{ fresh}}
         {\langle \{\langle \cdot, beg; s, \sigma_\tau, \tilde\sigma_\tau, [\,] \rangle\} \uplus T, \sigma_{sh}, l_{sh} \rangle
            \xrightarrow{(\tau, beg)}_O
          \langle \{\langle \tau, s, snap(\sigma_\tau, \sigma_{sh}), \tilde\sigma_\tau[stmt \mapsto beg; s], [\,] \rangle\} \uplus T, \sigma_{sh}, l_{sh} \rangle}

Method call:

    \langle \{\langle \tau, x := o.m; s, \sigma_\tau, \tilde\sigma_\tau, l_\tau \rangle\} \uplus T, \sigma_{sh}, l_{sh} \rangle
      \xrightarrow{(\tau, o.m)}_O
    \langle \{\langle \tau, s, \sigma_\tau[o \mapsto ([[o]]\sigma_{sh}).m,\ x \mapsto rv(([[o]]\sigma_{sh}).m)], \tilde\sigma_\tau, l_\tau :: \langle o.m \rangle \rangle\} \uplus T, \sigma_{sh}, l_{sh} \rangle

Commit:

    \frac{\forall (\tau_{cmt}, l_\tau') \in l_{sh}.\ \tau_{cmt} > \tau \implies l_\tau' \triangleleft l_\tau}
         {\langle \{\langle \tau, end; s, \sigma_\tau, \tilde\sigma_\tau, l_\tau \rangle\} \uplus T, \sigma_{sh}, l_{sh} \rangle
            \xrightarrow{(\tau, cmt)}_O
          \langle \{\langle \tau, s, zap(\sigma_\tau), zap(\tilde\sigma_\tau), [\,] \rangle\} \uplus T, merge(\sigma_{sh}, l_\tau), l_{sh} :: (fresh(\tau_{cmt}), l_\tau) \rangle}

where the merge operation goes through the log and applies the operations it finds, in order, to the global state. In the last rule, the premise \forall (\tau_{cmt}, l_\tau') \in l_{sh}.\ \tau_{cmt} > \tau \implies l_\tau' \triangleleft l_\tau makes sure the end operation is sensible.

3.4 Conclusion

Section 5 of the paper develops a trace semantics for these programs, defines serial executions, and shows that both semantics are equivalent to some serialized execution.
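As a closing illustration, here is a rough executable picture of the optimistic scheme from section 3.3, again specialized to an integer set: each transaction runs against a snapshot and records a redo log; commit validates the log against logs committed since the snapshot was taken (keys must differ, our stand-in for the mover relation), then merges by replaying the log onto the shared state. All names here are ours:

import java.util.*;

// Optimistic coarse-grained execution for a shared integer set (sketch).
class OptimisticSet {
    private final Set<Integer> shared = new HashSet<>();
    private final List<List<Integer>> committedLogs = new ArrayList<>(); // plays the role of l_sh

    class Txn {
        private final Set<Integer> snapshot;                 // local copy, as in snap(...)
        private final List<Integer> log = new ArrayList<>(); // redo log l_tau: keys added
        private final int startPoint;                        // how much of l_sh we have seen

        Txn() {
            synchronized (OptimisticSet.this) {
                snapshot = new HashSet<>(shared);
                startPoint = committedLogs.size();
            }
        }

        // Local effect on the snapshot plus an entry in the redo log.
        void add(int key) { snapshot.add(key); log.add(key); }
        boolean contains(int key) { return snapshot.contains(key); }

        // Validate against logs committed since our snapshot, then merge.
        boolean commit() {
            synchronized (OptimisticSet.this) {
                for (int i = startPoint; i < committedLogs.size(); i++)
                    for (int other : committedLogs.get(i))
                        if (log.contains(other))
                            return false;                    // conflict: caller rolls back
                for (int key : log) shared.add(key);         // merge: replay the log in order
                committedLogs.add(new ArrayList<>(log));
                return true;
            }
        }
    }
}

A failed commit corresponds to a rollback: the Txn object is simply discarded and the transaction retried from a fresh snapshot.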