INF 4140: Models of Concurrency Series 3


Universitetet i Oslo
Institutt for Informatikk
PMA
Olaf Owe, Martin Steffen, Toktam Ramezani

INF 4140: Models of Concurrency                         Høst 2016
Series 3                                                14. 9. 2016

Topic: Semaphores (Exercises with hints for solution)

Issued: 14. 9. 2016

Exercise 1 (CS with coordinator)  In the critical section protocols in the book, every process executes the same algorithm; these are symmetric solutions. It is also possible to solve the problem using a coordinator process. In particular, when a regular process CS[i] wants to enter its critical section, it tells the coordinator, then waits for the coordinator to grant permission. Assume there are n processes numbered 1 to n. Develop entry and exit protocols for the regular processes and code for the coordinator process. Use flags and await-statements for synchronization. The solution must also work if regular processes terminate outside the critical section.

Solution: [of Exercise 1]  As usual for mutex and critical sections, the focus is on the entry protocol; the exit protocol is more or less simple. Again, the skeleton of the processes (and of the coordinator) is as always: a big while-loop. Now, the presence of a coordinator makes the design actually pretty simple: each process, in its entry protocol, has to go through two stages: apply for entry to the CS, and wait to be granted access. For expressing the wish, the protocol uses, of course, shared variables. The easiest way to arrange that, it seems, is a separate, private "channel" between each participant and the coordinator. The arrangement indeed works a bit like channel communication, where the await-statement is used for synchronization. Furthermore, the communication (or at least the signalling/synchronization) between the coordinator and each participant can be seen as bi-directional: each process communicates to the coordinator its intention to enter, and then waits until the coordinator gives the green light.
The back-channel go is shared among all processes, and it is the identity stored in it which indicates who is allowed to continue. In the exit protocol, the exiting process does not need to indicate its identity, as there is only one process that exits.

The stage of expressing one's wish to enter is, of course, present in many CS protocols. Often, the tricky part of CS/mutex is: given a number of processes that want to enter (for instance, having expressed their wish using a particular shared variable as flag), decide which one is allowed to enter (without, of course, making the basic error of letting more than one enter, or introducing a deadlock, and in particular without being unfair, i.e., while still guaranteeing progress/liveness). Breaking the symmetry becomes much easier with a coordinator.[1]

[1] The general problem of breaking the symmetry in a set of symmetric processes such that they agree on a common solution is known as distributed consensus and is notoriously complex. Here, in a way, the specific consensus to agree upon is: when more than one process wishes to enter, find a consensus about who is the one and only one allowed to enter. An additional complication of CS is that this is done repeatedly, and for dealing with this repetition, fairness becomes important.
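As a concrete, runnable illustration of this scheme, here is a small Python sketch. All names (`trying`, `go`, `cs_process`, `coordinator`) and the bounded number of rounds are our own demo choices, not from the book, and the await-statements are realized as busy-waiting loops:

```python
import threading

N = 3                      # number of regular processes (demo choice)
ROUNDS = 2                 # CS entries per process, to keep the demo finite
trying = [0] * (N + 1)     # trying[i] = 1: process i applies for entry (index 0 unused)
go = 0                     # identity of the process allowed in (0: nobody)
done = [True] + [False] * N   # done[i]: process i has terminated
events = []                # ("enter"/"exit", i) pairs, to inspect mutual exclusion

def cs_process(i):
    global go
    for _ in range(ROUNDS):
        trying[i] = 1              # entry protocol: indicate intention
        while go != i:             # <await (go = i)>, realized as a spin loop
            pass
        events.append(("enter", i))    # critical section
        events.append(("exit", i))
        go = 0                     # exit protocol: hand control back
    done[i] = True

def coordinator():
    global go
    while not all(done):
        for i in range(1, N + 1):  # round-robin check, hence fair
            if trying[i] == 1:
                trying[i] = 0
                go = i             # grant entry to process i
                while go != 0:     # <await (go = 0)>
                    pass

threads = [threading.Thread(target=cs_process, args=(i,)) for i in range(1, N + 1)]
threads.append(threading.Thread(target=coordinator))
for t in threads:
    t.start()
for t in threads:
    t.join()

# events strictly alternates enter/exit: no two processes were inside at once
print(events[0][0], len(events))
```

The round-robin for-loop in the coordinator is what makes the sketch fair: a waiting process is granted entry at the latest after one full pass over the array.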

Each process indicates its wish to enter by communicating its identity to the coordinator, and waits. The coordinator picks one (acting thereby like a scheduler), unblocks the picked process, and the whole thing continues like that.

    int try[1:n] = ([n] 0);
    int go = 0;

    process CS[i = 1 to n] {
      while (true) {
        try[i] := 1;        // indicate intention
        <await (go = i)>;   // wait for being granted access
        critical section;
        go := 0;
      }
    }

    process coordinator {
      while (true) {
        for [i = 1 to n] {  // round robin
          if (try[i] = 1) {
            try[i] := 0;
            go := i;
            <await (go = 0)>;
          }
        }
      }
    }

This solution is fair. The coordinator checks the processes in a round-robin manner, which means that a given process can be passed by at most n - 1 other processes. By this, a given user process is guaranteed eventual entry. Many symmetric protocols do the following: when trying to enter and seeing a conflict, in that someone else wants to enter too, they retract their wish temporarily (perhaps to avoid deadlock) and try again. That makes liveness (eventual entry) tricky, in particular if we don't have strong fairness. Here it's pretty simple: a process indicates its wish and suspends, i.e., it never retracts its wish, and the coordinator's loop picks it up at some point, at the latest after going through the whole array of processes.

Exercise 2 (Semaphores to pass control)  Given the following routine:

    print() {

      process P1 {
        write(line1); write(line2);
      }

      process P2 {
        write(line3); write(line4);
      }

      process P3 {
        write(line5); write(line6);
      }

    }

1. How many different outputs could this program produce? Explain your reasoning.

2. Add semaphores to the program so that the six lines of output are printed in the order 1, 2, 3, 4, 5, 6. Declare and initialize any semaphores you need and add P and V operations to the above processes.

Solution:  Perhaps one should first explain what "control" is: passing control here means that one process signals a semaphore to let the next one proceed.

1. For n processes doing m atomic statements each, the number of different runs is

       (n*m)! / (m!)^n

   In this case, n = 3 and m = 2, which gives

       (3*2)! / (2!)^3 = 720 / 8 = 90.

2. Analysing the problem should be quite straightforward: we must have P2 wait until P1 is finished (terminated), and the same for P3, which must wait until P2 is finished. There are therefore 2 signalling or synchronization needs, and it is therefore natural to use two semaphores:

    print() {
      sem go2 = 0, go3 = 0;

      process P1 {
        write(line1); write(line2);
        V(go2);
      }

      process P2 {
        P(go2);
        write(line3); write(line4);
        V(go3);
      }

      process P3 {
        P(go3);
        write(line5); write(line6);
      }
    }

Exercise 3 (Semaphores for synchronization)  Several processes share a resource that has U units. Processes request one unit at a time, but may release several. The routines request and release are atomic operations, as shown below.

    int free := U;

    request():            # <await (free > 0) free := free - 1;>

    release(int number):  # <free := free + number;>

Develop implementations of request and release. Use semaphores for synchronization. Be sure to declare and initialize additional variables you may need.

Solution:  Solution due to Andrews. It uses split binary semaphores; see also the split-binary-semaphore solution at the end of the semaphore slides (split semaphores for readers/writers). The binary semaphore is split into enter and delay. It's a sanity check to see that their sum always stays between 0 and 1 (the invariant of a split binary semaphore). The solution is perhaps more complex than the text of the exercise required.

    int free := U;
    sem enter := 1, delay := 0;
    int cnt = 0;                     # pending requests

    request() {                      # <await (free > 0) free := free - 1;>
      P(enter);
      if (free = 0) {                # no units left
        cnt := cnt + 1;              # one more request pending
        V(enter);                    # release CS
        P(delay);                    # wait until units are released
        cnt := cnt - 1;
      }
      free := free - 1;              # take one unit
      if (free > 0 and cnt > 0) {    # more than one unit was released
        V(delay);
      }
      else V(enter);                 # no units left or no waiting users
    }

    release(int number) {            # <free := free + number;>
      P(enter);
      free := free + number;
      if (cnt > 0) {                 # some process is waiting
        V(delay);                    # give it priority
      }
      else V(enter);                 # else: open for new users
    }

The counter cnt records how many requests are pending. This solution uses split binary semaphores as described for the readers/writers problem in Andrews Section 4.4.3. Requests are delayed if no units are left, and in release we do V(delay) only if we know that there actually is at least one delayed request.

A simpler solution, still fulfilling the specification (?), might be to declare a counting semaphore holding the number of free resources:

    sem free = U;

The two routines could then be written as:

    request() {                      # take one unit
      P(free);
    }

    release(int number) {            # free `number` units
      for [i = number to 1 by -1] { V(free); }
    }

However, this is probably not what the author had in mind, since release is no longer atomic.

Exercise 4 (Termination, deadlock, interleaving)  Consider the following program:

    int x = 0, y = 0, z = 0;
    sem lock1 = 1, lock2 = 1;

    process foo {        process bar {
      z := z + 2;          P(lock2);
      P(lock1);            y := y + 1;
      x := x + 2;          P(lock1);
      P(lock2);            x := x + 1;
      V(lock1);            V(lock1);
      y := y + 2;          V(lock2);
      V(lock2);            z := z + 1;
    }                    }

1. This program might deadlock. How?

2. What are the possible final values of x, y, and z in the deadlock state?

3. What are the possible final values of x, y, and z if the program terminates? (Remember that an assignment z := z + 1 consists of two atomic operations on z.)

Solution:

1. Deadlock: Both processes execute their first P operation; then both will be stuck trying to execute the second. It's the classical situation (as with the symmetric philosophers) where the lock-taking of two (or more) processes goes in different orders. If one has to look for deadlock, that's where one has to look: P-operations in different orders. The V-operations are irrelevant for deadlocks. Of course, the processes may also not deadlock.

2. The state at the deadlocked point is (x, y, z) = (2, 1, 2). Up to that point (if the program reaches the deadlock), there have been no races, and therefore the result is unique.

3. In case of (proper) termination (i.e., without running into the deadlock), the final values are (x, y, z) = (3, 3, {1, 2, 3}). Note that z is unprotected; remember that the assignments to z are thereby not atomic. The assignments to the other two variables are protected by the mutex locks (binary semaphores), even if the locks are not very smartly arranged (deadlock). Therefore those assignments themselves are atomic, and since + is commutative, the order in which the processes do their atomic increments does not matter.

Exercise 5 (Fetch-and-add ([?, Exercise 4.3]))  Implement P and V with fetch-and-add (FA).
The behavior of fetch-and-add is given as follows:

    FA(var, incr):
      <int tmp := var;
       var := var + incr;
       return (tmp);>

Note: incr may be a negative integer, which is then added.

Side remark: fetch-and-add is, in some HW architectures, an atomic instruction (for instance, variants in x86 architectures). Atomic instructions such as fetch-and-add, which are more powerful than simple loads and stores (= reading and writing), are offered in the instruction set with the purpose of allowing efficient implementation of synchronization primitives in operating systems running on that platform (for instance, semaphore operations). Fetch-and-add is only one example of HW-supported atomic synchronization operations.

Solution: [of Exercise 5]

    P(s) {                        # <await (s > 0) s := s - 1;>
      while (s <= 0) skip;        # spin
      while (FA(s, -1) <= 0) {    # decrement + check
        FA(s, 1);                 # undo
        while (s <= 0) skip;
      }
    }

    V(s) {                        # <s := s + 1;>
      FA(s, 1);
    }

The first thing to observe is: FA has no synchronization power in the sense that it cannot delay a process, which of course is needed for the P-operation. Therefore, we have to do that ourselves. The standard way to do that is spinning. That's done, for a start, in the first loop: we let P-processes spin while the semaphore value is not positive. When s is increased by a V-process, several P-processes may leave the first loop and enter the second. Independent of whether the FA-test succeeds or not, the test always decrements, so it's slightly different from an <await (s > 0) s := s - 1;>. If the test does not succeed for a given process, the process must increment s again: an unsuccessful decrement in P must be followed by an increment in order to maintain the correct value of the semaphore.

The following version of P is incorrect:

    P(s):
      while (FA(s, -1) <= 0) {    #1
        FA(s, 1);
      }

The latter code with only one loop may lead to a livelock, illustrated by an example. Consider the case where semaphores are used to synchronize access to a critical section (which is one very standard application of P and V). Furthermore, consider the case where s is 0, i.e., one process is inside the critical section. Assume now that there are two processes executing P(s), and that both of these processes are waiting to execute the loop body (i.e., both are at #1). The value of s must therefore be -2. Now the process inside the critical section wants to leave and executes V(s). Thus, the value of s is increased to -1. Now, one of the waiting processes may execute FA(s,1) (setting s to 0) and immediately proceed with executing the test, setting s back to -1. Now both processes are again at #1 and the value of s is again -1. The processes may continue to alternate on executing the loop, leading to a livelock.

Exercise 6 (Precedence graph ([?, Exercise 4.4a]))  Use semaphores to implement the shown precedence/dependence graph:

    T1 -> T2 -> T4 -> T5
    T1 ----> T3 ----> T5

Solution: [of Exercise 6]

    sem FIN1 = 0, FIN2 = 0, FIN3 = 0, FIN4 = 0;

    process T1 {    process T2 {    process T3 {
      task 1;         P(FIN1);        P(FIN1);
      V(FIN1);        task 2;         task 3;
      V(FIN1);        V(FIN2);        V(FIN3);
    }               }               }

    process T4 {    process T5 {
      P(FIN2);        P(FIN3);
      task 4;         P(FIN4);
      V(FIN4);        task 5;
    }               }

The trick is: to signal 2 times.[2]

[2] NB: it works almost like Petri nets...

Exercise 7 (Implementing await ([?, Exercise 4.13]))  Consider the following piece of code, which is intended as an implementation of the await-statement.

    sem e := 1, d := 0;   # entry and delay semaphores
    int nd := 0;          # delay counter

    P(e);
    while (B = false) {
      nd := nd + 1;
      V(e);
      P(d);
      P(e);
    };
    S;                    # protected statement
    while (nd > 0) {
      nd := nd - 1;
      V(d);
    };
    V(e);

1. Is the code executed atomically?

2. Is it deadlock free?

3. Does the code guarantee that B is true before S is executed?

Solution: [of Exercise 7]

1. Atomic? Yes, only one process can hold e. For question 3: this also guarantees that B holds when S starts execution, since the entry semaphore e is not released between the (last) testing of B and the execution of S.

2. In order for a deadlock to occur, all processes must halt on a P operation (that's a general fact: V's don't block). This can only happen if all processes are delayed on P(d). In order for this to happen, all processes must test the condition B and see that it is false.

Hence, the algorithm will not deadlock unless there is a possibility of deadlock in the surrounding program. (Note that even though the while loop after S is executed as many times as there are delayed processes, we are not guaranteed that all these processes will execute P(d) before new processes capture the entry semaphore e at the first line of the implementation. However, this does not affect the deadlock argument, since the waiting processes cannot affect the truth of B.)

Exercise 8 (Exchange function ([?, Exercise 4.29]))  Implement the exchange function. The usage is exchange(value). It is supposed to communicate with another process which calls the same function (with some value v2), and the function here is supposed to return v2, and the other one symmetrically. Exchanging 2 values thus requires a form of rendez-vous.

Solution: [of Exercise 8]  A rendez-vous we had also in the lecture about monitors. Exchange as intended here may be compared with an exchange (in a sequential setting) of the contents of 2 variables. To do that, one needs an auxiliary memory cell. For swapping x1 and x2, one would typically do

    buf := x1;
    x1 := x2;
    x2 := buf;

Now, the setting is slightly different. The variables x1 and x2 are local to the 2 processes (they correspond to the input parameters value1 and value2 of the two procedure calls). That implies we cannot do x1 := x2 directly. To communicate the values in both directions, we need to write/read the buffer two times (if we want to use just one buffer). To avoid overwriting a value prematurely, we need further auxiliary variables, here tmp1 and tmp2 (in the code below, each is a local variable called tmp):

    buf := x1;
    tmp2 := buf;
    buf := x2;
    tmp1 := buf;

Now, the code is still sequential; what we need, therefore, is to make it parallel and synchronize properly. Since the two processes are uncoordinated, we do not know which one is first; what we need to program, therefore, is a rendez-vous.

    sem continue := 0;        # waiting semaphore for the first process
    sem e := 1;               # mutex semaphore
    bool waiting := false;    # true when one process is waiting
    int buffer;               # communication buffer between pairs of processes

    int exchange(int value) {
      int tmp;
      P(e);
      if (!waiting) {         # first process (in a pair)
        buffer := value;      # store own value       (1)
        waiting := true;
        V(e);
        P(continue);
        tmp := buffer;        # read other value      (4)
        waiting := false;
        V(e);                 # signal that a new exchange is possible
      }
      else {                  # second process (in a pair)
        tmp := buffer;        # read other value      (2)
        buffer := value;      # store own value       (3)
        V(continue);
      }
      return tmp;
    }

We need two semaphores in order to make sure that the second arriving process will signal the one that is waiting. The synchronization ensures that the numbered statements are executed in the order 1, 2, 3, 4. The local variable tmp is needed in order to avoid interference, since the V operations must be done before the return statement.

Exercise 9 (Request and release ([?, Exercise 4.34a]))  Request and release, sharing two printers. The request should return the identity of a free printer, if available (otherwise block). The identity of the freed printer is given as argument to the release procedure.

Solution: [of Exercise 9]

    sem s = 2;                     # counting semaphore: free printers
    sem e = 1;                     # mutex
    bool t[1:2] = (true, true);    # available

    int request() {
      int tmp;
      P(s);
      P(e);
      if (t[1]) then tmp := 1; else tmp := 2;
      t[tmp] := false;             # taken
      V(e);
      return tmp;
    }

    release(int i) {
      P(e);
      t[i] := true;
      V(e);
      V(s);
    }

Note that the semaphore s is initialized to 2, since we have 2 printers; basically, we use a counting semaphore. It is not strictly needed to protect the assignment to t[i] in release. (Why?) A solution using two boolean variables is also ok; it leads to a test in release. Note that Andrews sometimes uses the notation procedure request(int &tmp); the declaration of tmp and the return statement are then not needed.

Exercise 10 (Bear and honeybees ([?, Exercise 4.36]))  Program the synchronization problem of one bear and n honeybees.

Solution: [of Exercise 10]  The problem is, in a way, a variant of the producer/consumer problem. It is perhaps best to start thinking about the bees, because there is more than one of them. The bees have to be under mutex, but the bees and the bear are also mutually exclusive. When a bee is finished, it can therefore either signal[3] another bee to enter, or the bear. That can be done by a split semaphore. Here H denotes the number of portions that fill the pot.

    sem e = 1;          # mutex semaphore
    sem eat = 0;        # raised when the bear can eat
                        # e and eat form a split binary semaphore
    int portions = 0;   # number of available portions in the pot

    process bee[i = 1 to n] {
      while (true) {
        # collect honey
        P(e);
        portions = portions + 1;
        if (portions == H) {
          V(eat);
        } else {
          V(e);
        }
      }
    }

    process bear {
      while (true) {
        P(eat);
        # eat
        portions = 0;
        V(e);
      }
    }

[3] We don't have signal and wait here, though.
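To make the split-semaphore baton passing concrete, here is a small runnable Python sketch of the scheme above. The constants H, N_BEES, and POTS_TO_EAT, the `stop` shutdown flag, and all names are our own demo additions; the P/V structure follows the pseudocode:

```python
import threading

H = 4                         # portions that fill the pot (demo choice)
N_BEES = 3
POTS_TO_EAT = 2               # demo runs until the bear has eaten twice

e = threading.Semaphore(1)    # mutex; e and eat form a split binary semaphore
eat = threading.Semaphore(0)  # signalled when the pot is full
portions = 0                  # number of available portions in the pot
meals = []                    # pot contents observed by the bear at each meal
stop = threading.Event()      # demo-only: lets the bee threads terminate

def bee():
    global portions
    while not stop.is_set():
        e.acquire()                  # P(e)
        if stop.is_set():            # demo-only shutdown check
            e.release()
            break
        portions += 1                # deposit one portion of honey
        if portions == H:
            eat.release()            # V(eat): pot full, pass the baton to the bear
        else:
            e.release()              # V(e): let another bee in

def bear():
    global portions
    for k in range(POTS_TO_EAT):
        eat.acquire()                # P(eat): sleep until the pot is full
        meals.append(portions)       # eat the whole pot
        portions = 0
        if k == POTS_TO_EAT - 1:
            stop.set()               # demo-only: tell the bees to finish
        e.release()                  # V(e): hand the mutex back to the bees

threads = [threading.Thread(target=bee) for _ in range(N_BEES)]
threads.append(threading.Thread(target=bear))
for t in threads:
    t.start()
for t in threads:
    t.join()

print(meals)    # -> [4, 4]: the bear always sees exactly H portions
```

Note the baton passing: the bee that fills the pot does V(eat) without releasing e, so the mutex is handed to the bear, which hands it back to the bees with V(e) after emptying the pot; that is why the bear deterministically sees H portions.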