14 October 2009, BASICS'09, Shanghai

On the expressive power of synchronization primitives in the π-calculus

Catuscia Palamidessi, INRIA Saclay, France
Slide 2: Contents

Focus on the π-calculus

- The π-calculus with mixed choice (π)
- Expressive power of the π-calculus, and problems with its fully distributed implementation
- The asynchronous π-calculus (πa)
- The π hierarchy
- Towards a randomized fully distributed implementation of π: the probabilistic asynchronous π-calculus (πpa)
- Encoding π into πpa using a generalized dining philosophers algorithm
Slide 3: The π-calculus

Proposed by [Milner, Parrow, Walker '92] as a formal language to reason about concurrent systems:

- Concurrent: several processes running in parallel
- Asynchronous cooperation: every process proceeds at its own speed
- Synchronous communication: handshaking via input and output prefixes
- Mixed guarded choice: input and output guards, as in CSP and CCS. The implementation of guarded choice is also known as the binary interaction problem
- Dynamic generation of communication channels
- Scope extrusion: a channel name can be communicated, and its scope extended to include the recipient

[Diagram: processes P, Q, R with channels x, y, z, illustrating scope extrusion]
Slide 4: π, the π-calculus with mixed choice

Syntax

  g ::= x(y) | x̄y | τ      prefixes (input, output, silent)

  P ::= Σi gi.Pi           mixed guarded choice
     |  P | P              parallel
     |  (x)P               new name
     |  rec A P            recursion
     |  A                  process name
Slide 5: Operational semantics

Transition system: P --g-> Q

Rules

  Choice   Σi gi.Pi --gi-> Pi

             P --x̄y-> P'
  Open     __________________   (x ≠ y)
           (y)P --x̄(y)-> P'
Slide 6: Operational semantics (continued)

           P --x(y)-> P'    Q --x̄z-> Q'
  Com      ______________________________
             P | Q --τ-> P'[z/y] | Q'

           P --x(y)-> P'    Q --x̄(z)-> Q'
  Close    ________________________________
           P | Q --τ-> (z)(P'[z/y] | Q')

               P --g-> P'
  Par      __________________   (fn(Q) ∩ bn(g) = ∅)
           P | Q --g-> P' | Q
Slide 7: Features which make π very expressive, and cause difficulty in its distributed implementation

- (Mixed) guarded choice: symmetric solutions to certain distributed problems involving distributed agreement
- Link mobility: network reconfiguration. It allows expressing higher-order features (e.g. the λ-calculus) in a natural way. In combination with guarded choice, it allows solving more distributed problems than those solvable by guarded choice alone
Slide 8: The expressive power of π

Example of distributed agreement: the leader election problem in a symmetric network. Two symmetric processes must elect one of them as the leader, in a finite amount of time, and the two processes must agree.

[Diagram: processes P and Q connected by channels x and y]

A symmetric and fully distributed solution using mixed guarded choice:

  x.Pwins + ȳ.Ploses | y.Qwins + x̄.Qloses  --τ->  Pwins | Qloses
  x.Pwins + ȳ.Ploses | y.Qwins + x̄.Qloses  --τ->  Ploses | Qwins
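The essential point is that a mixed choice commits atomically: exactly one of the two enabled handshakes fires, so the processes always agree. The following small Python sketch (ours, not part of the talk) simulates that race, with the scheduler modelled as a coin flip:

```python
import random

def elect_leader(rng: random.Random) -> tuple[str, str]:
    """Simulate x.Pwins + y^.Ploses | y.Qwins + x^.Qloses.

    Each process offers an input on its own channel and an output on the
    other's. The scheduler (a coin flip here) picks which of the two
    enabled handshakes fires; the choice commits atomically, so exactly
    one process wins and both agree on the outcome."""
    if rng.random() < 0.5:
        return ("P_wins", "Q_loses")   # communication on channel x
    else:
        return ("P_loses", "Q_wins")   # communication on channel y

rng = random.Random(0)
outcomes = {elect_leader(rng) for _ in range(1000)}
# By symmetry both outcomes are reachable, and each run elects one leader.
assert outcomes == {("P_wins", "Q_loses"), ("P_loses", "Q_wins")}
```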
Slide 9: Example of a network where the leader election problem cannot be solved by guarded choice alone

For the following network there is no (fully distributed and symmetric) solution in CCS, or in CSP.

[Diagram: the network]
Slide 10: A solution to the leader election problem in π

[Diagram: the same network, with each process ending as winner or loser]
Slide 11: Approaches to the implementation of guarded choice in the literature

- [Parrow and Sjödin 92], [Knabe 93], [Tsai and Bagrodia 94]: asymmetric solutions based on introducing an order on processes
- Other asymmetric solutions based on differentiating the initial state
- Plenty of centralized solutions
- [Joung and Smolka 98] proposed a randomized solution to the multiway interaction problem, but it works only under an assumption of partial synchrony among processes

In this talk we propose an implementation that is fully distributed, symmetric, and uses no synchrony hypotheses.
Slide 12: State of the art

Formalisms able to express distributed agreement are difficult to implement in a distributed fashion. For this reason, the field has evolved towards variants of π which retain mobility but have no guarded choice.

One example of such a variant is the asynchronous π-calculus, proposed by [Honda, Tokoro '91] and [Boudol '92] (asynchronous = asynchronous communication).
Slide 13: πa, the asynchronous π-calculus, version of [Amadio, Castellani, Sangiorgi '97]

Syntax

  g ::= x(y) | τ           prefixes (input, silent)

  P ::= Σi gi.Pi           input guarded choice
     |  x̄y                 output action
     |  P | P              parallel
     |  (x)P               new name
     |  rec A P            recursion
     |  A                  process name
Slide 14: Characteristics of πa

- Asynchronous communication: we cannot write a continuation after an output, i.e. no x̄y.P, only x̄y | P, so P proceeds without waiting for the actual delivery of the message
- Input-guarded choice: only input prefixes are allowed in a choice
- Note: the original asynchronous π-calculus did not contain a choice construct. However, the version presented here was shown by [Nestmann and Pierce '96] to be equivalent to the original asynchronous π-calculus
- It can be implemented in a fully distributed fashion (see for instance the PiLib project of Odersky's group)
Slide 15: The π hierarchy

We can relate the various sublanguages of π by using encodings preserving certain observable properties of runs. Here we consider as observable properties the presence or absence of certain actions.

The existence of such an encoding [[ ]] : L → L' is represented by L → L'.
Slide 16: The π hierarchy

[Diagram: the hierarchy relating mixed choice, separate choice, the asynchronous π-calculus, input-guarded choice, output prefix, internal mobility, and value-passing CCS]
Slide 17: The π hierarchy

[The same diagram, annotated with the encodings and separation results of Nestmann and of Palamidessi]
Slide 18: Separation result 1

It is not possible to encode mixed-choice π into separate-choice π
- homomorphically wrt |:  [[ P | Q ]] = [[ P ]] | [[ Q ]]
- preserving 2 distinct observable actions

This result is based on a sort of confluence property which holds for the separate-choice π but not for the mixed-choice π. The proof proceeds by showing that the separate-choice π cannot solve the leader election problem for 2 nodes.
Slide 19: Separation result 2

It is not possible to encode mixed-choice π into value-passing CCS or π with internal mobility
- homomorphically wrt |:  [[ P | Q ]] = [[ P ]] | [[ Q ]]
- without introducing extra channels
- preserving 2 distinct observable actions

The proof proceeds by showing that these languages cannot solve the leader election problem for certain kinds of graphs.
Slide 20: Towards a fully distributed implementation of π

The results of the previous pages show that a fully distributed implementation of π must necessarily be randomized.

A two-step approach:

  π  --[[ ]]-->  probabilistic asynchronous π (πpa)  --<< >>-->  distributed machine

Advantage: the correctness proof is easier, since [[ ]] (which is the difficult part of the implementation) is between two similar languages.
Slide 21: πpa, the probabilistic asynchronous π-calculus

Syntax

  g ::= x(y) | τ           prefixes (input, silent)

  P ::= Σi pi gi.Pi        probabilistic input guarded choice, with Σi pi = 1
     |  x̄y                 output action
     |  P | P              parallel
     |  (x)P               new name
     |  rec A P            recursion
     |  A                  process name
Slide 22: The operational semantics of πpa

[Diagram: a probabilistic automaton whose branches carry probabilities 1/2, 1/3, 2/3, ...]

Based on the probabilistic automata of Segala and Lynch. Distinction between:
- nondeterministic behavior (choice of the scheduler), and
- probabilistic behavior (choice of the process)

Scheduling policy: the scheduler chooses the group of transitions.
Execution: the process chooses probabilistically the transition within the group.
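This two-level structure (a nondeterministic scheduler picks a group, then the process picks a transition within the group probabilistically) can be sketched as follows. The toy automaton and all names here are ours, for illustration only; this is not Palamidessi's actual semantics of πpa.

```python
import random

# A probabilistic automaton in the style of Segala and Lynch: each state
# offers several *groups* of transitions; a group is a list of
# (action, probability, target) triples whose probabilities sum to 1.
automaton = {
    "s0": [
        [("a", 0.5, "s1"), ("b", 0.5, "s2")],   # group 1
        [("c", 1.0, "s3")],                     # group 2
    ],
    "s1": [], "s2": [], "s3": [],
}

def step(state, scheduler, rng):
    """One step: the scheduler chooses a group (nondeterministic),
    then the process samples a transition within it (probabilistic)."""
    groups = automaton[state]
    if not groups:
        return None
    group = scheduler(state, groups)      # scheduler's choice
    r, acc = rng.random(), 0.0
    for action, p, target in group:       # process's probabilistic choice
        acc += p
        if r < acc:
            return action, target
    return group[-1][0], group[-1][2]     # guard against float round-off

first_group = lambda state, groups: groups[0]   # one possible scheduler
action, target = step("s0", first_group, random.Random(1))
assert action in ("a", "b") and target in ("s1", "s2")
```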
Slide 23: The operational semantics of πpa

Representation of a group of transitions:

  P { --gi->pi Pi }i

Rules

  Choice   Σi pi gi.Pi { --gi->pi Pi }i

             P { --gi->pi Pi }i
  Par      ________________________
           P | Q { --gi->pi Pi | Q }i
Slide 24: The operational semantics of πpa (continued)

  Out      x̄y { --x̄y->1 Nil }

           P { --xi(yi)->pi Pi }i    Q { --x̄z->1 Q' }
  Com      ___________________________________________________________
           P | Q { --τ->pi Pi[z/yi] | Q' }xi=x ∪ { --xi(yi)->pi Pi | Q }xi≠x

              P { --xi(yi)->pi Pi }i
  Res      ______________________________   qi = pi / Σxj≠x pj  (renormalized)
           (x)P { --xi(yi)->qi (x)Pi }xi≠x
Slide 25: Implementation of πpa

Compilation into a distributed machine:  << >> : πpa → DM

- Distributed:     << P | Q >> = << P >>.start() | << Q >>.start();
- Compositional:   << P op Q >> = << P >> jop << Q >>  for all op

Channels are buffers with test-and-set (synchronized) methods for input and output. The input-guarded choice selects probabilistically one of the channels with available data.
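A minimal sketch of such channel buffers, using Python threads. The class and function names are ours, and this is a simplification: in a real implementation the readiness test and the receive below would have to form a single atomic step.

```python
import random
import threading

class Channel:
    """A one-place buffer with atomic test-and-set style methods,
    loosely in the spirit of the distributed machine above."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def try_send(self, value):
        """Atomically deposit a value if the buffer is empty."""
        with self._lock:
            if self._value is None:
                self._value = value
                return True
            return False

    def try_receive(self):
        """Atomically remove and return the buffered value (or None)."""
        with self._lock:
            value, self._value = self._value, None
            return value

    def has_data(self):
        with self._lock:
            return self._value is not None

def probabilistic_input_choice(branches, rng):
    """Input-guarded choice: among the branches whose channel holds data,
    pick one with probability proportional to its weight (renormalized
    over the ready branches), and run its continuation on the value."""
    ready = [(p, ch, cont) for p, ch, cont in branches if ch.has_data()]
    if not ready:
        return None
    total = sum(p for p, _, _ in ready)
    r, acc = rng.uniform(0, total), 0.0
    for p, ch, cont in ready:
        acc += p
        if r <= acc:
            return cont(ch.try_receive())

x, y = Channel(), Channel()
assert x.try_send("hello")
choice = [(0.5, x, lambda v: ("got_x", v)),
          (0.5, y, lambda v: ("got_y", v))]
result = probabilistic_input_choice(choice, random.Random(2))
assert result == ("got_x", "hello")   # only x had data, so it was chosen
```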
Slide 26: Encoding π into πpa

[[ ]] : π → πpa

- Fully distributed:                       [[ P | Q ]] = [[ P ]] | [[ Q ]]
- Preserves the communication structure:   [[ σP ]] = σ[[ P ]]
- Compositional:                           [[ P op Q ]] = Cop[ [[ P ]], [[ Q ]] ]
- Correct wrt a notion of probabilistic testing semantics:
    P must O  iff  [[ P ]] must [[ O ]] with probability 1
Slide 27: Encoding π into πpa

Idea (from an idea of Uwe Nestmann):
- Every mixed choice is translated into a parallel composition of processes corresponding to the branches, plus a lock f
- The input processes compete to acquire both their own lock and the lock of the partner
- The input process which succeeds first establishes the communication; the other alternatives are discarded
- The problem is reduced to a dining philosophers problem: each lock is a fork, each input process is a philosopher which enters a competition to get its adjacent forks. The winners of the competition can synchronize, which corresponds to eating in the DP. There can be more than one winner
- Generalized DP: each fork can be adjacent to more than two philosophers

[Diagram: processes P, Q, R, S with their branches Pi, Qi, Ri, R'i, Si and locks f]
Slide 28: Dining philosophers, classic case

Each fork is shared by exactly two philosophers.
Slide 29: Dining philosophers, generalized case

Each fork can be shared by more than two philosophers.
Slide 30: Intended properties of the solution

- Deadlock freedom (aka progress): if there is a hungry philosopher, some philosopher will eventually eat
- Starvation freedom: every hungry philosopher will eventually eat (but we won't consider this property here)
- Robustness wrt a large class of schedulers: a scheduler decides who makes the next move, not necessarily in cooperation with the program, maybe even against it
- Fully distributed: no centralized control or memory
- Symmetric: all philosophers run the same code and are in the same initial state; the same holds for the forks
Slide 31: The dining philosophers, a brief history

- Problem proposed by Edsger Dijkstra in 1965 (the popular formulation is actually due to Tony Hoare)
- Many solutions had been proposed for the DP, but none of them satisfied all requirements
- In 1981, Lehmann and Rabin proved that there is no "deterministic" solution satisfying all requirements
- They proposed a randomized solution and proved that it satisfies all requirements. Progress holds in the probabilistic sense, i.e. with probability 1 a philosopher will eventually eat
- Meanwhile, Francez and Rodeh had come out in 1980 with a solution to the DP written in CSP
- The controversy was resolved by Lehmann and Rabin, who proved that CSP (with guarded choice) is not implementable in a distributed fashion (deterministically)
Slide 32: The algorithm of Lehmann and Rabin

1) think;
2) choose probabilistically first_fork in {left, right};
3) if not taken(first_fork) then take(first_fork) else goto 3;
4) if not taken(second_fork) then take(second_fork)
   else { release(first_fork); goto 2 };
5) eat;
6) release(second_fork);
7) release(first_fork);
8) goto 1
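The steps above can be sketched as a simulation in Python on a classic ring, with a random scheduler picking who moves next. The state-machine encoding and all helper names are ours; each philosopher's phases mirror steps 2, 3, and 4 of the algorithm.

```python
import random

def lehmann_rabin(n_phil, steps, rng):
    """Simulate the Lehmann-Rabin algorithm on a ring of n_phil
    philosophers (each fork shared by exactly two neighbours).
    Returns how many times each philosopher ate."""
    forks = [None] * n_phil               # holder of each fork, or None
    state = [("choose", None)] * n_phil   # (phase, first-fork side)
    meals = [0] * n_phil

    def fork_id(i, side):                 # philosopher i's left/right fork
        return i if side == "left" else (i + 1) % n_phil

    for _ in range(steps):
        i = rng.randrange(n_phil)         # the scheduler picks who moves
        phase, side = state[i]
        if phase == "choose":             # step 2: coin flip for first fork
            state[i] = ("first", rng.choice(["left", "right"]))
        elif phase == "first":            # step 3: busy-wait on first fork
            f = fork_id(i, side)
            if forks[f] is None:
                forks[f] = i
                state[i] = ("second", side)
        else:                             # step 4: try second fork ONCE
            other = "right" if side == "left" else "left"
            f = fork_id(i, other)
            if forks[f] is None:
                forks[f] = i
                meals[i] += 1             # step 5: eat
                forks[f] = None           # steps 6-7: release both forks
                forks[fork_id(i, side)] = None
            else:                         # give up first fork, retry step 2
                forks[fork_id(i, side)] = None
            state[i] = ("choose", None)

    return meals

meals = lehmann_rabin(5, 20000, random.Random(3))
assert all(m > 0 for m in meals)   # under a random scheduler, everyone eats
```

The key point, visible in the `else` branch of step 4, is that the second fork is tried only once: on failure the philosopher releases everything and flips the coin again, which is what breaks symmetric deadlocks with probability 1.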
Slide 33: Problems

Wrt our encoding goal, the algorithm of Lehmann and Rabin has two problems:
1. It only works for the classical case (not for the generalized one)
2. It works only for fair schedulers
Slide 34: Conditions on the graph

Theorem: The algorithm of Lehmann and Rabin is deadlock-free if and only if all cycles are pairwise disconnected.

There are essentially three ways in which two cycles can be connected:

[Diagram: the three kinds of connection between two cycles]
Slide 35: Proof of the theorem

If part) Each cycle can be considered separately. On each of them the classic algorithm is deadlock-free. Some additional care must be taken for the arcs that are not part of the cycle.

Only if part) By analysis of the three possible cases. They are all similar; we illustrate the first one.

[Diagram: forks in states "taken" and "committed"]
Slide 36: Proof of the theorem

- The initial situation has probability p > 0
- The scheduler forces the processes to loop
- Hence the system has a deadlock (livelock) with probability p

Note that this scheduler is not fair. However, we can define even a fair scheduler which induces an infinite loop with probability > 0. The idea is a scheduler that "gives up" after n attempts when the processes keep choosing the "wrong" fork, but increases (by a factor f) its "stubbornness" at every round. With a suitable choice of n and f, the probability of a loop is p/4.
Slide 37: Solution for the generalized DP

As we have seen, the algorithm of Lehmann and Rabin does not work on general graphs. However, it is easy to modify the algorithm so that it works in general.

The idea is to reduce the problem to the pairwise-disconnected-cycles case:
- Each fork is initially associated with one token
- Each philosopher needs to acquire a token in order to participate in the competition
- After this initial phase, the algorithm is the same as Lehmann and Rabin's

Theorem: The competing philosophers determine a graph in which all cycles are pairwise disconnected.

Proof: By case analysis. To have a situation with two connected cycles, we would need a node with two tokens.
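The initial token phase can be sketched as follows; the data representation and names are ours. Since each fork carries exactly one token, each fork backs at most one competitor, which is what rules out connected cycles among the competing philosophers.

```python
import random

def token_phase(adjacency, rng):
    """Initial phase for the generalized DP: each fork holds one token,
    and a philosopher enters the competition only after grabbing a token
    from one of its adjacent forks. `adjacency` maps each philosopher to
    its forks (a fork may appear in more than two lists, i.e. the
    generalized case). Returns the set of competing philosophers."""
    token_free = {f for forks in adjacency.values() for f in forks}
    competitors = set()
    order = list(adjacency)
    rng.shuffle(order)                  # an arbitrary scheduling order
    for phil in order:
        for fork in adjacency[phil]:
            if fork in token_free:      # grab this fork's unique token
                token_free.discard(fork)
                competitors.add(phil)
                break
    return competitors

# Generalized case: three philosophers all share fork "f0". Only one of
# them can get f0's token, so only one enters the competition.
adj = {"A": ["f0"], "B": ["f0"], "C": ["f0"]}
winners = token_phase(adj, random.Random(4))
assert len(winners) == 1
```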
Slide 38: Generalized philosophers

The other problem we had to face: the solution of Lehmann and Rabin works only for fair schedulers, while πpa does not provide any guarantee of fairness.

Fortunately, it turns out that fairness is required only in order to avoid a busy-waiting livelock at instruction 3. If we replace busy-waiting with suspension, the algorithm works for any scheduler.

This result was achieved independently also by [Duflot, Fribourg, Picaronny 02].
Slide 39: The algorithm of Lehmann and Rabin, modified so as to avoid the need for fairness

1) think;
2) choose probabilistically first_fork in {left, right};
3) if not taken(first_fork) then take(first_fork) else wait;
4) if not taken(second_fork) then take(second_fork)
   else { release(first_fork); goto 2 };
5) eat;
6) release(second_fork);
7) release(first_fork);
8) goto 1

(The only change wrt the original algorithm of slide 32 is in step 3: "else wait" instead of "else goto 3".)
Slide 40: The encoding

  [[ (x)P ]] = (x)[[ P ]]
  [[ P | Q ]] = [[ P ]] | [[ Q ]]
  [[ Σi gi.Pi ]] = the translation we have just seen

Theorem: For every P, [[ P ]] and P are testing-equivalent. Namely, for every test T,

  inf Prob(succ, [[ P ]] | [[ T ]]) = inf Prob(succ, P | T)
  sup Prob(succ, [[ P ]] | [[ T ]]) = sup Prob(succ, P | T)
Slide 41: Conclusion

We have provided an encoding of the π-calculus into a probabilistic version of its asynchronous fragment which is:
- fully distributed
- compositional
- correct wrt a notion of testing semantics

Advantage: high-level solutions to distributed algorithms become easier to prove correct (no reasoning about randomization required).