Encodings into Asynchronous Pi Calculus (basics.sjtu.edu.cn/summer_school/basics09/Slides/...)


Page 1:

Encodings into Asynchronous Pi Calculus

Uwe Nestmann (Technische Universität Berlin)

Wednesday, October 14, 2009

Page 2:

Mobility in Pi-Calculus

The essence of name mobility (Oregon Summer School, Global Computing, p. 18/224)

[Figure: three processes P, Q, R; P and Q share the link x; P and R share the restricted name z. Initial configuration: (νz)(P | R) | Q.]

Suppose P = x⟨z⟩.P′ (with z not free in P′) and Q = x(y).Q′.

Communication on x sends the restricted name z to Q, yielding P′ | (νz)(R | Q′{z/y}): the new scope of z now encloses R and the receiver's continuation.
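The communication step sketched above can be written out with the standard reduction and structural-congruence rules of the pi calculus; the following derivation is a reconstruction, not part of the original slide:

```latex
\begin{align*}
(\nu z)(P \mid R) \mid Q
  &= (\nu z)\bigl(\overline{x}\langle z\rangle.P' \mid R\bigr) \mid x(y).Q' \\
  &\equiv (\nu z)\bigl(\overline{x}\langle z\rangle.P' \mid R \mid x(y).Q'\bigr)
      && \text{scope extension, } z \notin \mathrm{fn}(Q) \\
  &\rightarrow (\nu z)\bigl(P' \mid R \mid Q'\{z/y\}\bigr)
      && \text{communication on } x \\
  &\equiv P' \mid (\nu z)\bigl(R \mid Q'\{z/y\}\bigr)
      && z \notin \mathrm{fn}(P')
\end{align*}
```

The last step is exactly the "new scope of z": the restriction has migrated from the sender's side to enclose the receiver's continuation.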


Page 4:

Pi-Calculus

“I reject the idea that there can be a unique conceptual model, or one preferred formalism, for all aspects of something as large as concurrent computation.” (Robin Milner, 1993)

“Pi Calculus is better than Process Algebra” (Bill Gates, 2003?)


Page 6:

A Jungle?

Many members of the family (cf. Paola Quaglia’s note “Which Pi Calculus are you talking about?”): π, ν, γ, Aπ, Lπ, Pπ, πI, HOπ, λπ, πξ, χ, ρ, sπ, κ, Blue, Fusion, Applied, ...; polyadic, polymorphic, polynomic, polarized, dyadic, ...

Many dimensions: communication models (applicability, minimality, ...); comparisons / encodings; implementations; semantics / models / types / proof techniques / tools; applications.


Page 7:

Overview

Part 0a: Encodings vs Full Abstraction
• Comparison of Languages/Calculi
• Correctness of Encodings

Part 0b: Asynchronous Pi Calculus

Part 1: Input-Guarded Choice
• Encoding (Distributed Implementation)
• Decoding (Correctness Proof)

Part 2: Output-Guarded Choice
• Encoding Separate Choice
• Encoding Mixed Choice

Conclusions


Page 9:

Comparison of Languages/Calculi


Page 10:

Absolute Expressiveness

Given a single process calculus, what can it express?

What objects are expressible (as closed terms)?

What operators are expressible (as contexts, or open terms)?

What problems can be solved? (leader election, matching systems, …)

(* see also Joachim Parrow @ LIX Colloquium 2006 *)


Page 11:

Relative Expressiveness (I)

Given two (process) calculi S and T, say that “T is at least as expressive as S” to mean that “T can express anything that S can.”


Page 14:

Relative Expressiveness (II)

This can also be formulated without actually saying what is being expressed, by exhibiting a (syntactic) encoding [[-]] : S → T.

Such an encoding shall be good/reasonable/compositional/…

Daniele Gorla @ EXPRESS 2006: “Everybody seems to have his/her own idea about which properties to check for.” This holds both for positive statements (correctness, “goodness”, ...) and negative statements (separation, “badness”, ...).


Page 15:

[Fig. 2.2. The π-calculus hierarchy. The dashed line represents an identity encoding. Nodes: π-calculus with mixed choice; π-calculus with separate choice; π-calculus with input choice; π-calculus without choice; asynchronous π-calculus (no choice, no output prefix). Annotations: (no output prefix), (output prefix). Edge labels: Palamidessi 97; Nestmann 97; Honda/Tokoro 91, Boudol 92; Nestmann/Pierce 96; identity encodings.]

Page 16:

IMHO

All of the current proposals of goodness are ad hoc; we do not yet have a proper theory of encodings.


Page 22:

Encodings

An encoding is a (total) function

[[-]] : S → T

that translates the syntax of language S (the source) into the syntax of language T (the target).

Many encodings are injective, i.e.,

P ≠ Q implies [[P]] ≠ [[Q]],

and we only consider compositional definitions, following the syntactic structure of source terms.
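Compositionality can be made concrete on a toy abstract syntax. The sketch below (all names are hypothetical, not from the slides) defines an encoding by structural recursion, so the translation of a composite term is assembled only from the translations of its subterms:

```python
from dataclasses import dataclass

# A toy source syntax: inaction, parallel composition, and an output prefix.
@dataclass(frozen=True)
class Nil:
    pass

@dataclass(frozen=True)
class Par:
    left: object
    right: object

@dataclass(frozen=True)
class Out:          # x<z>.P : send z on channel x, then continue as P
    chan: str
    arg: str
    cont: object

def encode(p):
    """A compositional [[-]]: each clause only recurses on subterms."""
    if isinstance(p, Nil):
        return Nil()
    if isinstance(p, Par):
        # [[P | Q]] = [[P]] | [[Q]] -- the clause never inspects P or Q themselves
        return Par(encode(p.left), encode(p.right))
    if isinstance(p, Out):
        # placeholder clause; a real encoding into the asynchronous calculus
        # would expand the output prefix into a handshake protocol here
        return Out(p.chan, p.arg, encode(p.cont))
    raise TypeError(f"unknown constructor: {p!r}")
```

Because every clause is defined by recursion on the syntax alone, properties of `encode` can be proved by induction on source terms.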


Page 23:

Correctness of Encodings


Page 29:

Indistinguishability (I)

Let P and [[P]] live in the same calculus.

The encoding shall be “unnoticeable”:

P ≅ [[P]]

• The choice of ≅ captures the expressible artifacts that one considers worth comparing …

• [[-]] and ≅ are closely related.

• Different results are often comparable. ⇒ Seek the strongest equivalence that holds.

• Encodings are often not injective.


Page 32:

Indistinguishability (II)

Let P and [[P]] live in two completely different calculi. Then

P ≅ [[P]]

is no longer possible as a requirement.

Full abstraction helps !?


Page 35:

Full Abstraction (I)

A notion to capture the quality of denotational models (of programming languages) [Plotkin, TCS 1977].

Let P be the syntax of a programming language.
Let D be some mathematical domain.
Let [[-]] : P → D be the denotational semantics of P.
Let ≅P be an (operational) equivalence on P.

Then, [[-]] is called fully abstract w.r.t. ≅P if, for all P, Q in P: P ≅P Q iff [[P]] = [[Q]].


Page 38:

Full Abstraction (II)

Let [[-]] : S → T.
Let ≅S and ≅T be respective equivalences on S and T.

Then, take (T, ≅T) as the denotational model, and take the encoding [[-]] as the denotation function.

Then [[-]] is called fully abstract w.r.t. ≅S and ≅T, if it preserves and reflects the equivalences of S and T:

for all P,Q in S: P ≅S Q iff [[P]] ≅T [[Q]]
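For finite toy languages, this definition can be checked exhaustively. The following sketch is invented for illustration (the parity example is not from the slides): source terms 0..3 under a parity equivalence, target terms as bits under equality, with the parity projection as encoding:

```python
from itertools import product

# Toy universe: source terms are 0..3, target terms are bits.
S = [0, 1, 2, 3]

def src_equiv(p, q):        # the source equivalence: equal parity
    return p % 2 == q % 2

def tgt_equiv(p, q):        # the target equivalence: plain equality
    return p == q

def encode(p):              # [[-]] : S -> T, here the parity projection
    return p % 2

def fully_abstract(terms):
    """P equiv-S Q  iff  [[P]] equiv-T [[Q]], checked over all pairs."""
    return all(src_equiv(p, q) == tgt_equiv(encode(p), encode(q))
               for p, q in product(terms, repeat=2))
```

Here `fully_abstract(S)` holds: the encoding both preserves and reflects the chosen equivalences.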


Page 41:

Full Abstraction (III)

Problems:
• on what basis to choose ≅S and ≅T (cf. Rob van Glabbeek’s talk)
• various ways to have results for congruences
  – all target contexts
  – only translated contexts (respecting the protocol)
  – only well-typed contexts (w.r.t. a target type system)

Observation:
• full abstraction results are “easy to get”
• full abstraction results are hard to compare


Page 42:

Operational Correspondence (I)

(Excerpt from “Decoding Choice Encodings”.)

In general, however, we cannot assume that we have a formal setting at hand that allows us to compare terms and their translations directly. The notion of full abstraction has been developed to get around this problem. Here, correctness is expressed as the preservation and reflection of equivalence of source terms. Let ≅s and ≅t denote equivalences of the source and the target language, respectively. Then, the full abstraction property is formulated as:

S1 ≅s S2 if and only if [[S1]] ≅t [[S2]].

Up to the chosen notions of equivalence, fully abstract encodings allow us, for reasoning about terms, to freely switch between the source and target languages in both directions. Note that, usually, the reflection (“if”) of equivalence, often called adequacy, is relatively easy to establish. In contrast, proving the preservation (“only if”) of equivalence is not an easy task: the chosen notions of equivalence should be insensitive to the additional computation steps that are introduced by the encoding; moreover, when one is interested in congruences, the equivalence has to be preserved not only in translated high-level contexts, but in arbitrary low-level contexts. Yet, only when both reflection and preservation hold can the theory and proof techniques of a low-level language always be used for reasoning about high-level terms; preservation then provides behavioral completeness, and reflection provides behavioral soundness.

Often, e.g., for encodings of object-oriented languages, the source language is not a priori equipped with a notion of equivalence. Thus, we may not be able to check the encoding’s correctness via a full abstraction result. The notion of operational correspondence was therefore designed to capture correctness as the preservation and reflection of execution steps, as defined by an operational semantics of the source and the target languages and expressed in the model of transition systems, which specify the execution of terms. Let →s and →t denote transition relations on the source and target language, respectively, and let ⇒s and ⇒t denote their reflexive transitive closures. Then, operational correspondence is characterized by two complementary propositions, which we briefly call completeness (C) and soundness (S).

Completeness (preservation of execution steps). The property

if S →s S′, then [[S]] ⇒t [[S′]] (C)

states that all possible executions of S may be simulated by its translation, which is naturally desirable for most encodings.

Soundness (reflection of execution steps). The converse of completeness, i.e., the property

if [[S]] ⇒t [[S′]], then S ⇒s S′,

is, in general, not strong enough, since it deals neither with all possible executions of translations nor with the behavior of intermediate states between [[S]] and [[S′]].
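On finite transition systems, the completeness direction (C) can be checked mechanically. The sketch below uses an invented two-language example (all names hypothetical): `multi_step` computes the reflexive transitive closure of a step relation, and `complete` checks that every source step is matched by a target multi-step between translations:

```python
def multi_step(step, start):
    """All terms reachable from `start` under `step` (reflexive transitive closure)."""
    seen, frontier = {start}, [start]
    while frontier:
        t = frontier.pop()
        for u in step(t):
            if u not in seen:
                seen.add(u)
                frontier.append(u)
    return seen

def complete(src_terms, src_step, tgt_step, encode):
    """(C): every source step S -> S' is matched by [[S]] =>* [[S']]."""
    return all(encode(s2) in multi_step(tgt_step, encode(s1))
               for s1 in src_terms
               for s2 in src_step(s1))

# Invented example: one source step a -> b; its translation needs two target
# steps, passing through an intermediate (administrative) state "mid".
src_step = lambda t: {"a": {"b"}, "b": set()}[t]
tgt_step = lambda t: {"[a]": {"mid"}, "mid": {"[b]"}, "[b]": set()}.get(t, set())
encode = lambda t: f"[{t}]"
```

Here `complete(["a", "b"], src_step, tgt_step, encode)` holds; checking soundness would additionally have to account for intermediate states such as `"mid"`, which is exactly the difficulty the excerpt points out.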


Page 43: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Operational Correspondence (I)

In general, however, we cannot assume that we have a formal setting at handthat allows us to compare terms and their translations directly. The notion of fullabstraction has been developed to get around this problem. Here, correctness isexpressed as the preservation and reflection of equivalence of source terms. Let !"s

and !"t denote equivalences of the source and the target language, respectively.Then, the full abstraction property is formulated as:

S1 !"s S2 if and only if !S1" !"t !S2".

Up to the chosen notions of equivalence, fully abstract encodings allow us!!forreasoning about terms!!to freely switch between the source and target languages inboth directions. Note that, usually, the reflection (if ) of equivalence, often calledadequacy, is relatively easy to establish. In contrast, to prove the preservation (onlyif ) of equivalence is not an easy task: the chosen notions of equivalence should beinsensitive to the additional computation steps that are introduced by the encoding;moreover, when one is interested in congruences, the equivalence has to be pre-served in not only translated high-level contexts, but arbitrary low-level contexts.Yet, only when both reflection and preservation hold the theory and proof techni-ques of a low-level language can always be used for reasoning about high-levelterms; preservation then provides behavioral completeness and reflection providesbehavioral soundness.Often, e.g., for encodings of object-oriented languages, the source language is not

a priori equipped with a notion of equivalence. Thus, we may not be able to checkthe encoding's correctness via a full abstraction result. The notion of operationalcorrespondence was therefore designed to capture correctness as the preservationand reflection of execution steps as defined by an operational semantics of thesource and the target languages and expressed in the model of transition systemswhich specify the execution of terms. Let ! s and ! t denote transition relationson the source and target language, respectively, and let O s and O t denote theirreflexive transitive closure. Then, operational correspondence is characterized bytwo complementary propositions, which we briefly call completeness (C) andsoundness (S).

Completeness (Preservation of execution step). The property

if S! s S$, then !S"O t !S$" (C)

states that all possible executions of S may be simulated by its translation, whichis naturally desirable for most encodings.

Soundness (Reflection of execution steps). The converse of completeness, i.e., theproperty

if !S"O t !S$" then SO s S$,

is, in general, not strong enough since it deals neither with all possible executionsof translations nor with the behavior of intermediate states between !S" and !S$".

13DECODING CHOICE ENCODINGS

In general, however, we cannot assume that we have a formal setting at handthat allows us to compare terms and their translations directly. The notion of fullabstraction has been developed to get around this problem. Here, correctness isexpressed as the preservation and reflection of equivalence of source terms. Let !"s

and !"t denote equivalences of the source and the target language, respectively.Then, the full abstraction property is formulated as:

S1 !"s S2 if and only if !S1" !"t !S2".

Up to the chosen notions of equivalence, fully abstract encodings allow us!!forreasoning about terms!!to freely switch between the source and target languages inboth directions. Note that, usually, the reflection (if ) of equivalence, often calledadequacy, is relatively easy to establish. In contrast, to prove the preservation (onlyif ) of equivalence is not an easy task: the chosen notions of equivalence should beinsensitive to the additional computation steps that are introduced by the encoding;moreover, when one is interested in congruences, the equivalence has to be pre-served in not only translated high-level contexts, but arbitrary low-level contexts.Yet, only when both reflection and preservation hold the theory and proof techni-ques of a low-level language can always be used for reasoning about high-levelterms; preservation then provides behavioral completeness and reflection providesbehavioral soundness.Often, e.g., for encodings of object-oriented languages, the source language is not

a priori equipped with a notion of equivalence. Thus, we may not be able to checkthe encoding's correctness via a full abstraction result. The notion of operationalcorrespondence was therefore designed to capture correctness as the preservationand reflection of execution steps as defined by an operational semantics of thesource and the target languages and expressed in the model of transition systemswhich specify the execution of terms. Let ! s and ! t denote transition relationson the source and target language, respectively, and let O s and O t denote theirreflexive transitive closure. Then, operational correspondence is characterized bytwo complementary propositions, which we briefly call completeness (C) andsoundness (S).

Completeness (Preservation of execution step). The property

if S! s S$, then !S"O t !S$" (C)

states that all possible executions of S may be simulated by its translation, whichis naturally desirable for most encodings.

Soundness (Reflection of execution steps). The converse of completeness, i.e., theproperty

if !S"O t !S$" then SO s S$,

is, in general, not strong enough since it deals neither with all possible executionsof translations nor with the behavior of intermediate states between !S" and !S$".

13DECODING CHOICE ENCODINGS

In general, however, we cannot assume that we have a formal setting at hand that allows us to compare terms and their translations directly. The notion of full abstraction has been developed to get around this problem. Here, correctness is expressed as the preservation and reflection of equivalence of source terms. Let ≍s and ≍t denote equivalences of the source and the target language, respectively. Then, the full abstraction property is formulated as:

S1 ≍s S2  if and only if  ⟦S1⟧ ≍t ⟦S2⟧.

Up to the chosen notions of equivalence, fully abstract encodings allow us, for reasoning about terms, to freely switch between the source and target languages in both directions. Note that, usually, the reflection (if) of equivalence, often called adequacy, is relatively easy to establish. In contrast, proving the preservation (only if) of equivalence is not an easy task: the chosen notions of equivalence should be insensitive to the additional computation steps that are introduced by the encoding; moreover, when one is interested in congruences, the equivalence has to be preserved not only in translated high-level contexts, but in arbitrary low-level contexts. Yet, only when both reflection and preservation hold can the theory and proof techniques of a low-level language always be used for reasoning about high-level terms; preservation then provides behavioral completeness and reflection provides behavioral soundness.

Often, e.g., for encodings of object-oriented languages, the source language is not a priori equipped with a notion of equivalence. Thus, we may not be able to check the encoding's correctness via a full abstraction result. The notion of operational correspondence was therefore designed to capture correctness as the preservation and reflection of execution steps, as defined by an operational semantics of the source and the target languages and expressed in the model of transition systems, which specify the execution of terms. Let →s and →t denote transition relations on the source and target language, respectively, and let ⇒s and ⇒t denote their reflexive transitive closures. Then, operational correspondence is characterized by two complementary propositions, which we briefly call completeness (C) and soundness (S).
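As an illustration only (not from the paper), the full abstraction property can be checked exhaustively in a finite toy setting. All names below (`fully_abstract`, the integer "terms", the parity equivalence) are invented placeholders, not the paper's calculus:

```python
# Sketch: checking  S1 eq_s S2  <=>  [[S1]] eq_t [[S2]]  over all pairs
# drawn from a finite set of toy source terms.

def fully_abstract(source_terms, encode, eq_s, eq_t):
    # Full abstraction as a biconditional, tested pairwise.
    return all(
        eq_s(s1, s2) == eq_t(encode(s1), encode(s2))
        for s1 in source_terms
        for s2 in source_terms
    )

# Toy instance: "terms" are integers, source equivalence is parity,
# and the encoding keeps exactly the parity information.
source = [0, 1, 2, 3]
encode = lambda n: ("even" if n % 2 == 0 else "odd") + "-proc"
eq_s = lambda a, b: a % 2 == b % 2
eq_t = lambda p, q: p == q

print(fully_abstract(source, encode, eq_s, eq_t))  # True
```

A constant encoding (every term mapped to the same target process) would make the target-side equivalence trivially true and thus break reflection; the pairwise check detects this.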

Completeness (Preservation of execution steps). The property

if S →s S′, then ⟦S⟧ ⇒t ⟦S′⟧   (C)

states that all possible executions of S may be simulated by its translation, which is naturally desirable for most encodings.

Soundness (Reflection of execution steps). The converse of completeness, i.e., the property

if ⟦S⟧ ⇒t ⟦S′⟧, then S ⇒s S′,

is, in general, not strong enough, since it deals neither with all possible executions of translations nor with the behavior of intermediate states between ⟦S⟧ and ⟦S′⟧.
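On finite, explicitly enumerated transition systems, property (C) can be tested directly by computing the reflexive transitive closure of the target relation. This is a minimal sketch with invented one-step relations (`step_s`, `step_t`, `encode` are illustrative, not the paper's calculus):

```python
# Sketch: checking completeness (C) on finite transition systems.
# step_s / step_t map a term to the set of its one-step derivatives.

def multistep(step, start):
    """Reflexive transitive closure: all terms reachable from `start`."""
    seen, frontier = {start}, [start]
    while frontier:
        t = frontier.pop()
        for u in step.get(t, ()):
            if u not in seen:
                seen.add(u)
                frontier.append(u)
    return seen

def complete(step_s, step_t, encode):
    """(C): whenever S ->s S', the translation satisfies [[S]] =>t [[S']]."""
    return all(
        encode(s1) in multistep(step_t, encode(s))
        for s, succs in step_s.items()
        for s1 in succs
    )

# Toy instance: one source step a ->s b; its translation needs an
# administrative step first: A ->t A1 ->t B.
step_s = {"a": {"b"}}
step_t = {"A": {"A1"}, "A1": {"B"}}
encode = {"a": "A", "b": "B"}.get

print(complete(step_s, step_t, encode))  # True
```

Note that the administrative step A →t A1 is harmless for (C): the multi-step closure ⇒t still reaches B.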

13   DECODING CHOICE ENCODINGS


Page 44: Encodings into Asynchronous Pi Calculus

Operational Correspondence (I)


For example, nondeterministic or divergent executions, sometimes regarded as undesirable, could, although starting from a translation ⟦S⟧, never again reach a state that is a translation ⟦S′⟧. A refined property may consider the behavior of intermediate states to some extent:

if ⟦S⟧ →t T, then there is S →s S′ such that T ≍t ⟦S′⟧   (I)

says that initial steps of a translation can be simulated by the source term such that the target-level derivative is equivalent to the translation of the source-level derivative.

Let us call a target-level step committing if it directly corresponds to some source-level step. It should be clear that only prompt encodings, i.e., those where initial steps of literal translations are committing, will satisfy I. As a matter of fact, most encodings studied up to now in the literature are prompt. Promptness also leads to "nice" proof obligations, since it requires case analysis over single computation steps.

However, nonprompt encodings do not satisfy I; like choice encodings, they allow administrative (or book-keeping) steps to precede a committing step. Sometimes (cf. [Ama94]), these administrative steps are well behaved in that they can be captured by a confluent and strongly normalizing reduction relation. Then, the encoding is optimized to perform the initial administrative overhead itself by mapping source terms onto administrative normal forms, so as to satisfy I. A satisfyingly general approach to take administrative steps into account is

if ⟦S⟧ ⇒t T, then there is S ⇒s S′ such that T ⇒t ⟦S′⟧   (S)

which says that arbitrary sequences of target steps are simulated (up to completion) by the source term. It takes all derivatives T, including intermediate states, into account and does not depend on the encoding being prompt or normalizable. Thus, S is rather appealing. However, it only states correspondence between sequences of transitions and is therefore, in general, rather hard to prove, since it involves analyzing arbitrarily long transition sequences between ⟦S⟧ and T (see [Wal95] for a successful proof).

Finally, note that a proof that source terms and their translations are the same up to some operationally defined notion of equivalence gives full abstraction up to that equivalence and operational correspondence for free (see Corollaries 5.6.4 and 5.7.6, and the discussion at the end of Section 5.4).
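The contrast between (I) and (S) can be seen on a tiny invented nonprompt encoding, where translating the single source step a →s b requires an administrative step before the committing one. Everything below (`step_s`, `step_t`, `encode`) is made up for illustration, and the target equivalence is taken as syntactic identity:

```python
# Sketch: a nonprompt toy encoding fails (I) but satisfies (S).
# Translating a ->s b takes an administrative step A ->t A1, then
# the committing step A1 ->t B.

def multistep(step, start):
    """Reflexive transitive closure of a one-step relation."""
    seen, frontier = {start}, [start]
    while frontier:
        t = frontier.pop()
        for u in step.get(t, ()):
            if u not in seen:
                seen.add(u)
                frontier.append(u)
    return seen

step_s = {"a": {"b"}}
step_t = {"A": {"A1"}, "A1": {"B"}}
encode = {"a": "A", "b": "B"}.get

# (I) with identity as the target equivalence: every initial step of the
# translation must immediately reach some [[S']] -- but A1 is no literal
# translation, so (I) fails.
prop_I = all(
    any(encode(s1) == t1 for s1 in step_s["a"])
    for t1 in step_t["A"]
)

# (S): every reachable target state T (here A, A1, B) can be completed
# to some [[S']] with S =>s S' (here S' is a or b, by reflexivity).
targets = {encode(s1) for s1 in multistep(step_s, "a")}
prop_S = all(
    targets & multistep(step_t, T)
    for T in multistep(step_t, "A")
)

print(prop_I, prop_S)  # False True
```

The intermediate state A1 is not a literal translation, so the initial administrative step falsifies (I); but every reachable state, A1 included, can still be completed to ⟦b⟧ = B, so (S) holds.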

4. ENCODING INPUT-GUARDED CHOICE, ASYNCHRONOUSLY

This section defines encodings of the asynchronous π-calculus with input-guarded choice into its choice-free fragment (Section 4.1): a divergence-free choice encoding, and a divergent variant (Section 4.2). The essential idea of the encodings is that a branch may consume a message before it checks whether it was allowed to do so; if yes, then it may proceed; otherwise, it simply resends the consumed message. Such protocols are only correct with respect to asynchronous observation,

14 NESTMANN AND PIERCE

21Wednesday, October 14, 2009

Page 45: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Operational Correspondence (I)

In general, however, we cannot assume that we have a formal setting at handthat allows us to compare terms and their translations directly. The notion of fullabstraction has been developed to get around this problem. Here, correctness isexpressed as the preservation and reflection of equivalence of source terms. Let !"s

and !"t denote equivalences of the source and the target language, respectively.Then, the full abstraction property is formulated as:

S1 !"s S2 if and only if !S1" !"t !S2".

Up to the chosen notions of equivalence, fully abstract encodings allow us!!forreasoning about terms!!to freely switch between the source and target languages inboth directions. Note that, usually, the reflection (if ) of equivalence, often calledadequacy, is relatively easy to establish. In contrast, to prove the preservation (onlyif ) of equivalence is not an easy task: the chosen notions of equivalence should beinsensitive to the additional computation steps that are introduced by the encoding;moreover, when one is interested in congruences, the equivalence has to be pre-served in not only translated high-level contexts, but arbitrary low-level contexts.Yet, only when both reflection and preservation hold the theory and proof techni-ques of a low-level language can always be used for reasoning about high-levelterms; preservation then provides behavioral completeness and reflection providesbehavioral soundness.Often, e.g., for encodings of object-oriented languages, the source language is not

a priori equipped with a notion of equivalence. Thus, we may not be able to checkthe encoding's correctness via a full abstraction result. The notion of operationalcorrespondence was therefore designed to capture correctness as the preservationand reflection of execution steps as defined by an operational semantics of thesource and the target languages and expressed in the model of transition systemswhich specify the execution of terms. Let ! s and ! t denote transition relationson the source and target language, respectively, and let O s and O t denote theirreflexive transitive closure. Then, operational correspondence is characterized bytwo complementary propositions, which we briefly call completeness (C) andsoundness (S).

Completeness (Preservation of execution step). The property

if S! s S$, then !S"O t !S$" (C)

states that all possible executions of S may be simulated by its translation, whichis naturally desirable for most encodings.

Soundness (Reflection of execution steps). The converse of completeness, i.e., theproperty

if !S"O t !S$" then SO s S$,

is, in general, not strong enough since it deals neither with all possible executionsof translations nor with the behavior of intermediate states between !S" and !S$".

13DECODING CHOICE ENCODINGS

In general, however, we cannot assume that we have a formal setting at handthat allows us to compare terms and their translations directly. The notion of fullabstraction has been developed to get around this problem. Here, correctness isexpressed as the preservation and reflection of equivalence of source terms. Let !"s

and !"t denote equivalences of the source and the target language, respectively.Then, the full abstraction property is formulated as:

S1 !"s S2 if and only if !S1" !"t !S2".

Up to the chosen notions of equivalence, fully abstract encodings allow us!!forreasoning about terms!!to freely switch between the source and target languages inboth directions. Note that, usually, the reflection (if ) of equivalence, often calledadequacy, is relatively easy to establish. In contrast, to prove the preservation (onlyif ) of equivalence is not an easy task: the chosen notions of equivalence should beinsensitive to the additional computation steps that are introduced by the encoding;moreover, when one is interested in congruences, the equivalence has to be pre-served in not only translated high-level contexts, but arbitrary low-level contexts.Yet, only when both reflection and preservation hold the theory and proof techni-ques of a low-level language can always be used for reasoning about high-levelterms; preservation then provides behavioral completeness and reflection providesbehavioral soundness.Often, e.g., for encodings of object-oriented languages, the source language is not

a priori equipped with a notion of equivalence. Thus, we may not be able to checkthe encoding's correctness via a full abstraction result. The notion of operationalcorrespondence was therefore designed to capture correctness as the preservationand reflection of execution steps as defined by an operational semantics of thesource and the target languages and expressed in the model of transition systemswhich specify the execution of terms. Let ! s and ! t denote transition relationson the source and target language, respectively, and let O s and O t denote theirreflexive transitive closure. Then, operational correspondence is characterized bytwo complementary propositions, which we briefly call completeness (C) andsoundness (S).

Completeness (Preservation of execution step). The property

if S! s S$, then !S"O t !S$" (C)

states that all possible executions of S may be simulated by its translation, whichis naturally desirable for most encodings.

Soundness (Reflection of execution steps). The converse of completeness, i.e., theproperty

if !S"O t !S$" then SO s S$,

is, in general, not strong enough since it deals neither with all possible executionsof translations nor with the behavior of intermediate states between !S" and !S$".

13DECODING CHOICE ENCODINGS

In general, however, we cannot assume that we have a formal setting at handthat allows us to compare terms and their translations directly. The notion of fullabstraction has been developed to get around this problem. Here, correctness isexpressed as the preservation and reflection of equivalence of source terms. Let !"s

and !"t denote equivalences of the source and the target language, respectively.Then, the full abstraction property is formulated as:

S1 !"s S2 if and only if !S1" !"t !S2".

Up to the chosen notions of equivalence, fully abstract encodings allow us!!forreasoning about terms!!to freely switch between the source and target languages inboth directions. Note that, usually, the reflection (if ) of equivalence, often calledadequacy, is relatively easy to establish. In contrast, to prove the preservation (onlyif ) of equivalence is not an easy task: the chosen notions of equivalence should beinsensitive to the additional computation steps that are introduced by the encoding;moreover, when one is interested in congruences, the equivalence has to be pre-served in not only translated high-level contexts, but arbitrary low-level contexts.Yet, only when both reflection and preservation hold the theory and proof techni-ques of a low-level language can always be used for reasoning about high-levelterms; preservation then provides behavioral completeness and reflection providesbehavioral soundness.Often, e.g., for encodings of object-oriented languages, the source language is not

a priori equipped with a notion of equivalence. Thus, we may not be able to checkthe encoding's correctness via a full abstraction result. The notion of operationalcorrespondence was therefore designed to capture correctness as the preservationand reflection of execution steps as defined by an operational semantics of thesource and the target languages and expressed in the model of transition systemswhich specify the execution of terms. Let ! s and ! t denote transition relationson the source and target language, respectively, and let O s and O t denote theirreflexive transitive closure. Then, operational correspondence is characterized bytwo complementary propositions, which we briefly call completeness (C) andsoundness (S).

Completeness (Preservation of execution step). The property

if S! s S$, then !S"O t !S$" (C)

states that all possible executions of S may be simulated by its translation, whichis naturally desirable for most encodings.

Soundness (Reflection of execution steps). The converse of completeness, i.e., theproperty

if !S"O t !S$" then SO s S$,

is, in general, not strong enough since it deals neither with all possible executionsof translations nor with the behavior of intermediate states between !S" and !S$".

13DECODING CHOICE ENCODINGS

In general, however, we cannot assume that we have a formal setting at handthat allows us to compare terms and their translations directly. The notion of fullabstraction has been developed to get around this problem. Here, correctness isexpressed as the preservation and reflection of equivalence of source terms. Let !"s

and !"t denote equivalences of the source and the target language, respectively.Then, the full abstraction property is formulated as:

S1 !"s S2 if and only if !S1" !"t !S2".

Up to the chosen notions of equivalence, fully abstract encodings allow us!!forreasoning about terms!!to freely switch between the source and target languages inboth directions. Note that, usually, the reflection (if ) of equivalence, often calledadequacy, is relatively easy to establish. In contrast, to prove the preservation (onlyif ) of equivalence is not an easy task: the chosen notions of equivalence should beinsensitive to the additional computation steps that are introduced by the encoding;moreover, when one is interested in congruences, the equivalence has to be pre-served in not only translated high-level contexts, but arbitrary low-level contexts.Yet, only when both reflection and preservation hold the theory and proof techni-ques of a low-level language can always be used for reasoning about high-levelterms; preservation then provides behavioral completeness and reflection providesbehavioral soundness.Often, e.g., for encodings of object-oriented languages, the source language is not

a priori equipped with a notion of equivalence. Thus, we may not be able to checkthe encoding's correctness via a full abstraction result. The notion of operationalcorrespondence was therefore designed to capture correctness as the preservationand reflection of execution steps as defined by an operational semantics of thesource and the target languages and expressed in the model of transition systemswhich specify the execution of terms. Let ! s and ! t denote transition relationson the source and target language, respectively, and let O s and O t denote theirreflexive transitive closure. Then, operational correspondence is characterized bytwo complementary propositions, which we briefly call completeness (C) andsoundness (S).

Completeness (Preservation of execution step). The property

if S! s S$, then !S"O t !S$" (C)

states that all possible executions of S may be simulated by its translation, whichis naturally desirable for most encodings.

Soundness (Reflection of execution steps). The converse of completeness, i.e., theproperty

if !S"O t !S$" then SO s S$,

is, in general, not strong enough since it deals neither with all possible executionsof translations nor with the behavior of intermediate states between !S" and !S$".

13DECODING CHOICE ENCODINGS

For example, nondeterministic or divergent executions, sometimes regarded asundesirable, could although starting from a translation !S" never again reach astate that is a translation !S$". A refined property may consider the behavior ofintermediate states to some extent:

if !S"! t T then there is S! s S$ such that T!"t !S$" (I)


For example, nondeterministic or divergent executions, sometimes regarded as undesirable, could, although starting from a translation [[S]], never again reach a state that is a translation [[S′]]. A refined property may consider the behavior of intermediate states to some extent:

    if [[S]] →t T then there is S →s S′ such that T ≈t [[S′]]    (I)

which says that initial steps of a translation can be simulated by the source term such that the target-level derivative is equivalent to the translation of the source-level derivative.

Let us call a target-level step committing if it directly corresponds to some source-level step. It should be clear that only prompt encodings, i.e., those where initial steps of literal translations are committing, will satisfy I. As a matter of fact, most encodings studied up to now in the literature are prompt. Promptness also leads to "nice" proof obligations, since it requires case analysis only over single computation steps.

However, nonprompt encodings do not satisfy I; like choice encodings, they allow administrative (or book-keeping) steps to precede a committing step. Sometimes (cf. [Ama94]), these administrative steps are well behaved in that they can be captured by a confluent and strongly normalizing reduction relation. Then, the encoding is optimized to perform the initial administrative overhead itself by mapping source terms onto administrative normal forms to satisfy I. A satisfyingly general approach to taking administrative steps into account is

    if [[S]] ⇒t T then there is S ⇒s S′ such that T ⇒t [[S′]]    (S)

which says that arbitrary sequences of target steps are simulated (up to completion) by the source term. It takes all derivatives T, including intermediate states, into account and does not depend on the encoding being prompt or normalizable. Thus, S is rather appealing. However, it only states a correspondence between sequences of transitions and is therefore, in general, rather hard to prove, since it involves analyzing arbitrarily long transition sequences between [[S]] and T (see [Wal95] for a successful proof).

Finally, note that a proof that source terms and their translations are the same up to some operationally defined notion of equivalence gives full abstraction up to that equivalence and operational correspondence for free (see Corollaries 5.6.4 and 5.7.6, and the discussion at the end of Section 5.4).
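On finite toy systems, property S can be checked by brute force. The following Python sketch is our own illustration (the state names, the encoding of step relations as dicts, and reading "equivalent" as plain equality of states are simplifying assumptions, not the paper's setup):

```python
# Hypothetical sketch: checking the completion property (S) on finite toy
# reduction systems.  States are strings; steps[s] lists the one-step
# successors of s; enc maps each source state to its literal translation.

def reachable(steps, start):
    """All states reachable in zero or more steps (the weak arrow =>)."""
    seen, todo = {start}, [start]
    while todo:
        s = todo.pop()
        for t in steps.get(s, ()):
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def satisfies_S(src_steps, tgt_steps, enc):
    """Property (S), with equivalence read as equality: every derivative T
    of [[S]] can be completed to some [[S']] with S =>s S'."""
    return all(
        any(enc[S1] in reachable(tgt_steps, T)
            for S1 in reachable(src_steps, S))
        for S in enc
        for T in reachable(tgt_steps, enc[S]))

# Source: a -> b.  Target: the translation takes one administrative
# (book-keeping) step before committing: [[a]] = a0 -> a1 -> b0 = [[b]].
src = {"a": ["b"], "b": []}
tgt = {"a0": ["a1"], "a1": ["b0"], "b0": []}
enc = {"a": "a0", "b": "b0"}
print(satisfies_S(src, tgt, enc))  # True: intermediate a1 completes to [[b]]
```

The nonprompt administrative step a0 -> a1 is harmless here precisely because a1 can still be completed to a literal translation.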

4. ENCODING INPUT-GUARDED CHOICE, ASYNCHRONOUSLY

This section defines encodings of the asynchronous π-calculus with input-guarded choice into its choice-free fragment (Section 4.1): a divergence-free choice encoding and a divergent variant (Section 4.2). The essential idea of the encodings is that a branch may consume a message before it checks whether it was allowed to do so; if yes, then it may proceed; otherwise, it simply resends the consumed message. Such protocols are only correct with respect to asynchronous observation,

14 NESTMANN AND PIERCE
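The consume-then-check idea can be sketched with threads and queues. This is an illustrative analogy, not the paper's actual encoding: the shared flag, channel names, and commit/abort structure below are our own simplification of such a protocol.

```python
# Illustrative sketch: input-guarded choice via "consume first, check later".
# Each branch optimistically takes a message off its channel, then tests a
# shared once-only flag; the losing branch resends the consumed message.
import queue
import threading

chans = {"x": queue.Queue(), "y": queue.Queue()}
flag = {"free": True}          # the choice's one-shot "lock"
flag_lock = threading.Lock()
result = []

def branch(chan_name):
    msg = chans[chan_name].get()          # consume a message optimistically
    with flag_lock:
        won = flag["free"]
        flag["free"] = False
    if won:
        result.append((chan_name, msg))   # commit: proceed with continuation
    else:
        chans[chan_name].put(msg)         # abort: resend the consumed message

chans["x"].put("v1")
chans["y"].put("v2")
ts = [threading.Thread(target=branch, args=(c,)) for c in ("x", "y")]
for t in ts: t.start()
for t in ts: t.join()

print(result)                                   # exactly one branch committed
print(chans["x"].qsize() + chans["y"].qsize())  # 1: the loser's message is back
```

Which branch wins is nondeterministic, but exactly one commits and the other's message is restored to the medium, which is why correctness can only be judged by an asynchronous observer: the temporary absence of the resent message is unobservable.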



Page 47

Operational Correspondence (II)

Obviously (?), some form of operational correspondence is often employed as a means to support the proof of full abstraction w.r.t. bisimulation-based equivalences.

Think about how to prove:

    P ≈S Q iff [[P]] ≈T [[Q]]



Page 51

Question

Assume an encoding [[ - ]] : S → T.
Assume ≡S1 and ≡S2 are equivalences on S.
Assume ≡T1 and ≡T2 are equivalences on T.
Assume ≡S1 ⊆ ≡S2 and ≡T1 ⊆ ≡T2.

Does F.A. w.r.t. (≡S1, ≡T1) imply F.A. w.r.t. (≡S2, ≡T2)?

• Special case: consider universal relations
• Special case: identity embeddings


Page 52

Observation

Assume an arbitrary encoding [[-]] : S → T.

Then, [[-]] is fully abstract w.r.t. (Ker([[-]]), IdT).
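The observation holds by construction: taking the source equivalence to be the kernel of the encoding, Ker([[-]]) = {(P, Q) | [[P]] = [[Q]]}, makes full abstraction a tautology. A tiny Python illustration (the "encoding", which merely erases parentheses, is made up for this sketch):

```python
# Toy illustration: any function is fully abstract w.r.t. its kernel on the
# source and syntactic identity on the target, by definition of "kernel".

def encode(term: str) -> str:
    """A made-up encoding: erase all parentheses."""
    return term.replace("(", "").replace(")", "")

def ker(p: str, q: str) -> bool:
    """P Ker Q  iff  [[P]] = [[Q]]."""
    return encode(p) == encode(q)

terms = ["a|b", "(a|b)", "(a)|(b)", "a|c"]
for p in terms:
    for q in terms:
        # Full abstraction: P Ker Q  iff  [[P]] Id_T [[Q]] -- trivially true.
        assert ker(p, q) == (encode(p) == encode(q))
print("fully abstract w.r.t. (Ker([[-]]), Id_T) by definition")
```

This is why full abstraction alone, without constraints on the chosen equivalences, carries no information about the quality of an encoding.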



Page 54

Task

Assume an encoding [[ - ]] : S → T.
Assume ≡S1 and ≡S2 are equivalences on S.
Assume ≡T1 and ≡T2 are equivalences on T.

Identify "reasonable" conditions to state that:

F.A. w.r.t. (≡S1, ≡T1) is better than F.A. w.r.t. (≡S2, ≡T2)


Page 55

Example Encodings



Page 57

Tuples

... all other operators translated homomorphically

- operational correspondence
- no "obvious" F.A.
- F.A. w.r.t. translated contexts
- F.A. w.r.t. typed barbed congruence



Page 59

Functions as Processes

November 1997 page 25

various 1/2 F.A. results for various variants
used to compare Lambda and Pi Calculi

inspired Lambda Calculus theory
used to prove new equations


Page 60

Objects as Processes




Page 63


used to provide a formal semantics
used to prove an equation correct

used for debugging an existing compiler


Page 64

Overview

Part 0a: Encodings vs Full Abstraction
  Comparison of Languages/Calculi
  Correctness of Encodings

Part 0b: Asynchronous Pi Calculus

Part 1: Input-Guarded Choice
  Encoding (Distributed Implementation)
  Decoding (Correctness Proof)

Part 2: Output-Guarded Choice
  Encoding Separate Choice
  Encoding Mixed Choice

Conclusions

Page 65

Intuition

Send operations are non-blocking.

Thus, there is neither need nor use to consider send prefixes.

Instead, include just messages ... without continuation.
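The intuition can be seen in any language with unbounded queues; the following one-liner-style sketch is our own analogy, not part of the calculus:

```python
# Sketch of the intuition: an asynchronous send just deposits a message into
# the medium, so the sender needs no continuation.  Here the "medium" is an
# unbounded queue; put() returns immediately, i.e., sends are non-blocking.
import queue

medium = queue.Queue()       # unbounded: put() never blocks
medium.put(("x", "v"))       # the message x<v>, fire-and-forget
medium.put(("y", "w"))       # sender proceeded immediately; sends again
# only receivers synchronize: get() blocks until a message is available
print(medium.get())  # ('x', 'v')
```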


Page 66

Syntax

where replication is restricted to input processes and evaluated lazily [HY95, MP95]; i.e., copies of a replicated input are spawned during communication.

2.1. Syntax

Let N be a countable set of names. Then, the set P of processes P is defined by

    P ::= (x) P  |  P | P  |  ȳz  |  0  |  R  |  !R
    R ::= y(x).P

where x, y, z ∈ N. The semantics of restriction, parallel composition, input, and output is standard. The form !y(x).P is the replication operator restricted to input prefixes. In ȳz and y(x), y is called the subject, whereas x and z are called objects. We refer to outputs as messages and to (ordinary or replicated) inputs as receptors. A term is guarded when it occurs as a subterm of an input prefix R. As a notational abbreviation, if the object in some input or output is of no importance in some term, then it may be completely omitted; i.e., y(x).P with x ∉ fn(P) and ȳz are written y.P and ȳ. The usual constant 0 denoting the inactive process can be derived as (x) x̄, but we prefer to include it explicitly for a clean specification of the structural and operational semantics rules.

The definitions of name substitution and α-conversion are standard. A name x is bound in P if P contains an x-binding operator, i.e., either a restriction (x) P or an input prefix y(x).P, as a subterm. A name x is free in P if it occurs outside the scope of an x-binding operator. We write bn(P) and fn(P) for the sets of P's bound and free names; n(P) is their union. Renaming of bound names by α-conversion ≡α is as usual. The substitution P[z/x] is given by replacing all free occurrences of x in P with z, first α-converting P to avoid capture.

Operator precedence is, in decreasing order of binding strength: (1) prefixing, restriction; (2) substitution; (3) parallel composition.
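The grammar above can be rendered as ordinary data. The following sketch is our own (not from the paper); it also implements fn along the binding rules just stated:

```python
# One possible rendering of the process grammar as Python data:
#   P ::= (x)P | P|P | y<z> | 0 | y(x).P | !y(x).P
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Nil:                       # inactive process 0
    pass

@dataclass(frozen=True)
class Out:                       # message  y<z>
    subj: str
    obj: str

@dataclass(frozen=True)
class Inp:                       # receptor y(x).P  (repl=True for !y(x).P)
    subj: str
    x: str
    body: "Proc"
    repl: bool = False

@dataclass(frozen=True)
class Par:                       # parallel composition P | Q
    left: "Proc"
    right: "Proc"

@dataclass(frozen=True)
class Res:                       # restriction (x) P
    x: str
    body: "Proc"

Proc = Union[Nil, Out, Inp, Par, Res]

def fn(p: Proc) -> set:
    """Free names: names occurring outside the scope of a binder for them."""
    if isinstance(p, Nil):
        return set()
    if isinstance(p, Out):
        return {p.subj, p.obj}
    if isinstance(p, Inp):
        return {p.subj} | (fn(p.body) - {p.x})   # x is bound in the body
    if isinstance(p, Par):
        return fn(p.left) | fn(p.right)
    if isinstance(p, Res):
        return fn(p.body) - {p.x}                # x is bound in the body

# (z)( y<z> | y(x).x<w> )  has free names y and w
p = Res("z", Par(Out("y", "z"), Inp("y", "x", Out("x", "w"))))
print(sorted(fn(p)))  # ['w', 'y']
```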

2.2. Operational Semantics

Let y, z ∈ N be arbitrary names. Then, the set L of labels μ is generated by

    μ ::= ȳ(z)  |  ȳz  |  yz  |  τ

representing bound output, free output, early input, and internal action. The functions bn and fn yield the bound names (i.e., the objects in bound, bracketed outputs) and free names (all others) of a label. We write n(μ) for their union bn(μ) ∪ fn(μ).

The operational semantics for processes is given as a transition system with P as its set of states. The transition relation → ⊆ P × L × P is defined as the smallest relation generated by the set of rules in Table 1. We use an early instantiation scheme, as expressed in the INP and COM/CLOSE rules, since it allows us to define bisimulation without clauses for name instantiation and since it allows for a more intuitive modeling in Section 4, but this decision does not affect the validity of our

4 NESTMANN AND PIERCE



Page 67

Operational Semantics (I)

TABLE 1: Transition Semantics

    OUT:      ȳz --ȳz--> 0

    INP:      y(x).P --yz--> P[z/x]

    R-INP:    !y(x).P --yz--> P[z/x] | !y(x).P

    COM1*:    if P1 --ȳz--> P1′ and P2 --yz--> P2′,
              then P1 | P2 --τ--> P1′ | P2′

    OPEN:     if P --ȳz--> P′ and y ≠ z,
              then (z) P --ȳ(z)--> P′

    CLOSE1*:  if P1 --ȳ(z)--> P1′ and P2 --yz--> P2′ and z ∉ fn(P2),
              then P1 | P2 --τ--> (z)(P1′ | P2′)

    PAR1*:    if P1 --μ--> P1′ and bn(μ) ∩ fn(P2) = ∅,
              then P1 | P2 --μ--> P1′ | P2

    RES:      if P --μ--> P′ and x ∉ n(μ),
              then (x) P --μ--> (x) P′

    ALPHA:    if P ≡α P° and P° --μ--> P′,
              then P --μ--> P′

    *: and the evident symmetric rules COM2, CLOSE2, and PAR2
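The COM rule can be read executably. The sketch below is our own: it handles only a flat parallel composition of messages and non-replicated receptors whose bodies are again lists of messages, omitting restriction, replication, and nesting, to keep the idea of early instantiation visible:

```python
# Executable reading of rule COM1/COM2 (our simplification): a process is a
# list of components, either messages ('out', y, z) or receptors
# ('in', y, x, body) whose body is a list of messages.  A tau-step consumes a
# matching message/receptor pair and applies the early substitution [z/x].

def subst(body, x, z):
    """Substitute z for x in a body consisting of messages only."""
    return [('out', (z if s == x else s), (z if o == x else o))
            for (_, s, o) in body]

def tau_steps(proc):
    """All processes reachable in one tau-step."""
    results = []
    for i, a in enumerate(proc):
        for j, b in enumerate(proc):
            if a[0] == 'out' and b[0] == 'in' and a[1] == b[1]:
                rest = [p for k, p in enumerate(proc) if k not in (i, j)]
                results.append(rest + subst(b[3], b[2], a[2]))
    return results

# y<z> | y(x).x<w>   --tau-->   z<w>
proc = [('out', 'y', 'z'), ('in', 'y', 'x', [('out', 'x', 'w')])]
print(tau_steps(proc))  # [[('out', 'z', 'w')]]
```

The received name z ends up in subject position of the continuation's output, which is exactly the name mobility that the INP/COM instantiation scheme provides.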

results. Rule OPEN prepares for scope extrusion, whereas in CLOSE the previously opened scope of a bound name is closed upon its reception.

Weak arrows (⇒) denote the reflexive and transitive closure of internal transitions; arrows with hats denote that two processes are either related by a particular transition or else equal in the case of an internal transition.

    ⇒        =def  ( --τ--> )*

    --μ̂-->   =def  --μ-->         if μ ≠ τ
                   --τ--> ∪ =     if μ = τ

    =μ̂⇒      =def  ⇒ --μ̂--> ⇒

    =μ⇒      =def  ⇒ --μ--> ⇒

2.3. Bisimulation

Two process systems are equivalent when they allow us to observe the same operational behavior. Bisimulation equivalence is defined as the mutual simulation of single computation steps resulting in equivalent system states. In the standard literature on bisimulations, e.g., [Mil89, MPW92], a simulation is a binary relation S on processes such that (P, Q) ∈ S implies, for an arbitrary label μ:

    • if P --μ--> P′, then there is Q′ such that Q --μ--> Q′ and (P′, Q′) ∈ S.

5 DECODING CHOICE ENCODINGS





Page 70

Asynchronous Bisimulation

A bisimulation is a simulation whose opposite is again a simulation. In this subsection, we review various refinements of standard bisimulation: we define its asynchronous variant, identify a few structural laws, refine it to a preorder that takes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output- and input-transitions are dealt with symmetrically in the definition of bisimulation. In contrast, the concept of asynchronous messages suggests a nonstandard way of observing processes. Since the sending of a message to an observed system is not blocking for the observer, the latter cannot immediately detect whether the message was actually received or not. The only possible observations are messages eventually coming back from the system. Different formulations of asynchronous bisimulation (all inducing the same equivalence) have been proposed in the literature, based on a modified labeled input rule [HT92], on output-only barbed congruences [HY95], and on a standard labeled semantics with asynchronous observers [ACS98]. Here, we follow the latter approach. Unless otherwise stated, we implicitly assume an asynchronous interpretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

A relation S is called a strong bisimulation if both S and S⁻¹ are strong simulations. Two processes P and Q are strongly bisimilar, written P ∼ Q, if they are related by some strong bisimulation. Replacing Q −μ→ Q′ with the weak transition Q ⟹μ Q′ in this definition yields the weak versions of the corresponding simulations. Write ≈ for weak bisimulation. Process Q weakly simulates P, written P ≾ Q, if there is a weak simulation S with (P, Q) ∈ S.
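Weak transitions can be derived from the strong ones by saturating with internal steps. The following sketch assumes a finite transition relation with a distinguished "tau" label (an illustrative encoding, not the paper's):

```python
# Sketch: weak transitions derived from strong ones. "tau" marks the
# internal action; Q ==mu==> Q' means: any number of taus, then mu
# (if mu is visible), then any number of taus again.

def tau_closure(trans, s):
    # all states reachable from s by zero or more tau-steps
    seen, stack = {s}, [s]
    while stack:
        for (lbl, t) in trans[stack.pop()]:
            if lbl == "tau" and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def weak_successors(trans, s, mu):
    before = tau_closure(trans, s)           # leading internal steps
    if mu == "tau":                          # weak tau: zero or more taus
        return set().union(*(tau_closure(trans, t) for t in before))
    mid = {t for u in before for (lbl, t) in trans[u] if lbl == mu}
    after = [tau_closure(trans, t) for t in mid]  # trailing internal steps
    return set().union(*after) if after else set()

lts = {"A": {("tau", "B")}, "B": {("a", "C")}, "C": set()}
```

Note that the weak tau-transition includes the empty sequence of steps, so every state weakly reaches itself; this is what makes a weak bisimulation forgiving about administrative internal moves.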

Fact 2.3.2. ≈ is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely "structural" content; they are valid with respect to all the different kinds of behavioral congruences, equivalences, and preorders, including strong bisimulation (the finest "reasonable" equivalence). In this paper, we use a few structural laws (indicated by the symbol ≡) listed in Table 2 in order to simplify the presentation of some derivation sequences of transitions in the proofs of Section 5.

Fact 2.3.3. ≡ ⊆ ∼.
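For parallel composition, laws of this structural kind can be decided by flattening terms to multisets. A minimal sketch, assuming the usual laws P | Q ≡ Q | P, (P | Q) | R ≡ P | (Q | R), and P | 0 ≡ P (the paper's Table 2 itself is not reproduced here):

```python
# Sketch: deciding structural congruence of parallel compositions under
# commutativity, associativity, and P | 0 == P, by flattening each term
# to a sorted multiset of its parallel components.
# Terms: ("par", t1, t2), the inert process ("nil",), or atoms ("act", name).

def flatten(term):
    if term[0] == "par":
        return flatten(term[1]) + flatten(term[2])
    if term[0] == "nil":
        return []          # P | 0 == P: nil components vanish
    return [term]

def struct_equal(t1, t2):
    # sorting quotients away the order and nesting of |
    return sorted(flatten(t1)) == sorted(flatten(t2))

p = ("par", ("act", "a"), ("par", ("act", "b"), ("nil",)))
q = ("par", ("par", ("nil",), ("act", "b")), ("act", "a"))
```

Here `struct_equal(p, q)` holds: both terms flatten to the same multiset of components, {a, b}.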

In particular, we work up to α-conversion where appropriate, i.e., we omit mentioning the implicit application of rule ALPHA, since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

37Wednesday, October 14, 2009

Page 71: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Asynchronous Bisimulation

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

37Wednesday, October 14, 2009

Page 72: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Asynchronous Bisimulation

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE


Page 74: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Asynchronous Bisimulation

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

.......

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

37Wednesday, October 14, 2009

Page 75: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Asynchronous Bisimulation

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

.......

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE

A bisimulation is a simulation whose opposite is again a simulation. In this sub-section, we review various refinements of standard bisimulation: we define itsasynchronous variant, identify a few structural laws, refine it to a preorder thattakes efficiency into account, and finally refine it to deal with divergence.

Asynchrony

In calculi with synchronous message-passing, output-and input-transitions aredealt with symmetrically in the definition of bisimulation. In contrast, the conceptof asynchronous messages suggests a nonstandard way of observing processes. Sincethe sending of a message to an observed system is not blocking for the observer,the latter cannot immediately detect whether the message was actually received ornot. The only possible observations are messages eventually coming back from thesystem. Different formulations of asynchronous bisimulation (all inducing the sameequivalence) have been proposed in the literature, based on a modified labeledinput rule [HT92], on output-only barbed congruences [HY95] and on a standardlabeled semantics with asynchronous observers [ACS98]. Here, we follow the latterapproach. Unless otherwise stated, we implicitly assume an asynchronous inter-pretation of observation throughout the paper.

Definition 2.3.1 (Simulation, bisimulation). A binary relation S on processesis a strong simulation if (P, Q) #S implies:

v if Pw!+ P $, where + is either { or output with bn(+)& fn(P |Q)=<, thenthere is Q$ such that Qw!+ Q$ and (P $, Q$) #S

v (a! z | P, a! z |Q) #S for arbitrary messages a! z.

B is called a strong bisimulation if both B and B&1 are strong simulations. Twoprocesses P and Q are strongly bisimilar, written PtQ, if they are related by somestrong bisimulation.Replacing Q w!+ Q$ with QO

+Q$ in this definition yields the weak versions of the

corresponding simulations. Write r for weak bisimulation. Process Q weaklysimulates P, written PPQ, if there is a weak simulation S with (P, Q) #S.

Fact 2.3.2. r is a congruence on P [Hon92, ACS98].

Structural Laws

Certain laws on processes have been recognized as having merely ``structural''content; they are valid with respect to all different kinds of behavioral congruences,equivalence, and preorders, including strong bisimulation (the finest ``reasonable''equivalence). In this paper, we use a few structural laws (indicated by the symbol# ) listed in Table 2 in order to simplify the presentation of some derivationsequences of transitions in the proofs of Section 5.

Fact 2.3.3. # / t.

In particular, we work up to :-conversion, where appropriate, i.e., we omit tomention the implicit application of rule ALPHA since it may be captured by a

6 NESTMANN AND PIERCE


37 Wednesday, October 14, 2009

Page 76: Encodings into Asynchronous Pi Calculus

When Weak is Too Weak ... (* better control on τ-moves *)

38 Wednesday, October 14, 2009

Page 77: Encodings into Asynchronous Pi Calculus

When Weak is Too Weak ...

TABLE 2
Structural Laws

α-conversion        P ≡ Q                       if P =α Q
associativity       P | (Q | R) ≡ (P | Q) | R
commutativity       P | Q ≡ Q | P
neutrality          P | 0 ≡ P
scope extrusion     (νy) P | Q ≡ (νy)(P | Q)    if y ∉ fn(Q)
scope elimination   (νy) Q ≡ Q                  if y ∉ fn(Q)

simple structural transformation. Thus, we silently identify processes or actions which differ only in the choice of bound names. Furthermore, due to the associativity law for composition, we omit brackets in multiple parallel compositions and use finite parallel composition ∏ with the usual meaning.
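The associativity, commutativity, and neutrality laws of Table 2 justify treating a parallel composition as a finite multiset of components. A minimal sketch of this idea (the tuple encoding of terms is an assumption made for illustration, not the paper's syntax):

```python
# A process here is either the string '0', an atomic string such as 'a<z>',
# or a tuple ('par', left, right) for parallel composition.

def flatten(proc):
    """Collect the parallel components of proc, dropping inert '0' threads
    (the neutrality law)."""
    if proc == '0':
        return []
    if isinstance(proc, tuple) and proc[0] == 'par':
        return flatten(proc[1]) + flatten(proc[2])
    return [proc]

def normal_form(proc):
    # Sorting makes the result independent of bracketing (associativity)
    # and of the order of composition (commutativity).
    comps = sorted(flatten(proc))
    return comps if comps else ['0']

p = ('par', ('par', 'a<z>', '0'), 'b<w>')
q = ('par', 'b<w>', 'a<z>')
```

For this toy fragment, two terms are related by the three laws exactly when their normal forms coincide, which is why brackets in multiple parallel compositions can be omitted without ambiguity.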

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. The expansion preorder [AH92] takes this into account by stating that one process engages in at least as many internal actions as another. We use expansions to define an up-to technique in Section 2.5 and to precisely formulate the correctness of an encoding in Section 5.5.

Deviating from the standard presentation of expansion, we introduce some auxiliary terminology that will be useful in the next subsection for capturing aspects of divergence. It is partly inspired by the notion of progressing bisimulation in CCS [MS92b].

Definition 2.3.4. A weak simulation S is called

• progressing, if P −μ→ P′ implies that there is Q′ with Q =μ⇒ Q′ (a weak transition comprising at least one step) such that (P′, Q′) ∈ S

• strict, if P −μ→ P′ implies that there is Q′ with Q −μ̂→ Q′ (at most a single step) such that (P′, Q′) ∈ S

for all (P, Q) ∈ S and for all μ being τ or an output with bn(μ) ∩ fn(P | Q) = ∅. Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansion if E is a progressing simulation and E⁻¹ is a strict simulation. Process Q expands P, written P ≲ Q, if there is an expansion E with (P, Q) ∈ E.

Fact 2.3.6. ∼ ⊆ ≲ ⊆ ≈.
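The difference between the progressing and the strict requirement can be tested mechanically on a toy transition system. In the sketch below (state and label names are invented, 'tau' stands for τ, and binders are ignored), the candidate relation R witnesses that a.0 is expanded by τ.a.0, while the converse candidate fails, since the progressing clause forbids answering a τ-step with inaction:

```python
# Toy LTS for P = a.0 (states p0, p1) and Q = tau.a.0 (states q0, q1, q2).
TRANS = {('p0', 'a', 'p1'),
         ('q0', 'tau', 'q1'), ('q1', 'a', 'q2')}

def steps(s):
    return [(l, t) for (x, l, t) in TRANS if x == s]

def tau_closure(s):
    seen, todo = {s}, [s]
    while todo:
        for (l, t) in steps(todo.pop()):
            if l == 'tau' and t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def weak_targets(q, label):
    """Weak 'label'-moves of q containing at least one transition
    (the progressing requirement of Definition 2.3.4)."""
    out = set()
    for x in tau_closure(q):
        for (l, t) in steps(x):
            if l == label:
                out |= tau_closure(t)
    return out

def strict_targets(p, label):
    """Answers of p using at most one transition (the strict
    requirement); a tau may be answered by standing still."""
    out = {t for (l, t) in steps(p) if l == label}
    if label == 'tau':
        out.add(p)
    return out

def is_expansion(rel):
    for (p, q) in rel:
        for (l, p1) in steps(p):       # progressing direction: P's moves
            if not any((p1, q1) in rel for q1 in weak_targets(q, l)):
                return False
        for (l, q1) in steps(q):       # strict direction: Q's moves
            if not any((p1, q1) in rel for p1 in strict_targets(p, l)):
                return False
    return True

R = {('p0', 'q0'), ('p0', 'q1'), ('p1', 'q2')}
```

Checking is_expansion(R) succeeds, so p0 ≲ q0; the singleton candidate {('q0', 'p0')} is rejected because q0's τ-step has no progressing match in p0.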

Divergence

Since we are going to formally reason about divergence properties of encodings in Section 5.8, we need some basic definitions. According to standard intuitions, a

7 DECODING CHOICE ENCODINGS


Page 78: Encodings into Asynchronous Pi Calculus

When Weak is Too Weak ...


(* better control on τ-moves *)


Page 79: Encodings into Asynchronous Pi Calculus

When Weak is Too Weak ...


(* better control on τ-moves *)

at least one τ


Page 80: Encodings into Asynchronous Pi Calculus

When Weak is Too Weak ...

TABLE 2
Structural Laws

α-conversion        P ≡ Q  if P =α Q
associativity       P | (Q | R) ≡ (P | Q) | R
commutativity       P | Q ≡ Q | P
neutrality          P | 0 ≡ P
scope extrusion     (νy)P | Q ≡ (νy)(P | Q)  if y ∉ fn(Q)
scope elimination   (νy)Q ≡ Q  if y ∉ fn(Q)

… simple structural transformation. Thus, we silently identify processes or actions which only differ in the choice of bound names. Furthermore, due to the associativity law for composition, we omit brackets in multiple parallel compositions and use finite parallel composition ∏ with the usual meaning.

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. The expansion preorder [AH92] takes this into account by stating that one process engages in at least as many internal actions as another. We use expansions to define an up-to technique in Section 2.5 and to precisely formulate the correctness of an encoding in Section 5.5.

Deviating from the standard presentation of expansion, we introduce some auxiliary terminology that will be useful in the next subsection for capturing aspects of divergence. It is partly inspired by the notion of progressing bisimulation in CCS [MS92b].

Definition 2.3.4. A weak simulation S is called

• progressing, if P —μ→ P′ implies that there is Q′ with Q =μ⇒ Q′ such that (P′, Q′) ∈ S (the match uses at least one transition);

• strict, if P —μ→ P′ implies that there is Q′ with Q —μ̂→ Q′ such that (P′, Q′) ∈ S (the match uses at most one transition);

for all (P, Q) ∈ S and for all μ being τ or an output with bn(μ) ∩ fn(P | Q) = ∅. Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansion if E is a progressing simulation and E⁻¹ is a strict simulation. Process Q expands P, written P ≲ Q, if there is an expansion E with (P, Q) ∈ E.

Fact 2.3.6. ∼ ⊆ ≲ ⊆ ≈.

Divergence

Since we are going to formally reason about divergence properties of encodings in Section 5.8, we need some basic definitions. According to standard intuitions, a …

— from DECODING CHOICE ENCODINGS, p. 7
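Definitions 2.3.4 and 2.3.5 can be exercised on a tiny example. The sketch below (an illustration, not from the slides: the finite-LTS encoding, state names, and function names are all assumptions) checks the "at least one transition" (progressing) and "at most one transition" (strict) matching conditions on finite labelled transition systems, restricted to τ and simple visible actions, and uses them to witness that a.0 ≲ τ.a.0 while the converse direction fails on the progressing condition.

```python
def tau_closure(lts, states):
    """All states reachable from `states` via zero or more tau steps."""
    seen, stack = set(states), list(states)
    while stack:
        t = stack.pop()
        for a, u in lts.get(t, []):
            if a == "tau" and u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def matches(lts, q, mu, mode):
    """States q' such that q weakly matches action mu.
    mode 'progressing': the match must use at least one transition.
    mode 'strict': the match may use at most one transition."""
    if mu == "tau":
        one = {u for a, u in lts.get(q, []) if a == "tau"}
        if mode == "strict":
            return {q} | one               # zero or one tau step
        return tau_closure(lts, one)       # at least one tau step
    if mode == "strict":
        return {u for a, u in lts.get(q, []) if a == mu}  # exactly one step
    pre = tau_closure(lts, {q})
    mid = {u for t in pre for a, u in lts.get(t, []) if a == mu}
    return tau_closure(lts, mid)           # tau* mu tau*: >= 1 step anyway

def is_simulation(lts, rel, mode):
    """Does every move of the left component have a matching move
    (in the given mode) of the right component, staying inside rel?"""
    return all(
        any((p2, q2) in rel for q2 in matches(lts, q, mu, mode))
        for (p, q) in rel
        for mu, p2 in lts.get(p, [])
    )

# P = a.0 and Q = tau.a.0, drawn in one LTS with disjoint state names.
lts = {
    "p0": [("a", "p1")], "p1": [],
    "q0": [("tau", "q1")], "q1": [("a", "q2")], "q2": [],
}
rel = {("p0", "q0"), ("p0", "q1"), ("p1", "q2")}
inv = {(q, p) for (p, q) in rel}

# rel is a progressing simulation and its inverse is strict, so rel is an
# expansion (Definition 2.3.5) witnessing a.0 <= tau.a.0.
print(is_simulation(lts, rel, "progressing"))  # True
print(is_simulation(lts, inv, "strict"))       # True
# The converse fails: p0 cannot match q0's tau using >= 1 transition.
print(is_simulation(lts, inv, "progressing"))  # False
```

As expected from Definition 2.3.4, rel is not itself strict (q0 cannot match the a-step of p0 in a single transition), so the pair is related by expansion but not by strong bisimilarity.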

(* better control on τ-moves *)

at least one τ


Page 81: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

When Weak is Too Weak ...

at most one τ


Page 82: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

When Weak is Too Weak ...

TABLE 2
Structural Laws

α-conversion       P ≡ Q if P =α Q
associativity      P | (Q | R) ≡ (P | Q) | R
commutativity      P | Q ≡ Q | P
neutrality         P | 0 ≡ P
scope extrusion    (νy)P | Q ≡ (νy)(P | Q) if y ∉ fn(Q)
scope elimination  (νy)Q ≡ Q if y ∉ fn(Q)

simple structural transformation. Thus, we silently identify processes or actions which only differ in the choice of bound names. Furthermore, due to the associativity law for composition, we omit brackets in multiple parallel composition and use finite parallel composition ∏ with the usual meaning.

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. The expansion preorder [AH92] takes this into account by stating that one process engages in at least as many internal actions as another. We use expansions to define an up-to technique in Section 2.5 and to precisely formulate the correctness of an encoding in Section 5.5.

Deviating from the standard presentation of expansion, we introduce some auxiliary terminology that will be useful in the next subsection for capturing aspects of divergence. It is partly inspired by the notion of progressing bisimulation in CCS [MS92b].

Definition 2.3.4. A weak simulation S is called

• progressing, if P –μ→ P′ implies that there is Q′ with Q ⇒μ Q′ (i.e. Q ⇒ –μ→ ⇒ Q′, hence at least one transition) such that (P′, Q′) ∈ S;

• strict, if P –μ→ P′ implies that there is Q′ with Q –μ̂→ Q′ (i.e. Q –μ→ Q′ or, when μ = τ, possibly Q′ = Q: at most one transition) such that (P′, Q′) ∈ S;

for all (P, Q) ∈ S and for all μ being τ or output with bn(μ) ∩ fn(P | Q) = ∅. Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansion if E is a progressing simulation and E⁻¹ is a strict simulation. Process Q expands P, written P ≼ Q, if there is an expansion E with (P, Q) ∈ E.

Fact 2.3.6. ∼ ⊆ ≼ ⊆ ≈.

Divergence

Since we are going to formally reason about divergence properties of encodings in Section 5.8, we need some basic definitions. According to standard intuitions, a

DECODING CHOICE ENCODINGS, p. 7
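Apart from α-conversion and the two scope laws, Table 2 says that parallel composition behaves like a multiset union of components. As a small illustration (our own sketch, not code from the lecture or the paper; the representation and function names are made up), structural congruence restricted to the |-laws can be decided by normalizing both processes and comparing:

```python
# Toy check of structural congruence restricted to the |-laws
# (associativity, commutativity, neutrality); alpha-conversion and
# the scope laws are deliberately ignored. Representation (ours):
# "0" is the nil process, any other string is an atomic process,
# and ("par", L, R) is the composition L | R.

def components(proc):
    """Multiset (as a sorted list) of atomic components of a composition."""
    if proc == "0":
        return []                       # neutrality: 0 contributes nothing
    if isinstance(proc, tuple) and proc[0] == "par":
        # associativity: flatten nested compositions into one list
        return sorted(components(proc[1]) + components(proc[2]))
    return [proc]

def congruent(p, q):
    """p ≡ q up to associativity, commutativity and neutrality of |."""
    # commutativity is handled by sorting the component lists
    return components(p) == components(q)

# associativity: P | (Q | R) ≡ (P | Q) | R
assert congruent(("par", "P", ("par", "Q", "R")),
                 ("par", ("par", "P", "Q"), "R"))
# neutrality and commutativity: (P | 0) | Q ≡ Q | P
assert congruent(("par", ("par", "P", "0"), "Q"), ("par", "Q", "P"))
```

This also shows why the paper can omit brackets in multiple parallel compositions: the normal form does not depend on how the composition was bracketed.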


(* better control on τ-moves *)

at least one τ

at most one τ
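On a finite-state system the two annotations above ("at least one τ" for the progressing direction, "at most one τ" for the strict one) can be made executable. The following sketch is our own illustration, not code from the lecture: it ignores bound names and value passing, treats labels as plain strings with "tau" for τ, and computes the largest relation satisfying both expansion conditions by iterated removal of violating pairs.

```python
from itertools import product

def tau_closure(s, trans):
    """States reachable from s by zero or more tau-steps."""
    seen, stack = {s}, [s]
    while stack:
        for lab, t in trans.get(stack.pop(), ()):
            if lab == "tau" and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def weak_targets(q, lab, trans):
    """Progressing match: tau* lab tau*, hence at least one transition."""
    return {t2 for a in tau_closure(q, trans)
               for l, t in trans.get(a, ()) if l == lab
               for t2 in tau_closure(t, trans)}

def strict_targets(p, lab, trans):
    """Strict match: at most one transition (zero allowed only for tau)."""
    one = {t for l, t in trans.get(p, ()) if l == lab}
    return one | {p} if lab == "tau" else one

def expansion(states, trans):
    """Greatest R such that for every (p, q) in R: each move of p is
    weakly matched by q, and each move of q is strictly matched by p.
    Then (p, q) in R corresponds to p =< q: 'q expands p'."""
    R = set(product(states, states))
    changed = True
    while changed:
        changed = False
        for p, q in list(R):
            ok = all(any((p2, q2) in R for q2 in weak_targets(q, lab, trans))
                     for lab, p2 in trans.get(p, ()))
            ok = ok and all(any((p2, q2) in R
                                for p2 in strict_targets(p, lab, trans))
                            for lab, q2 in trans.get(q, ()))
            if not ok:
                R.discard((p, q))
                changed = True
    return R

# p0 behaves like a.0 and q0 like tau.a.0 (state names are ad hoc):
states = {"p0", "p1", "q0", "q1", "q2"}
trans = {"p0": [("a", "p1")], "q0": [("tau", "q1")], "q1": [("a", "q2")]}
R = expansion(states, trans)
assert ("p0", "q0") in R      # a.0 is expanded by tau.a.0
assert ("q0", "p0") not in R  # p0 cannot match q0's tau with >= 1 transition
```

The final asymmetry is the point of the preorder: the τ-free process a.0 is the more efficient one, so tau.a.0 expands it but not conversely.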

38Wednesday, October 14, 2009

Page 83: Encodings into Asynchronous Pi Calculus

Too Weak (I): Expansion [Arun-Kumar, Hennessy 1991]

39Wednesday, October 14, 2009

Page 84: Encodings into Asynchronous Pi Calculus

Too Weak (I): Expansion


[Arun-Kumar, Hennessy 1991]

39Wednesday, October 14, 2009

Page 85: Encodings into Asynchronous Pi Calculus

Too Weak (I): Expansion

TABLE 2Structural Laws

:-conversion P#Q if P= : Q

associativity P | (Q | R)#(P |Q) | R

commutativity P | Q#Q | P

neutrality P | 0#P

scope extrusion ( y) P | Q#( y)(P |Q) if y ! fn(Q)

scope elimination ( y) Q#Q if y ! fn(Q)

simple structural transformation. Thus, we silently identify processes or actionswhich only differ in the choice of bound names. Furthermore, due to theassociativity law for composition, we omit brackets in multiple parallel compositionand use finite parallel composition > with the usual meaning.

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. Theexpansion preorder [AH92] takes this into account by stating that one processengages in at least as many internal actions as another. We use expansions to definean up-to technique in Section 2.5 and to precisely formulate the correctness of anencoding in Section 5.5.Deviating from the standard presentation of expansion, we introduce some

auxiliary terminology that will be useful in the next subsection for capturing aspectsof divergence. It is partly inspired by the notion of progressing bisimulation in CCS[MS92b].

Definition 2.3.4. A weak simulation S is called

v progressing, if Pw!+ P $ implies that there is Q$ with Q O+Q$ such that

(P $, Q$) #S

v strict, if Pw!+ P $ implies that there is Q$ with Qw!+! Q$ such that(P $, Q$) #S

for all (P, Q) #S and for all + being { or output with bn(+)& fn(P |Q)=<.Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansionif E is a progressing simulation and E&1 is a strict simulation. Process Q expandsP, written P"Q, if there is an expansion E with (P, Q) # E.

Fact 2.3.6. t / " / r.

Divergence

Since we are going to formally reason about divergence properties of encodingsin Section 5.8, we need some basic definitions. According to standard intuitions, a

7DECODING CHOICE ENCODINGS

TABLE 2Structural Laws

:-conversion P#Q if P= : Q

associativity P | (Q | R)#(P |Q) | R

commutativity P | Q#Q | P

neutrality P | 0#P

scope extrusion ( y) P | Q#( y)(P |Q) if y ! fn(Q)

scope elimination ( y) Q#Q if y ! fn(Q)

simple structural transformation. Thus, we silently identify processes or actionswhich only differ in the choice of bound names. Furthermore, due to theassociativity law for composition, we omit brackets in multiple parallel compositionand use finite parallel composition > with the usual meaning.

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. Theexpansion preorder [AH92] takes this into account by stating that one processengages in at least as many internal actions as another. We use expansions to definean up-to technique in Section 2.5 and to precisely formulate the correctness of anencoding in Section 5.5.Deviating from the standard presentation of expansion, we introduce some

auxiliary terminology that will be useful in the next subsection for capturing aspectsof divergence. It is partly inspired by the notion of progressing bisimulation in CCS[MS92b].

Definition 2.3.4. A weak simulation S is called

v progressing, if Pw!+ P $ implies that there is Q$ with Q O+Q$ such that

(P $, Q$) #S

v strict, if Pw!+ P $ implies that there is Q$ with Qw!+! Q$ such that(P $, Q$) #S

for all (P, Q) #S and for all + being { or output with bn(+)& fn(P |Q)=<.Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansionif E is a progressing simulation and E&1 is a strict simulation. Process Q expandsP, written P"Q, if there is an expansion E with (P, Q) # E.

Fact 2.3.6. t / " / r.

Divergence

Since we are going to formally reason about divergence properties of encodingsin Section 5.8, we need some basic definitions. According to standard intuitions, a

7DECODING CHOICE ENCODINGS

TABLE 2Structural Laws

:-conversion P#Q if P= : Q

associativity P | (Q | R)#(P |Q) | R

commutativity P | Q#Q | P

neutrality P | 0#P

scope extrusion ( y) P | Q#( y)(P |Q) if y ! fn(Q)

scope elimination ( y) Q#Q if y ! fn(Q)

simple structural transformation. Thus, we silently identify processes or actionswhich only differ in the choice of bound names. Furthermore, due to theassociativity law for composition, we omit brackets in multiple parallel compositionand use finite parallel composition > with the usual meaning.

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. Theexpansion preorder [AH92] takes this into account by stating that one processengages in at least as many internal actions as another. We use expansions to definean up-to technique in Section 2.5 and to precisely formulate the correctness of anencoding in Section 5.5.Deviating from the standard presentation of expansion, we introduce some

auxiliary terminology that will be useful in the next subsection for capturing aspectsof divergence. It is partly inspired by the notion of progressing bisimulation in CCS[MS92b].

Definition 2.3.4. A weak simulation S is called

v progressing, if Pw!+ P $ implies that there is Q$ with Q O+Q$ such that

(P $, Q$) #S

v strict, if Pw!+ P $ implies that there is Q$ with Qw!+! Q$ such that(P $, Q$) #S

for all (P, Q) #S and for all + being { or output with bn(+)& fn(P |Q)=<.Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansion if E is a progressing simulation and E⁻¹ is a strict simulation. Process Q expands P, written P ≲ Q, if there is an expansion E with (P, Q) ∈ E.

Fact 2.3.6. ∼ ⊆ ≲ ⊆ ≈.

Divergence

Since we are going to formally reason about divergence properties of encodings in Section 5.8, we need some basic definitions.


[Arun-Kumar, Hennessy 1991]

39Wednesday, October 14, 2009

Page 86: Encodings into Asynchronous Pi Calculus

Too Weak (I): Expansion


Page 87: Encodings into Asynchronous Pi Calculus

Too Weak (II): Eventual Progress

According to standard intuitions, a process P is said to be divergent, written P↑, if there is an infinite sequence of τ-steps starting at P; otherwise it is called convergent, written P↓.

Weak (bi-)simulation is insensitive to the divergence of processes. This lack of expressiveness arises from the definition of weak simulation, which may always choose to mimic τ-steps trivially. Weak bisimulation, or observation equivalence, may therefore equate two processes, exactly one of which is diverging: it simply ignores the existence of infinite τ-sequences. Enhancements of bisimulation have been investigated that take divergence behavior explicitly into account, resulting in preorders among bisimilar processes [Wal90].

For our purposes in Section 5.8, the simpler property of preserving divergence will suffice. A weak simulation S has this property if P↑ implies Q↑ for all (P, Q) ∈ S. Intuitively, when required to weakly simulate an infinite τ-sequence, S must progress infinitely often. Let us introduce some further notation (inspired by Priese [Pri78]) to make precise what this means.

Let −τ→ⁿ denote a τ-sequence of length n, and let −τ→⁺ =def −τ→ⁿ for some n > 0 denote a nonempty, but arbitrarily long, finite sequence of τ-steps.
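On a finite-state system, P↑ amounts to reaching a τ-cycle by τ-steps. A small sketch under that assumption (the example LTS is ours, not the paper's):

```python
# Divergence on a finite LTS: P is divergent iff a tau-cycle is reachable
# from P by tau-steps, since only cycles yield infinite tau-sequences.

tau = {"p": {"q"}, "q": {"q"},   # p --tau--> q --tau--> q --tau--> ...
       "r": {"s"}, "s": set()}   # r converges after one step

def divergent(p, stack=()):
    if p in stack:               # found a reachable tau-cycle
        return True
    return any(divergent(s, stack + (p,)) for s in tau[p])

print(divergent("p"))  # True:  p tau-reaches the self-loop at q
print(divergent("r"))  # False: every tau-sequence from r is finite
```

The depth-first search explores each τ-path until it either revisits a state (divergence) or runs out of τ-steps (convergence); for infinite-state processes no such decision procedure exists, which is why the paper reasons with simulations instead.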

Definition 2.3.7 (Eventually progressing simulation). A weak simulation S is called eventually progressing if, for all (P, Q) ∈ S, there is a natural number k_P ∈ ℕ such that P −τ→ⁿ P′ with n > k_P implies that there is Q′ with Q −τ→⁺ Q′ such that (P′, Q′) ∈ S.

According to the definition, a simulation S is eventually progressing if, for sufficiently long finite τ-sequences, S must eventually reproduce a τ-step. In this respect, k_P is to be understood as an upper bound for the number of τ-steps starting from P that may be trivially simulated. Note that every progressing simulation is, by definition, eventually progressing with upper bound 0.

With an eventually progressing simulation, every infinite sequence may be simulated by subsequently simulating sufficiently long finite subsequences nontrivially, i.e., such that they represent progress. This resembles the chunk-by-chunk idea of simulation due to Gammelgaard [Gam91].
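The role of the bound k_P can be made concrete on an artificial counter system (our encoding, not the paper's): both sides perform τ-steps forever, and the candidate relation lets Q lag at most k steps behind P. Up to k consecutive P-steps may then be simulated trivially, but every step beyond the bound forces Q to progress:

```python
# Eventually progressing, concretely: states are step counters, every
# state m has the single tau-step m -> m+1, and the candidate relation
# is S_k = {(m, n) : m - n <= k}, i.e. Q may lag at most k steps behind.

def in_relation(m, n, k):
    return m - n <= k

def simulate(m, n, k, p_steps):
    """Match p_steps tau-steps of P, letting Q idle while the relation
    permits it; return how many tau-steps Q was forced to take."""
    q_steps = 0
    for _ in range(p_steps):
        m += 1
        if not in_relation(m, n, k):  # lag exhausted: Q must progress
            n += 1
            q_steps += 1
    assert in_relation(m, n, k)       # the pair stays inside S_k
    return q_steps

print(simulate(0, 0, 3, 3))   # 0: the first k steps are simulated trivially
print(simulate(0, 0, 3, 10))  # 7: beyond k, each further step forces progress
```

Since Q moves infinitely often along any infinite τ-sequence of P, the relation preserves divergence, in line with Lemma 2.3.8.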

Lemma 2.3.8. Eventually progressing simulations preserve divergence.

Proof. Let S be an eventually progressing simulation and (P, Q) ∈ S. If P↑ then there is an infinite sequence P −τ→^ω. Since S is eventually progressing, there is k_P ∈ ℕ such that P −τ→^(k_P+1) P′ −τ→^ω and Q −τ→⁺ Q′ with (P′, Q′) ∈ S. Since now P′↑, we can repeat the procedure infinitely often. ∎

2.4. When Weak Bisimulation Is Too Strong

Every bisimulation B can be regarded as a pair (S₁, S₂) of contrary simulations S₁ and S₂⁻¹, where S₁ and S₂ contain exactly the same pairs of processes, i.e., S₁ = B = S₂. For some applications, this requirement is too strong. For example, consider the following π-processes,

P =def (νi) ( i̅ | i.A | i.B | i.C )

Q =def (νi₁)(νi₂) ( i̅₁ | i̅₂ | i₁.A | i₁.(i₂.B | i₂.C) ),
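The difference between P and Q can be made visible by listing their internal commitment steps, abstracting each state to its set of still-reachable outcomes (this abstraction is ours; A, B, C are treated as opaque results):

```python
# Internal choice structure of P and Q from Section 2.4. States are
# abstracted to the set of still-possible outcomes; tau-steps shrink it.

# P = (new i)( i-bar | i.A | i.B | i.C ): one output on i, three
# receivers, so a single tau-step commits to the winner outright.
def steps_P(state):
    if state == {"A", "B", "C"}:
        return [{"A"}, {"B"}, {"C"}]
    return []

# Q = (new i1)(new i2)( i1-bar | i2-bar | i1.A | i1.(i2.B | i2.C) ):
# the i1-step either picks A or merely rules A out; only a later
# i2-step decides between B and C.
def steps_Q(state):
    if state == {"A", "B", "C"}:
        return [{"A"}, {"B", "C"}]       # gradual commitment
    if state == {"B", "C"}:
        return [{"B"}, {"C"}]
    return []

print(steps_P({"A", "B", "C"}))  # three total commitments in one step
print(steps_Q({"A", "B", "C"}))  # includes the partial commitment {B, C}
```

Q's intermediate state, which has ruled out A but not yet chosen between B and C, has no weakly bisimilar counterpart among the states of P; this gradual commitment is precisely why requiring S₁ = B = S₂ is too strong here.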

8 NESTMANN AND PIERCE

40Wednesday, October 14, 2009

Page 88: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Too Weak (II): Eventual Progress

TABLE 2Structural Laws

:-conversion P#Q if P= : Q

associativity P | (Q | R)#(P |Q) | R

commutativity P | Q#Q | P

neutrality P | 0#P

scope extrusion ( y) P | Q#( y)(P |Q) if y ! fn(Q)

scope elimination ( y) Q#Q if y ! fn(Q)

simple structural transformation. Thus, we silently identify processes or actionswhich only differ in the choice of bound names. Furthermore, due to theassociativity law for composition, we omit brackets in multiple parallel compositionand use finite parallel composition > with the usual meaning.

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. Theexpansion preorder [AH92] takes this into account by stating that one processengages in at least as many internal actions as another. We use expansions to definean up-to technique in Section 2.5 and to precisely formulate the correctness of anencoding in Section 5.5.Deviating from the standard presentation of expansion, we introduce some

auxiliary terminology that will be useful in the next subsection for capturing aspectsof divergence. It is partly inspired by the notion of progressing bisimulation in CCS[MS92b].

Definition 2.3.4. A weak simulation S is called

v progressing, if Pw!+ P $ implies that there is Q$ with Q O+Q$ such that

(P $, Q$) #S

v strict, if Pw!+ P $ implies that there is Q$ with Qw!+! Q$ such that(P $, Q$) #S

for all (P, Q) #S and for all + being { or output with bn(+)& fn(P |Q)=<.Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansionif E is a progressing simulation and E&1 is a strict simulation. Process Q expandsP, written P"Q, if there is an expansion E with (P, Q) # E.

Fact 2.3.6. t / " / r.

Divergence

Since we are going to formally reason about divergence properties of encodingsin Section 5.8, we need some basic definitions. According to standard intuitions, a

7DECODING CHOICE ENCODINGS

process P is said to be divergent, written P ! , if there is an infinite sequence of{-steps starting at P; otherwise it is called convergent, written P- .Weak (bi-)simulation is insensitive to the divergence of processes. This lack of

expressiveness arises from the definition of weak simulation, which may alwayschoose to mimic {-steps trivially. Weak bisimulation, or observation equivalence,may therefore equate two processes, exactly one of which is diverging: it simplyignores the existence of infinite {-sequences. Enhancements of bisimulation havebeen investigated that take divergence behavior explicitly into account, resulting inpreorders among bisimilar processes [Wal90].For our purposes in Section 5.8, the simpler property of preserving divergence will

suffice. A weak simulation S has this property if P ! implies Q ! for all (P, Q) #S.Intuitively, when required to weakly simulate an infinite {-sequence, S mustprogress infinitely often. Let us introduce some further notation (inspired by Priese[Pri78]) to make precise what this means.Let w!{ n denote a {-sequence of length n and w!{ + =

defw!{ n for some n>0

denote a nonempty, but arbitrarily long, finite sequence of {-steps.

Definition 2.3.7 (Eventually progressing simulation). A weak simulation S iscalled eventually progressing if, for all (P, Q) #S, there is a natural number kP #Nsuch that Pw!{ n P $ with n>kP implies that there is Q$ with Qw!{ + Q$ such that(P $, Q$) #S.

According to the definition, a simulation S is eventually progressing if, for suf-ficiently long finite {-sequences, S must eventually reproduce a {-step. In thisrespect, kP is to be understood as an upper bound for the number of {-steps startingfrom P that may be trivially simulated. Note that every progressing simulation is,by definition, eventually progressing with upper bound 0.With a progressing simulation, every infinite sequence may be simulated by sub-

sequently simulating sufficiently long finite subsequences nontrivially, i.e., such thatthey represent progress. This resembles the chunk-by-chunk idea of simulation dueto Gammelgaard [Gam91].

Lemma 2.3.8. Eventually progressing simulations preserve divergence.

Proof. Let S be a progressing simulation and (P, Q) #S. If P ! then there isPw!{ |. Since S is progressing, there is kP #N such that Pw!{ kP+1 P$ w!{ | andQw!{ + Q$ with (P$, Q$) #S. Since now P$ ! , we can repeat the procedure infinitelyoften. K

2.4. When Weak Bisimulation Is Too Strong

Every bisimulation B can be regarded as a pair (S1 , S2) of contrary simulationsS1 and S&1

2 , where S1 and S2 contain exactly the same pairs of processes, i.e.,S1=B=S2 . For some applications, this requirement is too strong. For example,consider the following P-processes,

P =def (i) ( i! | i .A | i .B | i .C)

Q =def (i1)(i2) ( i1 | i2 | i1 .A | i1 . (i2 .B | i2 .C)),

8 NESTMANN AND PIERCE

40Wednesday, October 14, 2009

Page 89: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Too Weak (II): Eventual Progress

TABLE 2Structural Laws

:-conversion P#Q if P= : Q

associativity P | (Q | R)#(P |Q) | R

commutativity P | Q#Q | P

neutrality P | 0#P

scope extrusion ( y) P | Q#( y)(P |Q) if y ! fn(Q)

scope elimination ( y) Q#Q if y ! fn(Q)

simple structural transformation. Thus, we silently identify processes or actionswhich only differ in the choice of bound names. Furthermore, due to theassociativity law for composition, we omit brackets in multiple parallel compositionand use finite parallel composition > with the usual meaning.

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. Theexpansion preorder [AH92] takes this into account by stating that one processengages in at least as many internal actions as another. We use expansions to definean up-to technique in Section 2.5 and to precisely formulate the correctness of anencoding in Section 5.5.Deviating from the standard presentation of expansion, we introduce some

auxiliary terminology that will be useful in the next subsection for capturing aspectsof divergence. It is partly inspired by the notion of progressing bisimulation in CCS[MS92b].

Definition 2.3.4. A weak simulation S is called

v progressing, if Pw!+ P $ implies that there is Q$ with Q O+Q$ such that

(P $, Q$) #S

v strict, if Pw!+ P $ implies that there is Q$ with Qw!+! Q$ such that(P $, Q$) #S

for all (P, Q) #S and for all + being { or output with bn(+)& fn(P |Q)=<.Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansionif E is a progressing simulation and E&1 is a strict simulation. Process Q expandsP, written P"Q, if there is an expansion E with (P, Q) # E.

Fact 2.3.6. t / " / r.

Divergence

Since we are going to formally reason about divergence properties of encodingsin Section 5.8, we need some basic definitions. According to standard intuitions, a

7DECODING CHOICE ENCODINGS

process P is said to be divergent, written P ! , if there is an infinite sequence of{-steps starting at P; otherwise it is called convergent, written P- .Weak (bi-)simulation is insensitive to the divergence of processes. This lack of

expressiveness arises from the definition of weak simulation, which may alwayschoose to mimic {-steps trivially. Weak bisimulation, or observation equivalence,may therefore equate two processes, exactly one of which is diverging: it simplyignores the existence of infinite {-sequences. Enhancements of bisimulation havebeen investigated that take divergence behavior explicitly into account, resulting inpreorders among bisimilar processes [Wal90].For our purposes in Section 5.8, the simpler property of preserving divergence will

suffice. A weak simulation S has this property if P ! implies Q ! for all (P, Q) #S.Intuitively, when required to weakly simulate an infinite {-sequence, S mustprogress infinitely often. Let us introduce some further notation (inspired by Priese[Pri78]) to make precise what this means.Let w!{ n denote a {-sequence of length n and w!{ + =

defw!{ n for some n>0

denote a nonempty, but arbitrarily long, finite sequence of {-steps.

Definition 2.3.7 (Eventually progressing simulation). A weak simulation S iscalled eventually progressing if, for all (P, Q) #S, there is a natural number kP #Nsuch that Pw!{ n P $ with n>kP implies that there is Q$ with Qw!{ + Q$ such that(P $, Q$) #S.

According to the definition, a simulation S is eventually progressing if, for suf-ficiently long finite {-sequences, S must eventually reproduce a {-step. In thisrespect, kP is to be understood as an upper bound for the number of {-steps startingfrom P that may be trivially simulated. Note that every progressing simulation is,by definition, eventually progressing with upper bound 0.With a progressing simulation, every infinite sequence may be simulated by sub-

sequently simulating sufficiently long finite subsequences nontrivially, i.e., such thatthey represent progress. This resembles the chunk-by-chunk idea of simulation dueto Gammelgaard [Gam91].

Lemma 2.3.8. Eventually progressing simulations preserve divergence.

Proof. Let S be a progressing simulation and (P, Q) #S. If P ! then there isPw!{ |. Since S is progressing, there is kP #N such that Pw!{ kP+1 P$ w!{ | andQw!{ + Q$ with (P$, Q$) #S. Since now P$ ! , we can repeat the procedure infinitelyoften. K

2.4. When Weak Bisimulation Is Too Strong

Every bisimulation B can be regarded as a pair (S1 , S2) of contrary simulationsS1 and S&1

2 , where S1 and S2 contain exactly the same pairs of processes, i.e.,S1=B=S2 . For some applications, this requirement is too strong. For example,consider the following P-processes,

P =def (i) ( i! | i .A | i .B | i .C)

Q =def (i1)(i2) ( i1 | i2 | i1 .A | i1 . (i2 .B | i2 .C)),

8 NESTMANN AND PIERCE

process P is said to be divergent, written P ! , if there is an infinite sequence of{-steps starting at P; otherwise it is called convergent, written P- .Weak (bi-)simulation is insensitive to the divergence of processes. This lack of

expressiveness arises from the definition of weak simulation, which may alwayschoose to mimic {-steps trivially. Weak bisimulation, or observation equivalence,may therefore equate two processes, exactly one of which is diverging: it simplyignores the existence of infinite {-sequences. Enhancements of bisimulation havebeen investigated that take divergence behavior explicitly into account, resulting inpreorders among bisimilar processes [Wal90].For our purposes in Section 5.8, the simpler property of preserving divergence will

suffice. A weak simulation S has this property if P ! implies Q ! for all (P, Q) #S.Intuitively, when required to weakly simulate an infinite {-sequence, S mustprogress infinitely often. Let us introduce some further notation (inspired by Priese[Pri78]) to make precise what this means.Let w!{ n denote a {-sequence of length n and w!{ + =

defw!{ n for some n>0

denote a nonempty, but arbitrarily long, finite sequence of {-steps.

Definition 2.3.7 (Eventually progressing simulation). A weak simulation S iscalled eventually progressing if, for all (P, Q) #S, there is a natural number kP #Nsuch that Pw!{ n P $ with n>kP implies that there is Q$ with Qw!{ + Q$ such that(P $, Q$) #S.

According to the definition, a simulation S is eventually progressing if, for suf-ficiently long finite {-sequences, S must eventually reproduce a {-step. In thisrespect, kP is to be understood as an upper bound for the number of {-steps startingfrom P that may be trivially simulated. Note that every progressing simulation is,by definition, eventually progressing with upper bound 0.With a progressing simulation, every infinite sequence may be simulated by sub-

sequently simulating sufficiently long finite subsequences nontrivially, i.e., such thatthey represent progress. This resembles the chunk-by-chunk idea of simulation dueto Gammelgaard [Gam91].

Lemma 2.3.8. Eventually progressing simulations preserve divergence.

Proof. Let S be a progressing simulation and (P, Q) #S. If P ! then there isPw!{ |. Since S is progressing, there is kP #N such that Pw!{ kP+1 P$ w!{ | andQw!{ + Q$ with (P$, Q$) #S. Since now P$ ! , we can repeat the procedure infinitelyoften. K

2.4. When Weak Bisimulation Is Too Strong

Every bisimulation B can be regarded as a pair (S₁, S₂) of contrary simulations S₁ and S₂⁻¹, where S₁ and S₂ contain exactly the same pairs of processes, i.e., S₁ = B = S₂. For some applications, this requirement is too strong. For example, consider the following P-processes,

P ≝ (νi)( ī | i.A | i.B | i.C )

Q ≝ (νi₁)(νi₂)( ī₁ | ī₂ | i₁.A | i₁.(i₂.B | i₂.C) ),

8 NESTMANN AND PIERCE


40Wednesday, October 14, 2009

Page 90: Encodings into Asynchronous Pi Calculus

Structural Laws

TABLE 2
Structural Laws

α-conversion        P ≡ Q if P =α Q
associativity       P | (Q | R) ≡ (P | Q) | R
commutativity       P | Q ≡ Q | P
neutrality          P | 0 ≡ P
scope extrusion     (νy) P | Q ≡ (νy)(P | Q) if y ∉ fn(Q)
scope elimination   (νy) Q ≡ Q if y ∉ fn(Q)

simple structural transformation. Thus, we silently identify processes or actions which only differ in the choice of bound names. Furthermore, due to the associativity law for composition, we omit brackets in multiple parallel compositions and use finite parallel composition ∏ with the usual meaning.
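As a small illustration of working modulo the laws for | (the toy term representation below is ours, and it deliberately ignores α-conversion and the two scope laws), parallel compositions can be normalized by flattening nested compositions, dropping 0, and sorting the components:

```python
# Toy process terms: ("par", [subterms]) for parallel composition,
# ("nil",) for 0, and any other value for an atomic process.
# Normalizing modulo associativity, commutativity and neutrality of |
# lets us compare terms up to those three structural laws.

def normalize(term):
    if term == ("nil",):
        return ("nil",)
    if isinstance(term, tuple) and term[0] == "par":
        parts = []
        for sub in term[1]:
            n = normalize(sub)
            if n == ("nil",):
                continue                      # neutrality: P | 0 = P
            if isinstance(n, tuple) and n[0] == "par":
                parts.extend(n[1])            # associativity: flatten
            else:
                parts.append(n)
        if not parts:
            return ("nil",)
        if len(parts) == 1:
            return parts[0]
        return ("par", tuple(sorted(parts, key=repr)))  # commutativity: sort
    return term

left  = ("par", [("par", ["P", "Q"]), ("par", ["R", ("nil",)])])
right = ("par", ["R", ("par", ["Q", "P"])])
print(normalize(left) == normalize(right))  # True
```
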

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. The expansion preorder [AH92] takes this into account by stating that one process engages in at least as many internal actions as another. We use expansions to define an up-to technique in Section 2.5 and to precisely formulate the correctness of an encoding in Section 5.5.

Deviating from the standard presentation of expansion, we introduce some auxiliary terminology that will be useful in the next subsection for capturing aspects of divergence. It is partly inspired by the notion of progressing bisimulation in CCS [MS92b].

Definition 2.3.4. A weak simulation S is called

• progressing, if P −μ→ P′ implies that there is Q′ with Q =μ⇒ Q′ such that (P′, Q′) ∈ S,

• strict, if P −μ→ P′ implies that there is Q′ with Q −μ̂→ Q′ such that (P′, Q′) ∈ S,

for all (P, Q) ∈ S and for all μ being τ or an output with bn(μ) ∩ fn(P | Q) = ∅. Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansion if E is a progressing simulation and E⁻¹ is a strict simulation. Process Q expands P, written P ≲ Q, if there is an expansion E with (P, Q) ∈ E.

Fact 2.3.6. ∼ ⊆ ≲ ⊆ ≈.
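On a finite τ-only fragment, the two conditions of Definition 2.3.4 can be checked mechanically. The sketch below (the states, the relation E, and the restriction to τ-actions are our own simplifications) verifies that E is a progressing simulation whose inverse is strict, i.e., that Q1 expands P1:

```python
lts = {"P1": {"P3"}, "Q1": {"Q2"}, "Q2": {"Q3"}}     # tau-successor sets

def weak_reach(lts, s):
    """States reachable from s by zero or more tau-steps."""
    seen, todo = {s}, [s]
    while todo:
        for t in lts.get(todo.pop(), set()):
            if t not in seen:
                seen.add(t); todo.append(t)
    return seen

def progressing(lts, R):
    # every tau-step on the left is matched by at least one step on the right
    return all(any((p2, q2) in R
                   for q1 in lts.get(q, set())       # the right side must move
                   for q2 in weak_reach(lts, q1))    # ...and may then continue
               for (p, q) in R for p2 in lts.get(p, set()))

def strict(lts, R):
    # every tau-step on the left is matched by at most one step on the right
    return all(any((p2, q2) in R for q2 in lts.get(q, set()) | {q})
               for (p, q) in R for p2 in lts.get(p, set()))

E = {("P1", "Q1"), ("P3", "Q3"), ("P3", "Q2")}
E_inv = {(q, p) for (p, q) in E}
print(progressing(lts, E) and strict(lts, E_inv))    # True: Q1 expands P1
```
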

Divergence

Since we are going to formally reason about divergence properties of encodings in Section 5.8, we need some basic definitions. According to standard intuitions, a process P is said to be divergent, written P↑, if there is an infinite sequence of τ-steps starting at P; otherwise it is called convergent, written P↓.

7 DECODING CHOICE ENCODINGS

41Wednesday, October 14, 2009

Page 91: Encodings into Asynchronous Pi Calculus

When Weak is Too Strong ...

where i, i₁, i₂ ∉ fn(A, B, C). The example was originally introduced in CCS, using choice operators [PS92]. In fact, P and Q implement two different versions of the same behavior: an internal choice among the processes A, B, and C, realized by a concurrent race of inputs for an output on a shared internal channel. The difference is that P only uses one internal channel (i), whereas Q uses two of them (i₁ and i₂) and, as a result, needs two internal steps in order to decide which of B or C is chosen in the case that A was preempted. Hence, the choice implemented by P is atomic, whereas the choice of Q is gradual.

Although intuitively P and Q might be regarded as observably equivalent, by disregarding the internal choices which are present in Q but not in P, they are not weakly bisimilar. According to the derivation trees up to ∼ (using the law (νi)(i.P | Q) ∼ Q if i ∉ fn(Q)) and with BC := (νi₂)( ī₂ | i₂.B | i₂.C ),

the state BC cannot be related to any of the states beneath P, which would both simulate BC and be simulated by BC. At best, we can find two contrary simulations S₁ and S₂⁻¹ with

S  ≝ {(P, Q), (A, A), (B, B), (C, C), ...}

S₁ ≝ S ∪ {(B, BC), (C, BC)}

S₂ ≝ S ∪ {(P, BC)}

which, unfortunately, do not coincide. The distinguishing pairs express the problem with the partially committed τ-derivative BC of Q: on the one hand, BC cannot be simulated by any τ-derivative of P, since each of them lacks the ability to reach both B and C; on the other hand, BC can itself no longer simulate P, because it has lost the ability to reach A.
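The mismatch can be replayed on a finite τ-graph abstracting both processes. The sketch below (the graph is a hand-made abstraction, not a translation of the π-terms) shows that Q's derivative BC keeps two outcomes open, while every τ-derivative of P is already committed:

```python
# Hand-made finite abstraction of the tau-behaviour of P and Q;
# "BC" is Q's partially committed derivative, "A"/"B"/"C" the outcomes.
tau = {"P": {"A", "B", "C"},     # P commits in a single atomic step
       "Q": {"A", "BC"},         # Q first rules A out or keeps it
       "BC": {"B", "C"}}         # ...and only then decides between B and C

def reachable(s):
    seen, todo = {s}, [s]
    while todo:
        for t in tau.get(todo.pop(), set()):
            if t not in seen:
                seen.add(t); todo.append(t)
    return seen

def outcomes(s):
    return reachable(s) & {"A", "B", "C"}

print(sorted(outcomes("BC")))                        # ['B', 'C']
print(all(len(outcomes(s)) == 1 for s in tau["P"]))  # True: each is committed
```
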

As an appropriate mathematical tool to handle situations like the above example, Parrow and Sjödin developed the notion of coupled simulation [PS92]: two contrary simulations are no longer required to coincide, but only to be coupled in a certain way. Several candidates have been presented for what it means to be coupled. No coupling at all would just lead to the notion of mutual simulation. A nontrivial notion of coupling was based on the property of stability, requiring the coincidence of two contrary simulations in at least the stable states. This style induces a relation which is an equivalence only for convergent processes, and it has been proven to be strictly weaker than bisimulation and strictly stronger than testing equivalence [PS92]; indeed, the two processes P and Q of the introductory example are equivalent in that sense, as formalized in Definition 2.4.1 below.

9 DECODING CHOICE ENCODINGS


42Wednesday, October 14, 2009

Page 92: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

When Weak is Too Strong ...

where i, i1 , i2 ! fn(A, B, C). The example was originally introduced in CCS, usingchoice operators [PS92]. In fact, both P and Q implement two different versionsof some behavior that performs an internal choice among the processes A, B, andC by the concurrent race of inputs for an output on some shared internal channel.The difference is that P only uses one internal channel (i), whereas Q uses two ofthem (i1 and i2) and, as a result, needs two internal steps in order to decide whichof B or C is chosen in the case that A was preempted. Hence, the choice imple-mented by P is atomic, whereas the choice of Q is gradual.Although intuitively P and Q might be regarded as observably equivalent by dis-

regarding the internal choices, which are present in Q but not in P, they are notweakly bisimilar. According to the derivation trees up to t (using the law(i)(i .P |Q)tQ, if i ! fn(Q)) and with BC :=(i2)(i2 | i2 .B | i2 .C),

the state BC cannot be related to any of the states beneath P, which would bothsimulate BC and be simulated by BC. At best, we can find two contrary simulationsS1 and S&1

2 with

S =def [(P, Q), (A, A), (B, B), (C, C), ...]

S1 =def

S_ [(B, BC), (C, BC)]

S2 =def

S_ [(P, BC)]

which, unfortunately, do not coincide. The distinguishing pairs express the problemwith the partially committed {-derivative BC of Q: on the one hand, it cannot besimulated by any {-derivative of P due to their lack of ability to reach both B andC; on the other hand, it can itself no longer simulate P, because it has lost theability to reach A.As an appropriate mathematical tool to handle situations as the above example,

Parrow and Sjo! din developed the notion of coupled simulation [PS92]: two con-trary simulations are no longer required to coincide, but only to be coupled in acertain way. Several candidates have been presented for what it means to becoupled. No coupling at all would just lead to the notion of mutual simulation. Anontrivial notion of coupling was based on the property of stability by requiring thecoincidence of two contrary simulations in at least the stable states. This styleinduces a relation which is an equivalence only for convergent processes, and it hasbeen proven to be strictly weaker than bisimulation and strictly stronger than test-ing equivalence [PS92]; indeed, the two processes P and Q of the introductoryexample are equivalent in that sense, as formalized in Definition 2.4.1 below.

9DECODING CHOICE ENCODINGS

where i, i1 , i2 ! fn(A, B, C). The example was originally introduced in CCS, usingchoice operators [PS92]. In fact, both P and Q implement two different versionsof some behavior that performs an internal choice among the processes A, B, andC by the concurrent race of inputs for an output on some shared internal channel.The difference is that P only uses one internal channel (i), whereas Q uses two ofthem (i1 and i2) and, as a result, needs two internal steps in order to decide whichof B or C is chosen in the case that A was preempted. Hence, the choice imple-mented by P is atomic, whereas the choice of Q is gradual.Although intuitively P and Q might be regarded as observably equivalent by dis-

regarding the internal choices, which are present in Q but not in P, they are notweakly bisimilar. According to the derivation trees up to t (using the law(i)(i .P |Q)tQ, if i ! fn(Q)) and with BC :=(i2)(i2 | i2 .B | i2 .C),

the state BC cannot be related to any of the states beneath P, which would bothsimulate BC and be simulated by BC. At best, we can find two contrary simulationsS1 and S&1

2 with

S =def [(P, Q), (A, A), (B, B), (C, C), ...]

S1 =def

S_ [(B, BC), (C, BC)]

S2 =def

S_ [(P, BC)]

which, unfortunately, do not coincide. The distinguishing pairs express the problemwith the partially committed {-derivative BC of Q: on the one hand, it cannot besimulated by any {-derivative of P due to their lack of ability to reach both B andC; on the other hand, it can itself no longer simulate P, because it has lost theability to reach A.As an appropriate mathematical tool to handle situations as the above example,

Parrow and Sjo! din developed the notion of coupled simulation [PS92]: two con-trary simulations are no longer required to coincide, but only to be coupled in acertain way. Several candidates have been presented for what it means to becoupled. No coupling at all would just lead to the notion of mutual simulation. Anontrivial notion of coupling was based on the property of stability by requiring thecoincidence of two contrary simulations in at least the stable states. This styleinduces a relation which is an equivalence only for convergent processes, and it hasbeen proven to be strictly weaker than bisimulation and strictly stronger than test-ing equivalence [PS92]; indeed, the two processes P and Q of the introductoryexample are equivalent in that sense, as formalized in Definition 2.4.1 below.

9DECODING CHOICE ENCODINGS

process P is said to be divergent, written P ! , if there is an infinite sequence of{-steps starting at P; otherwise it is called convergent, written P- .Weak (bi-)simulation is insensitive to the divergence of processes. This lack of

expressiveness arises from the definition of weak simulation, which may alwayschoose to mimic {-steps trivially. Weak bisimulation, or observation equivalence,may therefore equate two processes, exactly one of which is diverging: it simplyignores the existence of infinite {-sequences. Enhancements of bisimulation havebeen investigated that take divergence behavior explicitly into account, resulting inpreorders among bisimilar processes [Wal90].For our purposes in Section 5.8, the simpler property of preserving divergence will

suffice. A weak simulation S has this property if P ! implies Q ! for all (P, Q) #S.Intuitively, when required to weakly simulate an infinite {-sequence, S mustprogress infinitely often. Let us introduce some further notation (inspired by Priese[Pri78]) to make precise what this means.Let w!{ n denote a {-sequence of length n and w!{ + =

defw!{ n for some n>0

denote a nonempty, but arbitrarily long, finite sequence of {-steps.

Definition 2.3.7 (Eventually progressing simulation). A weak simulation S iscalled eventually progressing if, for all (P, Q) #S, there is a natural number kP #Nsuch that Pw!{ n P $ with n>kP implies that there is Q$ with Qw!{ + Q$ such that(P $, Q$) #S.

According to the definition, a simulation S is eventually progressing if, for suf-ficiently long finite {-sequences, S must eventually reproduce a {-step. In thisrespect, kP is to be understood as an upper bound for the number of {-steps startingfrom P that may be trivially simulated. Note that every progressing simulation is,by definition, eventually progressing with upper bound 0.With a progressing simulation, every infinite sequence may be simulated by sub-

sequently simulating sufficiently long finite subsequences nontrivially, i.e., such thatthey represent progress. This resembles the chunk-by-chunk idea of simulation dueto Gammelgaard [Gam91].

Lemma 2.3.8. Eventually progressing simulations preserve divergence.

Proof. Let S be a progressing simulation and (P, Q) #S. If P ! then there isPw!{ |. Since S is progressing, there is kP #N such that Pw!{ kP+1 P$ w!{ | andQw!{ + Q$ with (P$, Q$) #S. Since now P$ ! , we can repeat the procedure infinitelyoften. K

2.4. When Weak Bisimulation Is Too Strong

Every bisimulation B can be regarded as a pair (S1 , S2) of contrary simulationsS1 and S&1

2 , where S1 and S2 contain exactly the same pairs of processes, i.e.,S1=B=S2 . For some applications, this requirement is too strong. For example,consider the following P-processes,

P =def (i) ( i! | i .A | i .B | i .C)

Q =def (i1)(i2) ( i1 | i2 | i1 .A | i1 . (i2 .B | i2 .C)),

8 NESTMANN AND PIERCE

process P is said to be divergent, written P⇑, if there is an infinite sequence of τ-steps starting at P; otherwise it is called convergent, written P⇓.

Weak (bi-)simulation is insensitive to the divergence of processes. This lack of expressiveness arises from the definition of weak simulation, which may always choose to mimic τ-steps trivially. Weak bisimulation, or observation equivalence, may therefore equate two processes, exactly one of which is diverging: it simply ignores the existence of infinite τ-sequences. Enhancements of bisimulation have been investigated that take divergence behavior explicitly into account, resulting in preorders among bisimilar processes [Wal90].

For our purposes in Section 5.8, the simpler property of preserving divergence will suffice. A weak simulation S has this property if P⇑ implies Q⇑ for all (P, Q) ∈ S. Intuitively, when required to weakly simulate an infinite τ-sequence, S must progress infinitely often. Let us introduce some further notation (inspired by Priese [Pri78]) to make precise what this means. Let —τ→ⁿ denote a τ-sequence of length n, and let —τ→⁺ ≝ —τ→ⁿ for some n > 0 denote a nonempty, but arbitrarily long, finite sequence of τ-steps.

Definition 2.3.7 (Eventually progressing simulation). A weak simulation S is called eventually progressing if, for all (P, Q) ∈ S, there is a natural number k_P ∈ ℕ such that P —τ→ⁿ P′ with n > k_P implies that there is Q′ with Q —τ→⁺ Q′ such that (P′, Q′) ∈ S.

According to the definition, a simulation S is eventually progressing if, for sufficiently long finite τ-sequences, S must eventually reproduce a τ-step. In this respect, k_P is to be understood as an upper bound for the number of τ-steps starting from P that may be trivially simulated. Note that every progressing simulation is, by definition, eventually progressing with upper bound 0.

With an eventually progressing simulation, every infinite sequence may be simulated by subsequently simulating sufficiently long finite subsequences nontrivially, i.e., such that they represent progress. This resembles the chunk-by-chunk idea of simulation due to Gammelgaard [Gam91].

Lemma 2.3.8. Eventually progressing simulations preserve divergence.

Proof. Let S be an eventually progressing simulation and (P, Q) ∈ S. If P⇑ then there is P —τ→^ω. Since S is eventually progressing, there is k_P ∈ ℕ such that P —τ→^(k_P+1) P′ —τ→^ω and Q —τ→⁺ Q′ with (P′, Q′) ∈ S. Since now P′⇑, we can repeat the procedure infinitely often. ∎
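The insensitivity of weak simulation to divergence can be made concrete on a two-state toy model. The following Python sketch (hypothetical state names and helper functions, not code from the paper) checks the weak-simulation and progressing-simulation conditions on pure τ-graphs: a process looping on τ forever is weakly simulated by a deadlocked process, which mimics every step trivially, but no progressing simulation allows this.

```python
# Toy tau-graphs (hypothetical): P loops on tau forever; Qloop does too;
# Qstuck has no transitions at all.
TAU = {"P": {"P"}, "Qloop": {"Qloop"}, "Qstuck": set()}

def weak_reach(s):
    """States reachable from s by zero or more tau-steps (s ==> s')."""
    seen, stack = set(), [s]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(TAU[x])
    return seen

def is_weak_sim(rel):
    """p --tau--> p' must be matched by some q ==> q' with (p', q') in rel."""
    return all(any((p2, q2) in rel for q2 in weak_reach(q))
               for (p, q) in rel for p2 in TAU[p])

def is_progressing(rel):
    """As above, but q must take at least one tau-step to reach q'."""
    return all(any((p2, q2) in rel for q1 in TAU[q] for q2 in weak_reach(q1))
               for (p, q) in rel for p2 in TAU[p])

assert is_weak_sim({("P", "Qstuck")})         # legal: every step mimicked trivially
assert not is_progressing({("P", "Qstuck")})  # Qstuck cannot make progress
assert is_progressing({("P", "Qloop")})       # Qloop diverges along with P
```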

2.4. When Weak Bisimulation Is Too Strong

Every bisimulation B can be regarded as a pair (S₁, S₂) of contrary simulations S₁ and S₂⁻¹, where S₁ and S₂ contain exactly the same pairs of processes, i.e., S₁ = B = S₂. For some applications, this requirement is too strong. For example, consider the following π-processes,

P ≝ (νi) ( i̅ | i.A | i.B | i.C )

Q ≝ (νi₁)(νi₂) ( i̅₁ | i̅₂ | i₁.A | i₁.(i₂.B | i₂.C) ),

8 NESTMANN AND PIERCE
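The difference between the atomic choice of P and the gradual choice of Q can be illustrated on finite τ-graph abstractions of the two processes. The Python sketch below (hypothetical state names, not code from the paper) shows that both stabilize to the same outcomes A, B, and C, but only Q passes through a partially committed intermediate state.

```python
# Finite tau-graph abstractions (hypothetical). P decides among A, B, C in
# a single race on i; Q first races A against the pair {B, C} on i1, then
# decides between B and C on i2 via the intermediate state "BC".
P_TAU = {"P": {"A", "B", "C"}, "A": set(), "B": set(), "C": set()}
Q_TAU = {"Q": {"A", "BC"}, "BC": {"B", "C"}, "A": set(), "B": set(), "C": set()}

def committed_states(tau, start):
    """All tau-reachable states with no further tau-step (the stable outcomes)."""
    seen, stack, stable = set(), [start], set()
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if not tau[s]:
            stable.add(s)
        stack.extend(tau[s])
    return stable

# Both implementations can end up in exactly A, B, or C ...
assert committed_states(P_TAU, "P") == committed_states(Q_TAU, "Q") == {"A", "B", "C"}
# ... but only Q has an intermediate state that is already committed
# against A while still undecided between B and C.
assert Q_TAU["BC"] == {"B", "C"}
```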

42
Wednesday, October 14, 2009

Page 93: Encodings into Asynchronous Pi Calculus

When Weak is Too Strong ...

where i, i₁, i₂ ∉ fn(A, B, C). The example was originally introduced in CCS, using choice operators [PS92]. In fact, both P and Q implement two different versions of some behavior that performs an internal choice among the processes A, B, and C by the concurrent race of inputs for an output on some shared internal channel. The difference is that P only uses one internal channel (i), whereas Q uses two of them (i₁ and i₂) and, as a result, needs two internal steps in order to decide which of B or C is chosen in the case that A was preempted. Hence, the choice implemented by P is atomic, whereas the choice of Q is gradual.

Although intuitively P and Q might be regarded as observably equivalent by disregarding the internal choices, which are present in Q but not in P, they are not weakly bisimilar. According to the derivation trees up to ∼ (using the law (νi)(i.P | Q) ∼ Q, if i ∉ fn(Q)) and with BC := (νi₂)( i̅₂ | i₂.B | i₂.C ),

the state BC cannot be related to any of the states beneath P, which would both simulate BC and be simulated by BC. At best, we can find two contrary simulations S₁ and S₂⁻¹ with

S ≝ {(P, Q), (A, A), (B, B), (C, C), ...}

S₁ ≝ S ∪ {(B, BC), (C, BC)}

S₂ ≝ S ∪ {(P, BC)}

which, unfortunately, do not coincide. The distinguishing pairs express the problem with the partially committed τ-derivative BC of Q: on the one hand, it cannot be simulated by any τ-derivative of P, due to their lack of ability to reach both B and C; on the other hand, it can itself no longer simulate P, because it has lost the ability to reach A.

As an appropriate mathematical tool to handle situations like the above example, Parrow and Sjödin developed the notion of coupled simulation [PS92]: two contrary simulations are no longer required to coincide, but only to be coupled in a certain way. Several candidates have been presented for what it means to be coupled. No coupling at all would just lead to the notion of mutual simulation. A nontrivial notion of coupling was based on the property of stability, by requiring the coincidence of two contrary simulations in at least the stable states. This style induces a relation which is an equivalence only for convergent processes, and it has been proven to be strictly weaker than bisimulation and strictly stronger than testing equivalence [PS92]; indeed, the two processes P and Q of the introductory example are equivalent in that sense, as formalized in Definition 2.4.1 below.

9 DECODING CHOICE ENCODINGS
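The failure can be checked mechanically on the finite τ-graph abstraction of the example. The sketch below (hypothetical helper names, not code from the paper) confirms the two halves of the argument: BC weakly reaches exactly B and C, while every τ-derivative of P either cannot reach both B and C, or can still reach A.

```python
# Tau-graph abstractions of the atomic/gradual choice example (hypothetical).
P_TAU = {"P": {"A", "B", "C"}, "A": set(), "B": set(), "C": set()}
Q_TAU = {"Q": {"A", "BC"}, "BC": {"B", "C"}, "A": set(), "B": set(), "C": set()}

def tau_reach(tau, s):
    """All states weakly reachable from s by tau-steps (including s itself)."""
    seen, stack = set(), [s]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(tau[x])
    return seen

# BC is committed against A but still undecided between B and C:
assert tau_reach(Q_TAU, "BC") == {"BC", "B", "C"}

# No tau-derivative of P matches it: each derivative either cannot reach
# both B and C, or it can still reach A.
for s in tau_reach(P_TAU, "P"):
    r = tau_reach(P_TAU, s)
    assert not ({"B", "C"} <= r and "A" not in r)
```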


42
Wednesday, October 14, 2009

Page 94: Encodings into Asynchronous Pi Calculus

When Weak is Too Strong ...


P and Q are not weakly bisimilar. They just simulate each other.

However, we can do much better ...
42
Wednesday, October 14, 2009

Page 95: Encodings into Asynchronous Pi Calculus

Coupled Simulation (I) [Parrow, Sjödin 1994]

43
Wednesday, October 14, 2009

Page 96: Encodings into Asynchronous Pi Calculus

In this paper, we use a generalization for divergent processes, as suggested by van Glabbeek [Gla93, PS94], where coupling requires the ability of a simulating process to evolve into a simulated process by internal action. We recapitulate the formal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S₁, S₂), where S₁ and S₂⁻¹ are weak simulations. A coupled simulation is a mutual simulation (S₁, S₂) satisfying

• if (P, Q) ∈ S₁, then there is some Q′ such that Q ⟹ Q′ and (P, Q′) ∈ S₂;

• if (P, Q′) ∈ S₂, then there is some P′ such that P ⟹ P′ and (P′, Q′) ∈ S₁.

Processes P and Q are coupled simulation equivalent (or coupled similar), written P ≍ Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S₁, S₂) may be depicted as an internally out-of-step bisimulation. [Diagram not reproduced in this transcript.]

Of two processes contained in one component relation of some coupled simulation, the simulated (more committed) process is always a bit ahead of its simulating (less committed) counterpart. Intuitively, "Q coupled simulates P" means that "Q is at most as committed as P" with respect to internal choices, and that Q may internally evolve to a state Q′ where it is at least as committed as P, i.e., where P coupled simulates Q′.

Fact 2.4.2. Let (S₁, S₂) be a coupled simulation. Then (S₂⁻¹, S₁⁻¹) is a coupled simulation.

Observe that the pair (≈, ≈) is a coupled simulation by trivial coupling sequences, as motivated at the beginning of the section. Furthermore, the motivating example witnesses that coupled simulation equivalence is strictly coarser than weak bisimilarity.

Fact 2.4.3. ≈ ⊊ ≍.

On processes without infinite τ-sequences, coupled simulation is strictly finer than testing equivalence, so it represents a reasonable (and coinductively defined) candidate in the lattice of process equivalences [Gla93]. In general, it is compatible with all operators of the calculus.

Proposition 2.4.4. ≍ is a congruence.

Proof. Reflexivity is immediate by definition. For symmetry, let P₁ ≍ P₂ by (S₁, S₂) with (P₁, P₂) ∈ S₁ ∩ S₂. This is equivalent to (P₂, P₁) ∈ S₂⁻¹ ∩ S₁⁻¹, and, by Fact 2.4.2, we have P₂ ≍ P₁.

10 NESTMANN AND PIERCE
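The coupling conditions of Definition 2.4.1 can be verified mechanically on the atomic/gradual choice example. The Python sketch below (hypothetical finite abstractions, not code from the paper) checks only the two coupling clauses, not the simulation property itself; it confirms that the relations S₁ and S₂ from the example couple without coinciding.

```python
# Tau-graph abstractions of the atomic/gradual choice example (hypothetical).
P_TAU = {"P": {"A", "B", "C"}, "A": set(), "B": set(), "C": set()}
Q_TAU = {"Q": {"A", "BC"}, "BC": {"B", "C"}, "A": set(), "B": set(), "C": set()}

def weak(tau, s):
    """Reflexive-transitive closure of tau-steps from s (s ==> s')."""
    seen, stack = set(), [s]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(tau[x])
    return seen

S  = {("P", "Q"), ("A", "A"), ("B", "B"), ("C", "C")}
S1 = S | {("B", "BC"), ("C", "BC")}
S2 = S | {("P", "BC")}

def coupled(s1, s2):
    # (p, q) in S1 requires some q' with q ==> q' and (p, q') in S2 ...
    out = all(any((p, q2) in s2 for q2 in weak(Q_TAU, q)) for p, q in s1)
    # ... and (p, q') in S2 requires some p' with p ==> p' and (p', q') in S1.
    back = all(any((p2, q2) in s1 for p2 in weak(P_TAU, p)) for p, q2 in s2)
    return out and back

assert coupled(S1, S2)   # the two relations couple ...
assert S1 != S2          # ... yet they do not coincide
```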


TABLE 2
Structural Laws

α-conversion        P ≡ Q  if P =α Q
associativity       P | (Q | R) ≡ (P | Q) | R
commutativity       P | Q ≡ Q | P
neutrality          P | 0 ≡ P
scope extrusion     (νy)P | Q ≡ (νy)(P | Q)  if y ∉ fn(Q)
scope elimination   (νy)Q ≡ Q  if y ∉ fn(Q)

simple structural transformation. Thus, we silently identify processes or actions which only differ in the choice of bound names. Furthermore, due to the associativity law for composition, we omit brackets in multiple parallel composition and use finite parallel composition ∏ with the usual meaning.
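The identification of processes up to the laws of Table 2 can be sketched by a small normal-form computation. The following Python fragment (a hypothetical term representation, not code from the paper; scope laws are out of its scope) flattens parallel compositions, drops inert 0 components, and sorts, so that terms related by associativity, commutativity, and neutrality get the same normal form.

```python
# Processes are modeled as nested tuples ("par", p, q), the inert process
# as "0", and anything else as an opaque atom (hypothetical representation).

def flatten(p):
    """Collect the atoms of a parallel composition, dropping inert 0s."""
    if p == "0":
        return []
    if isinstance(p, tuple) and p[0] == "par":
        return flatten(p[1]) + flatten(p[2])
    return [p]

def normal_form(p):
    """Canonical tuple of parallel components, modulo the three laws."""
    return tuple(sorted(flatten(p)))

lhs = ("par", "a.P", ("par", ("par", "b.Q", "0"), "c.R"))  # a.P | ((b.Q | 0) | c.R)
rhs = ("par", ("par", "c.R", "a.P"), "b.Q")                # (c.R | a.P) | b.Q
assert normal_form(lhs) == normal_form(rhs)                # structurally congruent
```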

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. The expansion preorder [AH92] takes this into account by stating that one process engages in at least as many internal actions as another. We use expansions to define an up-to technique in Section 2.5 and to precisely formulate the correctness of an encoding in Section 5.5.

Deviating from the standard presentation of expansion, we introduce some auxiliary terminology that will be useful in the next subsection for capturing aspects of divergence. It is partly inspired by the notion of progressing bisimulation in CCS [MS92b].

Definition 2.3.4. A weak simulation S is called

• progressing, if P —μ→ P′ implies that there is Q′ with Q ⟹μ Q′ such that (P′, Q′) ∈ S,

• strict, if P —μ→ P′ implies that there is Q′ with Q —μ̂→ Q′ such that (P′, Q′) ∈ S,

for all (P, Q) ∈ S and for all μ being τ or output with bn(μ) ∩ fn(P | Q) = ∅. Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansion if E is a progressing simulation and E⁻¹ is a strict simulation. Process Q expands P, written P ≲ Q, if there is an expansion E with (P, Q) ∈ E.

Fact 2.3.6. ∼ ⊊ ≲ ⊊ ≈.

Divergence

Since we are going to formally reason about divergence properties of encodings in Section 5.8, we need some basic definitions. According to standard intuitions, a

7 DECODING CHOICE ENCODINGS

Coupled Simulation (I) [Parrow, Sjödin 1994]

43
Wednesday, October 14, 2009

Page 97: Encodings into Asynchronous Pi Calculus

In this paper, we use a generalization for divergent processes, as suggested by vanGlabbeek [Gla93, PS94], where coupling requires the ability of a simulatingprocess to evolve into a simulated process by internal action. We recapitulate theformal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S1 , S2),where S1 and S&1

2 are weak simulations. A coupled simulation is a mutual simula-tion (S1 , S2) satisfying

v if (P, Q) #S1 , then there is some Q$ such that QOQ$ and (P, Q$) #S2 ;

v if (P, Q$) #S2 , then there is some P$ such that POP$ and (P$, Q$) #S1 .

Processes P and Q are coupled simulation equivalent (or coupled similar), writtenP$Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S1 , S2)may be depicted as an internally out-of-step bisimulation by:

Of two processes contained in one component relation of some coupled simulation,the simulated (more committed) process is always a bit ahead of its simulating (lesscommitted) counterpart. Intuitively, ``Q coupled simulates P'' means that ``Q is atmost as committed as P '' with respect to internal choices and that Q may internallyevolve to a state Q$ where it is at least as committed as P, i.e., where P coupledsimulates Q$.

Fact 2.4.2. Let (S1 , S2) be a coupled simulation. Then (S&12 , S&1

1 ) is acoupled simulation.

Observe that the pair (r, r ) is a coupled simulation by trivial couplingsequences, as motivated at the beginning of the section. Furthermore, the motivat-ing example witnesses that coupled simulation equivalence is strictly coarser thanweak bisimilarity.

Fact 2.4.3. r / $ .

On processes without infinite τ-sequences, coupled simulation is strictly finer than testing equivalence, so it represents a reasonable (and coinductively defined) candidate in the lattice of process equivalences [Gla93]. In general, it is compatible with all operators of the calculus P.

Proposition 2.4.4. ≍ is a congruence on P.

Proof. Reflexivity is immediate by definition. For symmetry, let P1 ≍ P2 by (S1, S2) with (P1, P2) ∈ S1 ∩ S2. This is equivalent to (P2, P1) ∈ S2⁻¹ ∩ S1⁻¹, and, by Fact 2.4.2, we have P2 ≍ P1.

10 NESTMANN AND PIERCE


TABLE 2
Structural Laws

α-conversion:       P ≡ Q                      if P =α Q
associativity:      P | (Q | R) ≡ (P | Q) | R
commutativity:      P | Q ≡ Q | P
neutrality:         P | 0 ≡ P
scope extrusion:    (νy) P | Q ≡ (νy)(P | Q)   if y ∉ fn(Q)
scope elimination:  (νy) Q ≡ Q                 if y ∉ fn(Q)

…simple structural transformation. Thus, we silently identify processes or actions which only differ in the choice of bound names. Furthermore, due to the associativity law for composition, we omit brackets in multiple parallel composition and use finite parallel composition ∏ with the usual meaning.

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. The expansion preorder [AH92] takes this into account by stating that one process engages in at least as many internal actions as another. We use expansions to define an up-to technique in Section 2.5 and to precisely formulate the correctness of an encoding in Section 5.5.

Deviating from the standard presentation of expansion, we introduce some auxiliary terminology that will be useful in the next subsection for capturing aspects of divergence. It is partly inspired by the notion of progressing bisimulation in CCS [MS92b].

Definition 2.3.4. A weak simulation S is called

• progressing, if P −μ→ P′ implies that there is Q′ with Q =μ⇒ Q′ (taking at least one transition) such that (P′, Q′) ∈ S;

• strict, if P −μ→ P′ implies that there is Q′ with Q −μ̂→ Q′ (taking at most one transition) such that (P′, Q′) ∈ S;

for all (P, Q) ∈ S and for all μ being τ or output with bn(μ) ∩ fn(P | Q) = ∅. Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansion if E is a progressing simulation and E⁻¹ is a strict simulation. Process Q expands P, written P ≲ Q, if there is an expansion E with (P, Q) ∈ E.

Fact 2.3.6. ∼ ⊊ ≲ ⊊ ≈.
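As a sketch of Definition 2.3.5 (again our illustration, with hypothetical state names and an LTS encoding of our choosing), the following checks on a finite LTS that a.0 is expanded by τ.a.0, but not vice versa: the progressing clause forces the expanding side to answer every move with at least one transition, while the strict clause caps the other side's answer at one transition.

```python
from itertools import product

# Finite LTS: 'P' = a.0 and 'Q' = tau.a.0; 'tau' is the internal action.
TRANS = {'Q': [('tau', 'P')], 'P': [('a', '0')], '0': []}
STATES = list(TRANS)

def tau_star(s):
    """States reachable from s by zero or more tau steps."""
    seen, todo = {s}, [s]
    while todo:
        t = todo.pop()
        for lbl, u in TRANS[t]:
            if lbl == 'tau' and u not in seen:
                seen.add(u)
                todo.append(u)
    return seen

def progressing_answer(q, lbl):
    """Targets of a weak lbl-transition from q taking at least one step."""
    out = set()
    for t in tau_star(q):
        for l, u in TRANS[t]:
            if l == lbl:
                out |= tau_star(u)   # tau* lbl tau*: always >= 1 transition
    return out

def strict_answer(q, lbl):
    """Targets reachable by at most one step: lbl itself, or staying put for tau."""
    out = {u for l, u in TRANS[q] if l == lbl}
    if lbl == 'tau':
        out.add(q)                   # the empty answer is allowed for tau
    return out

def expansion():
    """Largest expansion relation E ('(P, Q) in E' reads 'Q expands P')."""
    E = set(product(STATES, STATES))
    while True:
        new = {(p, q) for p, q in E
               # E is a progressing simulation: q answers p with >= 1 step
               if all(any((p2, q2) in E for q2 in progressing_answer(q, lbl))
                      for lbl, p2 in TRANS[p])
               # E^-1 is a strict simulation: p answers q with <= 1 step
               and all(any((p2, q2) in E for p2 in strict_answer(p, lbl))
                       for lbl, q2 in TRANS[q])}
        if new == E:
            return E
        E = new

E = expansion()
print(('P', 'Q') in E)   # -> True:  a.0 is expanded by tau.a.0
print(('Q', 'P') in E)   # -> False: a.0 cannot answer the tau progressively
```

Since progressing plus strict forces an exactly-one-step answer, restricting both clauses to the same relation recovers strong simulation, matching the note after Definition 2.3.4.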

Divergence

Since we are going to formally reason about divergence properties of encodings in Section 5.8, we need some basic definitions. According to standard intuitions, a…

7 DECODING CHOICE ENCODINGS


[Parrow, Sjödin 1994]

43Wednesday, October 14, 2009

Page 98: Encodings into Asynchronous Pi Calculus


43Wednesday, October 14, 2009

Page 99: Encodings into Asynchronous Pi Calculus

In this paper, we use a generalization for divergent processes, as suggested by vanGlabbeek [Gla93, PS94], where coupling requires the ability of a simulatingprocess to evolve into a simulated process by internal action. We recapitulate theformal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S1 , S2),where S1 and S&1

2 are weak simulations. A coupled simulation is a mutual simula-tion (S1 , S2) satisfying

v if (P, Q) #S1 , then there is some Q$ such that QOQ$ and (P, Q$) #S2 ;

v if (P, Q$) #S2 , then there is some P$ such that POP$ and (P$, Q$) #S1 .

Processes P and Q are coupled simulation equivalent (or coupled similar), writtenP$Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S1 , S2)may be depicted as an internally out-of-step bisimulation by:

Of two processes contained in one component relation of some coupled simulation,the simulated (more committed) process is always a bit ahead of its simulating (lesscommitted) counterpart. Intuitively, ``Q coupled simulates P'' means that ``Q is atmost as committed as P '' with respect to internal choices and that Q may internallyevolve to a state Q$ where it is at least as committed as P, i.e., where P coupledsimulates Q$.

Fact 2.4.2. Let (S1 , S2) be a coupled simulation. Then (S&12 , S&1

1 ) is acoupled simulation.

Observe that the pair (r, r ) is a coupled simulation by trivial couplingsequences, as motivated at the beginning of the section. Furthermore, the motivat-ing example witnesses that coupled simulation equivalence is strictly coarser thanweak bisimilarity.

Fact 2.4.3. r / $ .

On processes without infinite {-sequences, coupled simulation is strictly finerthan testing equivalence, so it represents a reasonable (and coinductively defined)candidate in the lattice of process equivalences [Gla93]. In general, it is compatiblewith all operators of P.

Proposition 2.4.4. $ is a congruence on P.

Proof. Reflexivity is immediate by definition.For symmetry, let P1 $ P2 by (S1 , S2) with (P1 , P2) #S1 &S2 . This is equivalent

to (P2 , P1) #S&12 &S&1

1 , and, by Fact .4.2, we have P2 $ P1 .

10 NESTMANN AND PIERCE

In this paper, we use a generalization for divergent processes, as suggested by vanGlabbeek [Gla93, PS94], where coupling requires the ability of a simulatingprocess to evolve into a simulated process by internal action. We recapitulate theformal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S1 , S2),where S1 and S&1

2 are weak simulations. A coupled simulation is a mutual simula-tion (S1 , S2) satisfying

v if (P, Q) #S1 , then there is some Q$ such that QOQ$ and (P, Q$) #S2 ;

v if (P, Q$) #S2 , then there is some P$ such that POP$ and (P$, Q$) #S1 .

Processes P and Q are coupled simulation equivalent (or coupled similar), writtenP$Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S1 , S2)may be depicted as an internally out-of-step bisimulation by:

Of two processes contained in one component relation of some coupled simulation,the simulated (more committed) process is always a bit ahead of its simulating (lesscommitted) counterpart. Intuitively, ``Q coupled simulates P'' means that ``Q is atmost as committed as P '' with respect to internal choices and that Q may internallyevolve to a state Q$ where it is at least as committed as P, i.e., where P coupledsimulates Q$.

Fact 2.4.2. Let (S1 , S2) be a coupled simulation. Then (S&12 , S&1

1 ) is acoupled simulation.

Observe that the pair (r, r ) is a coupled simulation by trivial couplingsequences, as motivated at the beginning of the section. Furthermore, the motivat-ing example witnesses that coupled simulation equivalence is strictly coarser thanweak bisimilarity.

Fact 2.4.3. r / $ .

On processes without infinite {-sequences, coupled simulation is strictly finerthan testing equivalence, so it represents a reasonable (and coinductively defined)candidate in the lattice of process equivalences [Gla93]. In general, it is compatiblewith all operators of P.

Proposition 2.4.4. $ is a congruence on P.

Proof. Reflexivity is immediate by definition.For symmetry, let P1 $ P2 by (S1 , S2) with (P1 , P2) #S1 &S2 . This is equivalent

to (P2 , P1) #S&12 &S&1

1 , and, by Fact .4.2, we have P2 $ P1 .

10 NESTMANN AND PIERCE

TABLE 2Structural Laws

:-conversion P#Q if P= : Q

associativity P | (Q | R)#(P |Q) | R

commutativity P | Q#Q | P

neutrality P | 0#P

scope extrusion ( y) P | Q#( y)(P |Q) if y ! fn(Q)

scope elimination ( y) Q#Q if y ! fn(Q)

simple structural transformation. Thus, we silently identify processes or actionswhich only differ in the choice of bound names. Furthermore, due to theassociativity law for composition, we omit brackets in multiple parallel compositionand use finite parallel composition > with the usual meaning.

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. Theexpansion preorder [AH92] takes this into account by stating that one processengages in at least as many internal actions as another. We use expansions to definean up-to technique in Section 2.5 and to precisely formulate the correctness of anencoding in Section 5.5.Deviating from the standard presentation of expansion, we introduce some

auxiliary terminology that will be useful in the next subsection for capturing aspectsof divergence. It is partly inspired by the notion of progressing bisimulation in CCS[MS92b].

Definition 2.3.4. A weak simulation S is called

v progressing, if Pw!+ P $ implies that there is Q$ with Q O+Q$ such that

(P $, Q$) #S

v strict, if Pw!+ P $ implies that there is Q$ with Qw!+! Q$ such that(P $, Q$) #S

for all (P, Q) #S and for all + being { or output with bn(+)& fn(P |Q)=<.Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansionif E is a progressing simulation and E&1 is a strict simulation. Process Q expandsP, written P"Q, if there is an expansion E with (P, Q) # E.

Fact 2.3.6. t / " / r.

Divergence

Since we are going to formally reason about divergence properties of encodingsin Section 5.8, we need some basic definitions. According to standard intuitions, a

7DECODING CHOICE ENCODINGS

Coupled Simulation (I)

In this paper, we use a generalization for divergent processes, as suggested by van Glabbeek [Gla93, PS94], where coupling requires the ability of a simulating process to evolve into a simulated process by internal action. We recapitulate the formal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S₁, S₂), where S₁ and S₂⁻¹ are weak simulations. A coupled simulation is a mutual simulation (S₁, S₂) satisfying

• if (P, Q) ∈ S₁, then there is some Q′ such that Q ⇒ Q′ and (P, Q′) ∈ S₂;

• if (P, Q′) ∈ S₂, then there is some P′ such that P ⇒ P′ and (P′, Q′) ∈ S₁.

Processes P and Q are coupled simulation equivalent (or coupled similar), written P ≍ Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S₁, S₂) may be depicted as an internally out-of-step bisimulation. (Diagram omitted.)

Of two processes contained in one component relation of some coupled simulation, the simulated (more committed) process is always a bit ahead of its simulating (less committed) counterpart. Intuitively, "Q coupled simulates P" means that "Q is at most as committed as P" with respect to internal choices and that Q may internally evolve to a state Q′ where it is at least as committed as P, i.e., where P coupled simulates Q′.
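The two coupling clauses of Definition 2.4.1 are directly checkable on finite systems. The sketch below is our own illustration (the names TAU, lts, S1, S2 are ours): it verifies a coupled simulation for a gradual-commitment pair, P = τ.a + τ.b + τ.c, which commits in one internal step, against Q = τ.a + τ.(τ.b + τ.c), which commits gradually; the two are coupled similar but not weakly bisimilar.

```python
TAU = "tau"

# P commits to a branch in one internal step;
# Q = tau.a + tau.(tau.b + tau.c) commits gradually via state R.
lts = {
    "P": {(TAU, "a0"), (TAU, "b0"), (TAU, "c0")},
    "Q": {(TAU, "a0"), (TAU, "R")},
    "R": {(TAU, "b0"), (TAU, "c0")},
    "a0": {("a", "nil")}, "b0": {("b", "nil")},
    "c0": {("c", "nil")}, "nil": set(),
}

def tau_closure(states):
    seen, stack = set(states), list(states)
    while stack:
        s = stack.pop()
        for (a, t) in lts[s]:
            if a == TAU and t not in seen:
                seen.add(t); stack.append(t)
    return seen

def weak(s, act):                 # states reachable by  =act=>
    pre = tau_closure({s})
    if act == TAU:
        return pre
    return tau_closure({t for p in pre for (a, t) in lts[p] if a == act})

def is_weak_sim(R):               # right component simulates the left one
    return all(any((p1, q1) in R for q1 in weak(q, a))
               for (p, q) in R for (a, p1) in lts[p])

def is_coupled_sim(S1, S2):
    # S1 and S2-inverse must be weak simulations ...
    mutual = is_weak_sim(S1) and is_weak_sim({(q, p) for (p, q) in S2})
    # ... and each component must be "coupled" to the other via tau-steps.
    out1 = all(any((p, q1) in S2 for q1 in tau_closure({q})) for (p, q) in S1)
    out2 = all(any((p1, q) in S1 for p1 in tau_closure({p})) for (p, q) in S2)
    return mutual and out1 and out2

ident = {(s, s) for s in ("a0", "b0", "c0", "nil")}
S1 = {("P", "Q"), ("b0", "R")} | ident      # Q (less committed) simulates P
S2 = {("P", "Q"), ("P", "R")} | ident       # P simulates Q
print(is_coupled_sim(S1, S2))               # True
```

Since ("P", "Q") lies in both components, the check witnesses P ≍ Q even though no weak bisimulation relates the half-committed state R to any derivative of P.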

Fact 2.4.2. Let (S₁, S₂) be a coupled simulation. Then (S₂⁻¹, S₁⁻¹) is a coupled simulation.

Observe that the pair (≈, ≈) is a coupled simulation by trivial coupling sequences, as motivated at the beginning of the section. Furthermore, the motivating example witnesses that coupled simulation equivalence is strictly coarser than weak bisimilarity.

Fact 2.4.3. ≈ ⊊ ≍.

On processes without infinite τ-sequences, coupled simulation is strictly finer than testing equivalence, so it represents a reasonable (and coinductively defined) candidate in the lattice of process equivalences [Gla93]. In general, it is compatible with all operators of P.

Proposition 2.4.4. ≍ is a congruence on P.

Proof. Reflexivity is immediate by definition. For symmetry, let P₁ ≍ P₂ by (S₁, S₂) with (P₁, P₂) ∈ S₁ ∩ S₂. This is equivalent to (P₂, P₁) ∈ S₂⁻¹ ∩ S₁⁻¹, and, by Fact 2.4.2, we have P₂ ≍ P₁.

10 NESTMANN AND PIERCE


TABLE 2
Structural Laws

α-conversion         P ≡ Q                       if P =α Q
associativity        P | (Q | R) ≡ (P | Q) | R
commutativity        P | Q ≡ Q | P
neutrality           P | 0 ≡ P
scope extrusion      (νy) P | Q ≡ (νy)(P | Q)    if y ∉ fn(Q)
scope elimination    (νy) Q ≡ Q                  if y ∉ fn(Q)

simple structural transformation. Thus, we silently identify processes or actions which only differ in the choice of bound names. Furthermore, due to the associativity law for composition, we omit brackets in multiple parallel composition and use finite parallel composition ∏ with the usual meaning.
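The directed laws of Table 2 (associativity read as flattening, neutrality, and scope elimination) suggest a simple normalisation pass over process terms. The sketch below uses a hypothetical tuple representation of our own, not the paper's syntax; α-conversion and commutativity are not applied, since they are not directed rewrites.

```python
# Terms: ("nil",), ("act", subject, object, cont), ("par", [terms]),
# ("nu", name, body) -- an illustrative encoding, not the paper's.

def free_names(p):
    tag = p[0]
    if tag == "nil":
        return set()
    if tag == "act":                      # prefix x<z>.P or x(z).P, simplified
        return {p[1], p[2]} | free_names(p[3])
    if tag == "par":
        return set().union(*[free_names(q) for q in p[1]])
    if tag == "nu":
        return free_names(p[2]) - {p[1]}

def normalise(p):
    tag = p[0]
    if tag == "act":
        return ("act", p[1], p[2], normalise(p[3]))
    if tag == "par":
        parts = []
        for q in map(normalise, p[1]):
            if q[0] == "par":
                parts.extend(q[1])        # associativity: flatten nesting
            elif q[0] != "nil":
                parts.append(q)           # neutrality: drop P | 0
        if not parts:
            return ("nil",)
        return parts[0] if len(parts) == 1 else ("par", parts)
    if tag == "nu":
        body = normalise(p[2])
        if p[1] not in free_names(body):
            return body                   # scope elimination
        return ("nu", p[1], body)
    return p                              # nil is already normal

# (nu z)( x<z>.0 | 0 )  |  ( 0 | 0 )   normalises to   (nu z) x<z>.0
t = ("par", [("nu", "z", ("par", [("act", "x", "z", ("nil",)), ("nil",)])),
             ("par", [("nil",), ("nil",)])])
print(normalise(t))   # → ('nu', 'z', ('act', 'x', 'z', ('nil',)))
```

Because each law preserves structural congruence, the result is ≡-equivalent to the input; such a pass is how the "silent identification" above is typically realised in tools.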

Efficiency

Often, weakly bisimilar processes differ only in the number of internal steps. Theexpansion preorder [AH92] takes this into account by stating that one processengages in at least as many internal actions as another. We use expansions to definean up-to technique in Section 2.5 and to precisely formulate the correctness of anencoding in Section 5.5.Deviating from the standard presentation of expansion, we introduce some

auxiliary terminology that will be useful in the next subsection for capturing aspectsof divergence. It is partly inspired by the notion of progressing bisimulation in CCS[MS92b].

Definition 2.3.4. A weak simulation S is called

v progressing, if Pw!+ P $ implies that there is Q$ with Q O+Q$ such that

(P $, Q$) #S

v strict, if Pw!+ P $ implies that there is Q$ with Qw!+! Q$ such that(P $, Q$) #S

for all (P, Q) #S and for all + being { or output with bn(+)& fn(P |Q)=<.Note that a weak simulation is strong if it is both progressing and strict.

Definition 2.3.5 (Expansion). A binary relation E on processes is an expansionif E is a progressing simulation and E&1 is a strict simulation. Process Q expandsP, written P"Q, if there is an expansion E with (P, Q) # E.

Fact 2.3.6. t / " / r.

Divergence

Since we are going to formally reason about divergence properties of encodingsin Section 5.8, we need some basic definitions. According to standard intuitions, a

7DECODING CHOICE ENCODINGS

Coupled Simulation (I)In this paper, we use a generalization for divergent processes, as suggested by van

Glabbeek [Gla93, PS94], where coupling requires the ability of a simulatingprocess to evolve into a simulated process by internal action. We recapitulate theformal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S1 , S2),where S1 and S&1

2 are weak simulations. A coupled simulation is a mutual simula-tion (S1 , S2) satisfying

v if (P, Q) #S1 , then there is some Q$ such that QOQ$ and (P, Q$) #S2 ;

v if (P, Q$) #S2 , then there is some P$ such that POP$ and (P$, Q$) #S1 .

Processes P and Q are coupled simulation equivalent (or coupled similar), writtenP$Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S1 , S2)may be depicted as an internally out-of-step bisimulation by:

Of two processes contained in one component relation of some coupled simulation,the simulated (more committed) process is always a bit ahead of its simulating (lesscommitted) counterpart. Intuitively, ``Q coupled simulates P'' means that ``Q is atmost as committed as P '' with respect to internal choices and that Q may internallyevolve to a state Q$ where it is at least as committed as P, i.e., where P coupledsimulates Q$.

Fact 2.4.2. Let (S1 , S2) be a coupled simulation. Then (S&12 , S&1

1 ) is acoupled simulation.

Observe that the pair (r, r ) is a coupled simulation by trivial couplingsequences, as motivated at the beginning of the section. Furthermore, the motivat-ing example witnesses that coupled simulation equivalence is strictly coarser thanweak bisimilarity.

Fact 2.4.3. r / $ .

On processes without infinite {-sequences, coupled simulation is strictly finerthan testing equivalence, so it represents a reasonable (and coinductively defined)candidate in the lattice of process equivalences [Gla93]. In general, it is compatiblewith all operators of P.

Proposition 2.4.4. $ is a congruence on P.

Proof. Reflexivity is immediate by definition.For symmetry, let P1 $ P2 by (S1 , S2) with (P1 , P2) #S1 &S2 . This is equivalent

to (P2 , P1) #S&12 &S&1

1 , and, by Fact .4.2, we have P2 $ P1 .

10 NESTMANN AND PIERCE

In this paper, we use a generalization for divergent processes, as suggested by vanGlabbeek [Gla93, PS94], where coupling requires the ability of a simulatingprocess to evolve into a simulated process by internal action. We recapitulate theformal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S1 , S2),where S1 and S&1

2 are weak simulations. A coupled simulation is a mutual simula-tion (S1 , S2) satisfying

v if (P, Q) #S1 , then there is some Q$ such that QOQ$ and (P, Q$) #S2 ;

v if (P, Q$) #S2 , then there is some P$ such that POP$ and (P$, Q$) #S1 .

Processes P and Q are coupled simulation equivalent (or coupled similar), writtenP$Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S1 , S2)may be depicted as an internally out-of-step bisimulation by:

Of two processes contained in one component relation of some coupled simulation,the simulated (more committed) process is always a bit ahead of its simulating (lesscommitted) counterpart. Intuitively, ``Q coupled simulates P'' means that ``Q is atmost as committed as P '' with respect to internal choices and that Q may internallyevolve to a state Q$ where it is at least as committed as P, i.e., where P coupledsimulates Q$.

Fact 2.4.2. Let (S1 , S2) be a coupled simulation. Then (S&12 , S&1

1 ) is acoupled simulation.

Observe that the pair (r, r ) is a coupled simulation by trivial couplingsequences, as motivated at the beginning of the section. Furthermore, the motivat-ing example witnesses that coupled simulation equivalence is strictly coarser thanweak bisimilarity.

Fact 2.4.3. r / $ .

On processes without infinite {-sequences, coupled simulation is strictly finerthan testing equivalence, so it represents a reasonable (and coinductively defined)candidate in the lattice of process equivalences [Gla93]. In general, it is compatiblewith all operators of P.

Proposition 2.4.4. $ is a congruence on P.

Proof. Reflexivity is immediate by definition.For symmetry, let P1 $ P2 by (S1 , S2) with (P1 , P2) #S1 &S2 . This is equivalent

to (P2 , P1) #S&12 &S&1

1 , and, by Fact .4.2, we have P2 $ P1 .

10 NESTMANN AND PIERCE

[Parrow, Sjödin 1994]In this paper, we use a generalization for divergent processes, as suggested by van

Glabbeek [Gla93, PS94], where coupling requires the ability of a simulatingprocess to evolve into a simulated process by internal action. We recapitulate theformal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S1 , S2),where S1 and S&1

2 are weak simulations. A coupled simulation is a mutual simula-tion (S1 , S2) satisfying

v if (P, Q) #S1 , then there is some Q$ such that QOQ$ and (P, Q$) #S2 ;

v if (P, Q$) #S2 , then there is some P$ such that POP$ and (P$, Q$) #S1 .

Processes P and Q are coupled simulation equivalent (or coupled similar), writtenP$Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S1 , S2)may be depicted as an internally out-of-step bisimulation by:

Of two processes contained in one component relation of some coupled simulation,the simulated (more committed) process is always a bit ahead of its simulating (lesscommitted) counterpart. Intuitively, ``Q coupled simulates P'' means that ``Q is atmost as committed as P '' with respect to internal choices and that Q may internallyevolve to a state Q$ where it is at least as committed as P, i.e., where P coupledsimulates Q$.

Fact 2.4.2. Let (S1 , S2) be a coupled simulation. Then (S&12 , S&1

1 ) is acoupled simulation.

Observe that the pair (r, r ) is a coupled simulation by trivial couplingsequences, as motivated at the beginning of the section. Furthermore, the motivat-ing example witnesses that coupled simulation equivalence is strictly coarser thanweak bisimilarity.

Fact 2.4.3. r / $ .

On processes without infinite {-sequences, coupled simulation is strictly finerthan testing equivalence, so it represents a reasonable (and coinductively defined)candidate in the lattice of process equivalences [Gla93]. In general, it is compatiblewith all operators of P.

Proposition 2.4.4. $ is a congruence on P.

Proof. Reflexivity is immediate by definition.For symmetry, let P1 $ P2 by (S1 , S2) with (P1 , P2) #S1 &S2 . This is equivalent

to (P2 , P1) #S&12 &S&1

1 , and, by Fact .4.2, we have P2 $ P1 .

10 NESTMANN AND PIERCE

In this paper, we use a generalization for divergent processes, as suggested by vanGlabbeek [Gla93, PS94], where coupling requires the ability of a simulatingprocess to evolve into a simulated process by internal action. We recapitulate theformal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S1 , S2),where S1 and S&1

2 are weak simulations. A coupled simulation is a mutual simula-tion (S1 , S2) satisfying

v if (P, Q) #S1 , then there is some Q$ such that QOQ$ and (P, Q$) #S2 ;

v if (P, Q$) #S2 , then there is some P$ such that POP$ and (P$, Q$) #S1 .

Processes P and Q are coupled simulation equivalent (or coupled similar), writtenP$Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S1 , S2)may be depicted as an internally out-of-step bisimulation by:

Of two processes contained in one component relation of some coupled simulation,the simulated (more committed) process is always a bit ahead of its simulating (lesscommitted) counterpart. Intuitively, ``Q coupled simulates P'' means that ``Q is atmost as committed as P '' with respect to internal choices and that Q may internallyevolve to a state Q$ where it is at least as committed as P, i.e., where P coupledsimulates Q$.

Fact 2.4.2. Let (S1 , S2) be a coupled simulation. Then (S&12 , S&1

1 ) is acoupled simulation.

Observe that the pair (r, r ) is a coupled simulation by trivial couplingsequences, as motivated at the beginning of the section. Furthermore, the motivat-ing example witnesses that coupled simulation equivalence is strictly coarser thanweak bisimilarity.

Fact 2.4.3. r / $ .

On processes without infinite {-sequences, coupled simulation is strictly finerthan testing equivalence, so it represents a reasonable (and coinductively defined)candidate in the lattice of process equivalences [Gla93]. In general, it is compatiblewith all operators of P.

Proposition 2.4.4. $ is a congruence on P.

Proof. Reflexivity is immediate by definition.For symmetry, let P1 $ P2 by (S1 , S2) with (P1 , P2) #S1 &S2 . This is equivalent

to (P2 , P1) #S&12 &S&1

1 , and, by Fact .4.2, we have P2 $ P1 .

10 NESTMANN AND PIERCE

In this paper, we use a generalization for divergent processes, as suggested by vanGlabbeek [Gla93, PS94], where coupling requires the ability of a simulatingprocess to evolve into a simulated process by internal action. We recapitulate theformal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S1 , S2),where S1 and S&1

2 are weak simulations. A coupled simulation is a mutual simula-tion (S1 , S2) satisfying

v if (P, Q) #S1 , then there is some Q$ such that QOQ$ and (P, Q$) #S2 ;

v if (P, Q$) #S2 , then there is some P$ such that POP$ and (P$, Q$) #S1 .

Processes P and Q are coupled simulation equivalent (or coupled similar), writtenP$Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S1 , S2)may be depicted as an internally out-of-step bisimulation by:

Of two processes contained in one component relation of some coupled simulation,the simulated (more committed) process is always a bit ahead of its simulating (lesscommitted) counterpart. Intuitively, ``Q coupled simulates P'' means that ``Q is atmost as committed as P '' with respect to internal choices and that Q may internallyevolve to a state Q$ where it is at least as committed as P, i.e., where P coupledsimulates Q$.

Fact 2.4.2. Let (S1 , S2) be a coupled simulation. Then (S&12 , S&1

1 ) is acoupled simulation.

Observe that the pair (r, r ) is a coupled simulation by trivial couplingsequences, as motivated at the beginning of the section. Furthermore, the motivat-ing example witnesses that coupled simulation equivalence is strictly coarser thanweak bisimilarity.

Fact 2.4.3. r / $ .

On processes without infinite {-sequences, coupled simulation is strictly finerthan testing equivalence, so it represents a reasonable (and coinductively defined)candidate in the lattice of process equivalences [Gla93]. In general, it is compatiblewith all operators of P.

Proposition 2.4.4. $ is a congruence on P.

Proof. Reflexivity is immediate by definition.For symmetry, let P1 $ P2 by (S1 , S2) with (P1 , P2) #S1 &S2 . This is equivalent

to (P2 , P1) #S&12 &S&1

1 , and, by Fact .4.2, we have P2 $ P1 .

10 NESTMANN AND PIERCE

In this paper, we use a generalization for divergent processes, as suggested by van Glabbeek [Gla93, PS94], where coupling requires the ability of a simulating process to evolve into a simulated process by internal action. We recapitulate the formal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S1, S2), where S1 and S2⁻¹ are weak simulations. A coupled simulation is a mutual simulation (S1, S2) satisfying

• if (P, Q) ∈ S1, then there is some Q′ such that Q ⇒ Q′ and (P, Q′) ∈ S2;

• if (P, Q′) ∈ S2, then there is some P′ such that P ⇒ P′ and (P′, Q′) ∈ S1.

Processes P and Q are coupled simulation equivalent (or coupled similar), written P ≍ Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S1, S2) may be depicted as an internally out-of-step bisimulation. [Diagram not recoverable in this transcript.]

Of two processes contained in one component relation of some coupled simulation, the simulated (more committed) process is always a bit ahead of its simulating (less committed) counterpart. Intuitively, "Q coupled simulates P" means that "Q is at most as committed as P" with respect to internal choices, and that Q may internally evolve to a state Q′ where it is at least as committed as P, i.e., where P coupled simulates Q′.

Fact 2.4.2. Let (S1, S2) be a coupled simulation. Then (S2⁻¹, S1⁻¹) is a coupled simulation.


43Wednesday, October 14, 2009

Page 101: Encodings into Asynchronous Pi Calculus

Coupled Simulation (II)

In this paper, we use a generalization for divergent processes, as suggested by van Glabbeek [Gla93, PS94], where coupling requires the ability of a simulating process to evolve into a simulated process by internal action. We recapitulate the formal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S1, S2), where S1 and S2⁻¹ are weak simulations. A coupled simulation is a mutual simulation (S1, S2) satisfying

• if (P, Q) ∈ S1, then there is some Q′ such that Q ⇒ Q′ and (P, Q′) ∈ S2;

• if (P, Q′) ∈ S2, then there is some P′ such that P ⇒ P′ and (P′, Q′) ∈ S1.

Processes P and Q are coupled simulation equivalent (or coupled similar), written P ≍ Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S1, S2) may be depicted as an internally out-of-step bisimulation. [Diagram not recoverable in this transcript.]

Of two processes contained in one component relation of some coupled simulation, the simulated (more committed) process is always a bit ahead of its simulating (less committed) counterpart. Intuitively, "Q coupled simulates P" means that "Q is at most as committed as P" with respect to internal choices, and that Q may internally evolve to a state Q′ where it is at least as committed as P, i.e., where P coupled simulates Q′.

Fact 2.4.2. Let (S1, S2) be a coupled simulation. Then (S2⁻¹, S1⁻¹) is a coupled simulation.

Observe that the pair (≈, ≈) is a coupled simulation by trivial coupling sequences, as motivated at the beginning of the section. Furthermore, the motivating example witnesses that coupled simulation equivalence is strictly coarser than weak bisimilarity.

Fact 2.4.3. ≈ ⊊ ≍.

On processes without infinite τ-sequences, coupled simulation is strictly finer than testing equivalence, so it represents a reasonable (and coinductively defined) candidate in the lattice of process equivalences [Gla93]. In general, it is compatible with all operators of P.

Proposition 2.4.4. ≍ is a congruence on P.

Proof. Reflexivity is immediate by definition. For symmetry, let P1 ≍ P2 by (S1, S2) with (P1, P2) ∈ S1 ∩ S2. This is equivalent to (P2, P1) ∈ S2⁻¹ ∩ S1⁻¹, and, by Fact 2.4.2, we have P2 ≍ P1.
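The two coupling clauses of Definition 2.4.1 can be checked mechanically on finite-state systems. The following Python sketch is purely illustrative (it is not from the paper): it verifies a candidate pair (S1, S2) over a small labeled transition system in which P = τ.a + τ.b and Q = τ.a + τ.(a + τ.b), with M = a + τ.b playing the role of Q's "partially committed" intermediate state. All state names and the concrete relations are assumptions chosen for this example.

```python
# Toy check of Definition 2.4.1 on a finite LTS (illustrative sketch only).
# P = tau.a + tau.b            (states P, Pa, Pb, 0)
# Q = tau.a + tau.(a + tau.b)  (states Q, Qa, M, Mb, 0)
TAU = "tau"
TRANS = {
    ("P", TAU): {"Pa", "Pb"}, ("Pa", "a"): {"0"}, ("Pb", "b"): {"0"},
    ("Q", TAU): {"Qa", "M"},  ("Qa", "a"): {"0"},
    ("M", "a"): {"0"},        ("M", TAU): {"Mb"}, ("Mb", "b"): {"0"},
}

def tau_closure(s):
    """All states reachable from s by zero or more tau steps (s ==> t)."""
    seen, todo = {s}, [s]
    while todo:
        for u in TRANS.get((todo.pop(), TAU), ()):
            if u not in seen:
                seen.add(u)
                todo.append(u)
    return seen

def weak(s, a):
    """Weak transitions: tau* a tau* for visible a; tau* alone for a == TAU."""
    pre = tau_closure(s)
    if a == TAU:
        return pre
    return {t for p in pre for q in TRANS.get((p, a), ()) for t in tau_closure(q)}

def is_weak_simulation(R):
    """Every strong step of the left component is weakly matched on the right."""
    return all(any((p2, q2) in R for q2 in weak(q, a))
               for (p, q) in R
               for (s, a) in TRANS if s == p
               for p2 in TRANS[(s, a)])

def is_coupled_simulation(S1, S2):
    """Def 2.4.1: S1 and S2^-1 are weak simulations, plus the two couplings."""
    if not is_weak_simulation(S1):
        return False
    if not is_weak_simulation({(q, p) for (p, q) in S2}):
        return False
    return (all(any((p, q2) in S2 for q2 in tau_closure(q)) for (p, q) in S1)
        and all(any((p2, q) in S1 for p2 in tau_closure(p)) for (p, q) in S2))

# P and Q are coupled similar (though not weakly bisimilar): the pair (P, Q)
# lies in both components of the coupled simulation (S1, S2) below.
S1 = {("P", "Q"), ("Pa", "Qa"), ("Pb", "Mb"), ("Pb", "M"), ("0", "0")}
S2 = {("P", "Q"), ("P", "M"), ("Pa", "Qa"), ("Pb", "Mb"), ("0", "0")}
print(is_coupled_simulation(S1, S2))  # -> True
```

Removing the pair ("Pb", "M") from S1 breaks the second coupling clause for ("P", "M") ∈ S2, which shows where the "less committed" state M must be caught up with by an internal evolution of P.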


44Wednesday, October 14, 2009

Page 102: Encodings into Asynchronous Pi Calculus

Coupled Simulation (III)

In this paper, we use a generalization for divergent processes, as suggested by van Glabbeek [Gla93, PS94], where coupling requires the ability of a simulating process to evolve into a simulated process by internal action. We recapitulate the formal definition:

Definition 2.4.1 (Coupled simulation). A mutual simulation is a pair (S1, S2), where S1 and S2⁻¹ are weak simulations. A coupled simulation is a mutual simulation (S1, S2) satisfying

• if (P, Q) ∈ S1, then there is some Q′ such that Q ⇒ Q′ and (P, Q′) ∈ S2;

• if (P, Q′) ∈ S2, then there is some P′ such that P ⇒ P′ and (P′, Q′) ∈ S1.

Processes P and Q are coupled simulation equivalent (or coupled similar), written P ≍ Q, if they are related by both components of some coupled simulation.

Using dotted lines to represent the simulations, the coupling property of (S1, S2) may be depicted as an internally out-of-step bisimulation. [Diagram not recoverable in this transcript.]

Of two processes contained in one component relation of some coupled simulation, the simulated (more committed) process is always a bit ahead of its simulating (less committed) counterpart. Intuitively, "Q coupled simulates P" means that "Q is at most as committed as P" with respect to internal choices, and that Q may internally evolve to a state Q′ where it is at least as committed as P, i.e., where P coupled simulates Q′.

Fact 2.4.2. Let (S1, S2) be a coupled simulation. Then (S2⁻¹, S1⁻¹) is a coupled simulation.

Observe that the pair (≈, ≈) is a coupled simulation by trivial coupling sequences, as motivated at the beginning of the section. Furthermore, the motivating example witnesses that coupled simulation equivalence is strictly coarser than weak bisimilarity.

Fact 2.4.3. ≈ ⊊ ≍.

On processes without infinite τ-sequences, coupled simulation is strictly finer than testing equivalence, so it represents a reasonable (and coinductively defined) candidate in the lattice of process equivalences [Gla93]. In general, it is compatible with all operators of P.

Proposition 2.4.4. ≍ is a congruence on P.

Proof. Reflexivity is immediate by definition. For symmetry, let P1 ≍ P2 by (S1, S2) with (P1, P2) ∈ S1 ∩ S2. This is equivalent to (P2, P1) ∈ S2⁻¹ ∩ S1⁻¹, and, by Fact 2.4.2, we have P2 ≍ P1.


45Wednesday, October 14, 2009


Page 104: Encodings into Asynchronous Pi Calculus

Overview

Part 0a: Encodings vs Full Abstraction
  Comparison of Languages/Calculi
  Correctness of Encodings

Part 0b: Asynchronous Pi Calculus

Part 1: Input-Guarded Choice
  Encoding (Distributed Implementation)
  Decoding (Correctness Proof)

Part 2: Output-Guarded Choice
  Encoding Separate Choice
  Encoding Mixed Choice

Conclusions

46Wednesday, October 14, 2009

Page 105: Encodings into Asynchronous Pi Calculus

Choice in Asynchronous Contexts

The intuition behind asynchronous send operations is that send actions happen instantaneously, without interference from the environment.

Having such a send compete with a receive, which must itself wait for the availability of a message, will always make the send action win over the receive action.

So, better not to mix send and receive within a choice ...
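This intuition can be made concrete with a toy enabledness check (an illustrative sketch, not part of the calculus; all names below are invented for this example): in an asynchronous semantics a send branch needs no communication partner, so it is enabled in every state, whereas a receive branch is enabled only when a matching message is available.

```python
def enabled_branches(branches, mailbox):
    """Toy enabledness check for a mixed choice in an asynchronous setting.

    A ("send", channel) branch fires unconditionally: emitting a message
    needs no partner. A ("recv", channel) branch needs a matching message
    already in the mailbox. Hence a send branch can always pre-empt
    every receive branch of the same choice.
    """
    return [b for b in branches
            if b[0] == "send" or (b[0] == "recv" and b[1] in mailbox)]

# Mixed choice  a<v> + b(x).P  with no message on b available yet:
mixed = [("send", "a"), ("recv", "b")]
print(enabled_branches(mixed, mailbox=[]))     # -> [('send', 'a')]
print(enabled_branches(mixed, mailbox=["b"]))  # -> both branches enabled
```

With an empty mailbox only the send is enabled, so any fair resolution of the choice is skewed toward the send branch; this is the asymmetry that motivates keeping send and receive in separate choices.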

47Wednesday, October 14, 2009

Page 106: Encodings into Asynchronous Pi Calculus

Syntax + Semantics

which is insensitive to the temporary buffering of messages. An example process term (Section 4.3) exhibits the essential difference between the two choice encodings and prepares the ground for a discussion of possible correctness statements.

In the above-mentioned protocols, mutual exclusion among the branches of a choice is implemented by a concurrent race for a shared lock. Since the lock may be most succinctly expressed by means of some form of Boolean values, we introduce (in Section 4) an intermediate language that provides a convenient primitive operator for testing Boolean-valued messages; Section 4.4 gives a careful treatment of the expansion of those intermediate terms.
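The race for the shared lock can be sketched as a sequential simulation (a hedged illustration only, not the π-calculus protocol itself: branch order stands in for nondeterministic scheduling, and strings stand in for continuation processes; none of the names below come from the paper):

```python
def run_choice(branches, messages):
    """Each branch that consumes a message then tests a shared Boolean
    lock: the first reader sees True and commits; later readers see
    False and re-emit the message they consumed."""
    lock = [True]                         # shared one-shot lock, initially t
    committed = None
    for channel, continuation in branches:
        if channel in messages:           # branch can consume a message
            messages.remove(channel)
            if lock[0]:                   # test the lock: the t case
                lock[0] = False           # won the race: commit
                committed = continuation
            else:                         # the f case
                messages.append(channel)  # lost: resend the message
    return committed

msgs = ["a", "b"]
winner = run_choice([("a", "P1"), ("b", "P2")], msgs)
print(winner, msgs)   # one branch commits; the losing message survives
```

Exactly one branch commits, and the message consumed by a losing branch is restored, which is the mutual-exclusion behavior the encodings must implement.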

4.1. The Setting

An encoding is a function from some source syntax into some target syntax. This subsection introduces the language PΣ that represents the source syntax. In addition, we refine the setting by an intermediate language Ptest to be used instead of the intended target language P.

In diagrams like the above, dotted arrows are just placeholders for encoding functions.

Source language. We introduce choice as a finitely indexed operator on input terms. Its behavior is specified by the operational semantics rule in Table 3, which formally describes that each branch in a choice may be selected and consume an external message, preempting all of its competing branches. The set PΣ of processes with input-guarded choice is generated by adding a clause for Σ-expressions to the grammar of P,

P ::= · · · | Σ_{j ∈ J} R_j ,

where J is some finite indexing set. We also use the abbreviation R1 + R2 to denote binary input-guarded choice. The labeled transition semantics of PΣ is the relation generated by the rules in Table 1 and rule C-INP in Table 3.

TABLE 3
Operational Semantics for Σ-expressions

C-INP:  Σ_{j ∈ J} y_j(x).P_j  --y_k z-->  P_k[z/x]   if k ∈ J
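Rule C-INP reads: an incoming message on y_k selects any matching branch k of the sum, the object is substituted into its continuation, and all competing branches are discarded. A minimal sketch (process bodies as strings; a textual replace stands in for capture-avoiding substitution, and all names are invented here):

```python
def c_inp(branches, subject, obj):
    """branches: (y_j, x_j, P_j) triples of input guards.  A message
    subject<obj> fires a matching branch and preempts the rest."""
    for y, x, body in branches:
        if y == subject:                  # some k in J with y_k = subject
            return body.replace(x, obj)   # P_k[z/x], crudely
    return None                           # no match: the choice cannot fire

# (y1(x).x̄x + y2(x).0) receiving y1<z> steps to z̄z:
print(c_inp([("y1", "x", "out x x"), ("y2", "x", "nil")], "y1", "z"))
```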

15 DECODING CHOICE ENCODINGS

where replication is restricted to input processes and evaluated lazily [HY95, MP95]; i.e., copies of a replicated input are spawned during communication.

2.1. Syntax

Let N be a countable set of names. Then, the set P of processes P is defined by

P ::= (x)P | P|P | ȳz | 0 | R | !R
R ::= y(x).P,

where x, y, z ∈ N. The semantics of restriction, parallel composition, input, and output is standard. The form !y(x).P is the replication operator restricted to input prefixes. In ȳz and y(x), y is called the subject, whereas x, z are called objects. We refer to outputs as messages and to (ordinary or replicated) inputs as receptors. A term is guarded when it occurs as a subterm of an input prefix R. As a notational abbreviation, if the object in some input or output is of no importance in some term, then it may be completely omitted; i.e., y(x).P with x ∉ fn(P) and ȳz are written y.P and ȳ. The usual constant 0 denoting the inactive process can be derived as (x) x̄, but we prefer to include it explicitly for a clean specification of structural and operational semantics rules.

The definitions of name substitution and α-conversion are standard. A name x is bound in P if P contains an x-binding operator, i.e., either a restriction (x)P or an input prefix y(x).P, as a subterm. A name x is free in P if it occurs outside the scope of an x-binding operator. We write bn(P) and fn(P) for the sets of P's bound and free names; n(P) is their union. Renaming of bound names by α-conversion ≡α is as usual. Substitution P[z/x] is given by replacing all free occurrences of x in P with z, first α-converting P to avoid capture.

Operator precedence is, in decreasing order of binding strength: (1) prefixing, restriction, (2) substitution, (3) parallel composition.

2.2. Operational Semantics

Let y, z ∈ N be arbitrary names. Then, the set L of labels μ is generated by

μ ::= ȳ(z) | ȳz | yz | τ

representing bound and free output, early input, and internal action. The functions bn and fn yield the bound names, i.e., the objects in bound (bracketed) outputs, and the free names (all others) of a label. We write n(μ) for their union bn(μ) ∪ fn(μ).

The operational semantics for processes is given as a transition system with P as its set of states. The transition relation ⟶ ⊆ P × L × P is defined as the smallest relation generated by the set of rules in Table 1. We use an early instantiation scheme as expressed in the INP and COM/CLOSE rules, since it allows us to define bisimulation without clauses for name instantiation and since it allows for a more intuitive modeling in Section 4, but this decision does not affect the validity of our


48

Page 107: Encodings into Asynchronous Pi Calculus

Encoding Choice

49

Page 108: Encodings into Asynchronous Pi Calculus

Two Choice Encodings (I)


50

Page 109: Encodings into Asynchronous Pi Calculus

Two Choice Encodings (I)


TABLE 4
Operational Semantics for test-expressions

TRUE:  test l then P1 else P2  --lt-->  P1

FALSE: test l then P1 else P2  --lf-->  P2

Target language(s). Instead of directly defining the encodings from PΣ into P, we use an intermediate language Ptest that provides special Boolean names and also a conditional form test l then P else Q for testing the (Boolean) value of messages along some channel l.

Let B := {f, t} be the set of special names that we interpret as Boolean values. Let V := N ⊎ B be referred to as the set of values. Then the set Ptest of processes with conditionals is defined by adding a clause for test-expressions to the grammar of P:

P ::= · · · | test y then P else P.

In addition, in order to forbid restriction on, communication on, and substitution for Booleans, we adopt some naming conventions concerning their admissible occurrences in the clauses of the grammar of P (cf. Section 2) by requiring that x, y ∈ N and z ∈ V. Thus, the only use of Booleans in Ptest is as objects in messages. Note that test-expressions can be regarded as abbreviations of if-expressions that are prefixed with some input of a Boolean message.

test y then P1 else P2 ≜ y(b).if b then P1 else P2
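Read as a derived form, this abbreviation is a purely syntactic expansion; a minimal sketch (process terms as plain strings, with the bound name b assumed fresh):

```python
def expand_test(channel, p_true, p_false, fresh="b"):
    """Rewrites  test l then P1 else P2  into the prefixed conditional
    l(b).if b then P1 else P2, following the abbreviation in the text."""
    return f"{channel}({fresh}).if {fresh} then {p_true} else {p_false}"

print(expand_test("l", "P1", "P2"))   # l(b).if b then P1 else P2
```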

The reason for using test instead of if is, on the one hand, better readability and, on the other hand, that if, like matching operators, destroys some congruence properties of the language. In order to model the behavior of test-expressions that can interact with messages, we supply an operational semantics by the rules in Table 4 that properly fits with the labeled transition system semantics of P and the explanation in terms of if.² The labeled transition semantics of Ptest is then determined as the smallest relation generated by the rules in Tables 4 and 1, where we add the side condition z ∈ N to the rules INP and R-INP; this additional side condition prevents the standard input forms from receiving special names and thus also from unintendedly using special names as channels.

After having presented the encodings into Ptest in Section 4.2, we formalize in Section 4.4 the interpretation of those encodings as encodings into P. The reader might object that the encodings cannot be regarded as identical, since Ptest contains test-expressions. However, we can safely make this identification up to some notion of equivalence. As one might expect, this depends on the way that we actually use the


² By using an early instantiation scheme, the interaction between Boolean messages and test-expressions is handled by the standard COM-rule; otherwise, we would have to supply some refined interaction rule for controlling the admissible transmission of Booleans.


50

Page 110: Encodings into Asynchronous Pi Calculus

Two Choice Encodings (I)

which is insensitive to the temporary buffering of messages. An example processterm (Section 4.3) exhibits the essential difference between the two choice encodingsand prepares the ground for a discussion of possible correctness statements.In the above-mentioned protocols, mutual exclusion among the branches of a

choice is implemented by a concurrent race for a shared lock. Since the lock maybe most succinctly expressed by means of some form of Boolean values, we intro-duce (in Section 4) an intermediate language that provides a convenient primitiveoperator for testing Boolean-valued messages, Section 4.4 gives a careful treatmentof the expansion of those intermediate terms.

4.1. The Setting

An encoding is a function from some source syntax into some target syntax. Thissubsection introduces the language P7 that represents the source syntax. In addi-tion, we refine the setting by an intermediate language Ptest to be used instead ofthe intended target language P.

In diagrams like the above, dotted arrows are just placeholders for encodingfunctions.

Source language. We introduce choice as a finitely indexed operator on inputterms. Its behavior is specified by the operational semantics rule in Table 3, whichformally describes that each branch in a choice may be selected and consume anexternal message, preempting all of its competing branches. The set P7 of processeswith input-guarded choice is generated by adding a clause for !-expressions to thegrammar of P,

P ::= } } } | :j # J

Rj ,

where J is some finite indexing set. We also use the abbreviation R1+R2 to denotebinary input-guarded choice. The labeled transition semantics of P7 is the relationgenerated by the rules in Table 1 and rule C-INP in Table 3.

TABLE 3Operational Semantics for !-expressions

C-INP: :j # J

yj (x) .Pj ww!ykz Pk if k # J

15DECODING CHOICE ENCODINGS

TABLE 4Operational Semantics for test-expressions

TRUE: test l then P1 else P2 w!lt P1

FALSE: test l then P1 else P2 w!lf P2

Target language(s). Instead of directly defining the encodings from P7 into P,we use an intermediate language Ptest that provides special Boolean names and alsoa conditional form test l then P else Q for testing the (Boolean) value of messagesalong some channel l.Let B :=[f, t] be the set of special names that we interpret as Boolean values. Let

V :=N_+ B be referred to as the set of values. Then the set Ptest of processes withconditionals is defined by adding a clause for test-expressions to the grammar of P:

P ::= } } } | test y then P else P.

In addition, in order to forbid restriction on, communication on, and substitutionfor Booleans, we adopt some naming conventions concerning their admissibleoccurrences in the clauses of the grammar of P (cf. Section 2) by requiring thatx, y #N and z #V. Thus, the only use of Booleans in Ptest is as objects in messages.Note that test-expressions can be regarded as abbreviations of if-expressions thatare prefixed with some input of a Boolean message.

test y then P1 else P2!y(b) . if b then P1 else P2

The reason for using test instead of if is, on the one hand, better readability and,on the other hand, that if, like matching operators, destroys some congruenceproperties of the language. In order to model the behavior of test-expressions thatcan interact with messages, we supply an operational semantics by the rules inTable 4 that properly fits with the labeled transition system semantics of P and theexplanation in terms of if.2 The labeled transition semantics of Ptest is then deter-mined as the smallest relation generated by the rules in Tables 4 and 1, where weadd the side-condition z #N to the rules INP and R-INP; this additional side-con-dition prevents the standard input forms from receiving special names and thus alsofrom unintendedly using special names as channels.After having presented the encodings into T in Section 4.2, we formalize in

Section 4.4 the interpretation of those encodings as encodings into P. The readermight object encodings cannot be regarded as identical since T contains test-expressions. However, we can safely make this identification up to some notion ofequivalence. As one might expect, this depends on the way that we actually use the

16 NESTMANN AND PIERCE

2 By using an early instantiation scheme, the interaction between Boolean messages and test-expres-sions is handled by the standard COM-rule; otherwise, we would have to supply some refined interac-tion rule for controlling the admissible transmission of Booleans.

For notational convenience, the target language contains:TABLE 4

Operational Semantics for test-expressions

TRUE: test l then P1 else P2 w!lt P1

FALSE: test l then P1 else P2 w!lf P2

Target language(s). Instead of directly defining the encodings from P7 into P,we use an intermediate language Ptest that provides special Boolean names and alsoa conditional form test l then P else Q for testing the (Boolean) value of messagesalong some channel l.Let B :=[f, t] be the set of special names that we interpret as Boolean values. Let

V :=N_+ B be referred to as the set of values. Then the set Ptest of processes withconditionals is defined by adding a clause for test-expressions to the grammar of P:

P ::= } } } | test y then P else P.

In addition, in order to forbid restriction on, communication on, and substitutionfor Booleans, we adopt some naming conventions concerning their admissibleoccurrences in the clauses of the grammar of P (cf. Section 2) by requiring thatx, y #N and z #V. Thus, the only use of Booleans in Ptest is as objects in messages.Note that test-expressions can be regarded as abbreviations of if-expressions thatare prefixed with some input of a Boolean message.

test y then P1 else P2!y(b) . if b then P1 else P2

The reason for using test instead of if is, on the one hand, better readability and, on the other hand, that if, like matching operators, destroys some congruence properties of the language. In order to model the behavior of test-expressions that can interact with messages, we supply an operational semantics by the rules in Table 4 that properly fits with the labeled transition system semantics of P and the explanation in terms of if.2 The labeled transition semantics of Ptest is then determined as the smallest relation generated by the rules in Tables 4 and 1, where we add the side-condition z ∈ N to the rules INP and R-INP; this additional side-condition prevents the standard input forms from receiving special names and thus also from unintendedly using special names as channels.

After having presented the encodings into T in Section 4.2, we formalize in Section 4.4 the interpretation of those encodings as encodings into P. The reader might object that the encodings cannot be regarded as identical, since T contains test-expressions. However, we can safely make this identification up to some notion of equivalence. As one might expect, this depends on the way that we actually use the

16 NESTMANN AND PIERCE

2 By using an early instantiation scheme, the interaction between Boolean messages and test-expressions is handled by the standard COM-rule; otherwise, we would have to supply some refined interaction rule for controlling the admissible transmission of Booleans.


with: (* for the special names t and f *)

50 Wednesday, October 14, 2009

Page 111: Encodings into Asynchronous Pi Calculus

Two Choice Encodings (II)

additional primitives in our choice encodings: we only use the Booleans on restricted channels, so they never become visible. The formal justification later on uses this fact explicitly (cf. Section 5.5).

4.2. Two Choice Encodings

This subsection contains two simple encodings of S into its choice-free fragment T. Both encodings C⟦·⟧, D⟦·⟧ : S → T map terms of the source language S inductively into the target language T. Since the encodings coincide on all constructors but choice, we use a common homomorphic scheme of definition, where ⟦·⟧ may denote either C⟦·⟧ or D⟦·⟧:

⟦(x) P⟧ =def (x) ⟦P⟧          ⟦P1 | P2⟧ =def ⟦P1⟧ | ⟦P2⟧

⟦ȳ⟨z⟩⟧ =def ȳ⟨z⟩              ⟦0⟧ =def 0

⟦y(x) . P⟧ =def y(x) . ⟦P⟧    ⟦!R⟧ =def !⟦R⟧.

This scheme will also be reused later on for other encoding functions: if an encoding X⟦·⟧ acts homomorphically on each constructor of P, then it is defined according to the scheme with ⟦·⟧ replaced by X⟦·⟧.

A source-level choice term is implemented by a particular target-level structure: for each choice expression, the translation

⟦Σj∈J Rj⟧ =def (l)( l̄⟨t⟩ | Πj∈J Branch_l(⟦Rj⟧) )

installs a local lock (a message on l that carries a Boolean name) that is only accessible for the parallel composition of its branches, which use the lock for running a mutual exclusion protocol, as specified by the semantic rule C-INP. The protocol crucially depends on the invariant that at any time there is at most one Boolean message available on l. If we manage to maintain this invariant, i.e., if it holds for all derivatives, then we are guaranteed that at any time the value of the lock is uniquely determined, which allows us to regard possible subsequent messages on l as representing the behavior of a ``determinate'' lock that is accessible via l. The determinacy of the lock limits the nondeterminacy of the encoding solely to the concurrent race of the translations of the branches. Initially the lock carries t, so it represents the fact that the choice is not yet resolved; consequently, a committed choice will be recognizable by the lock carrying f.

Two slightly different ways of implementing the mutual exclusion protocol are given in the following subsections, differing only with respect to the possibility of undoing activities of branches, thus yielding two different definitions for the implementation Branch_l.
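The lock invariant can be pictured with a minimal Python sketch (my own illustration, not part of the encoding): the restricted channel l behaves like a one-slot cell that always holds exactly one Boolean message, and since every reader reinstalls f, only the first reader ever observes t.

```python
# One-slot model of the lock channel l: it holds a single Boolean message.
# Each reader consumes the current message and immediately reinstalls f,
# so the value t can be observed at most once across all readers.

class Lock:
    def __init__(self):
        self.slot = 't'            # initially l<t>: choice not yet resolved

    def read(self):
        value = self.slot          # consume the Boolean message on l
        self.slot = 'f'            # reinstall l<f>, whatever was read
        return value

lock = Lock()
observations = [lock.read() for _ in range(4)]
print(observations)                # only the first read yields 't'
```

The sketch makes the uniqueness claim tangible: however many readers race, the sequence of observed values contains t exactly once.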

17 DECODING CHOICE ENCODINGS

51Wednesday, October 14, 2009


Page 115: Encodings into Asynchronous Pi Calculus

Two Choice Encodings (III)

Divergence-Free Protocol

In the Introduction, we presented the following algorithm (note the use of test):

Branch_l(yj(x) . Pj) =def yj(x) . test l then (C⟦Pj⟧ | l̄⟨f⟩) else (ȳj⟨x⟩ | l̄⟨f⟩).

Every branch tests the lock after having received a value on its channel. If the lock carries t, then the testing branch proceeds with its continuation; otherwise it resends the message that it has consumed and terminates. Each time the lock is read, it is immediately reinstalled (independent of its former value) with the value f. Thus, at most one branch will ever be chosen, since only the first branch that reads the lock will read t.

In order to conveniently denote intermediate states in the branches of an encoding, we use the following abbreviations, where R on the left-hand side matches the term y(x) . P such that y, x, P may be used on the right-hand side of the definitions.

Read_l(R) =def y(x) . Test_l(R)

Test_l(R) =def test l then Commit_l(R) else Abort_l(R)

Commit_l(R) =def l̄⟨f⟩ | P

Abort_l(R) =def l̄⟨f⟩ | ȳ⟨x⟩

Observe that Branch_l(R) = Read_l(R) by definition, so a choice expression is translated by

C⟦Σj∈J Rj⟧ =def (l)( l̄⟨t⟩ | Πj∈J Read_l(C⟦Rj⟧) )   where l is fresh

into the composition of its branches in Read-state and the lock carrying t.

Note that the C-encoding does not add divergence to the behavior of source terms, as can be observed by inspection of the abbreviations: only finite τ-sequences are used in order to implement a choice, whereas τ-loops are not possible (we prove it formally in Section 5.8).
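The following Python sketch (a hypothetical scheduler of my own, not the paper's transition semantics) plays out a run of the C-protocol with two branches that both received a message: the branches test the lock in some nondeterministic order, exactly one commits, and the others abort and resend; every run is finite, matching the divergence-freedom just noted.

```python
import random

def run_C(ready_branches, rng):
    """Branches that consumed a message each test the lock once: the first
    reads t and commits; all later ones read f, abort, and resend."""
    lock = 't'
    committed, resent = [], []
    order = list(ready_branches)
    rng.shuffle(order)                # nondeterministic scheduling of tests
    for branch in order:
        value, lock = lock, 'f'       # read the lock, reinstall f
        if value == 't':
            committed.append(branch)  # Commit_l: run the continuation
        else:
            resent.append(branch)     # Abort_l: resend the consumed message
    return committed, resent

committed, resent = run_C(['y1', 'y2'], random.Random(3))
print(committed, resent)              # exactly one commit, one abort
```

Which branch wins depends on the scheduling (here, the shuffle), but the loop body runs once per ready branch, so there is no way to diverge.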

Protocol with Undo-loops

We now define the other choice encoding. Its main difference from the encoding C⟦·⟧ is that a supposedly committed branch may change its mind and deny the commitment, releasing the lock and giving back the value it consumed from the environment. Let internal choice be:

P ⊕ Q =def (i)( ī | i . P | i . Q )   where i is fresh.
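As a quick sanity check, internal choice can be modelled in Python (an illustration with assumed names, not the calculus itself): the single trigger on the fresh name i is consumed by exactly one of the two receivers, so either branch, but only one per run, can win.

```python
import random

def internal_choice(p, q, rng):
    # (i)( i | i.P | i.Q ): the lone message on the private name i is
    # consumed by exactly one receiver, selecting P or Q internally,
    # without any influence from the environment.
    return rng.choice([p, q])

# Different schedulings (here, seeds) can produce either outcome.
outcomes = {internal_choice('P', 'Q', random.Random(seed)) for seed in range(10)}
print(sorted(outcomes))               # both branches are reachable
```

Because the name i is restricted, the environment can neither observe nor bias the decision, which is exactly what "internal" means here.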

18 NESTMANN AND PIERCE


52Wednesday, October 14, 2009

Page 116: Encodings into Asynchronous Pi Calculus

Two Choice Encodings (IV)


The encoding D⟦·⟧ is then defined by modifying just the clause Test_l(R) of the C-encoding by inserting an internal choice as follows:

Test_l(R) =def test l then (Commit_l(R) ⊕ Undo_l(R)) else Abort_l(R)

Undo_l(R) =def l̄⟨t⟩ | ȳ⟨x⟩.

Note that in contrast to the cases of Commit#Abort, where the lock's value is set tof, in the case of Undo it is crucial to set it to t, because in this case the choice isstill not resolved. In order to have a fresh copy of the branch in initial stateavailable after having undone an activity, we use replication of the Read-agents asthe translation of branches: with Branchl(R)=! Readl(R) a choice expression istranslated by:

D⟦ Σ_{j∈J} R_j ⟧ =def (νl)( l̄⟨t⟩ | Π_{j∈J} !Read_l(D⟦R_j⟧) )   where l is fresh.

In their D-translation, convergent branches of a choice term possibly engage in internal loops that may be used to restart a possibly committing branch from an initial state. This behavior might be considered problematic from an implementation point of view, but in the next subsection we shall see that the D-encoding is interesting despite its divergence.
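To illustrate the undo loop concretely, here is a Python sketch of our own (names invented for this example). For a deterministic, runnable demo, the internal choice between Commit and Undo is replaced by a counter that undoes a fixed number of times before committing; with a genuine internal choice, the loop below is exactly where divergence can arise.

```python
import queue
import threading

def make_choice_with_undo(branches, undos=2):
    """D-encoding sketch: a branch that wins the lock may still undo,
    reinstalling the lock with True (t, since the choice is unresolved)
    and resending the consumed value; the surrounding loop plays the
    role of the replicated Read-agent, restarting from the initial state.
    For determinism, each branch undoes `undos` times, then commits."""
    lock = queue.Queue()
    lock.put(True)                    # lock initially carries t
    for chan, cont in branches.items():
        def branch(chan=chan, cont=cont, remaining=undos):
            while True:               # replication: a fresh copy each round
                x = chan.get()
                flag = lock.get()
                if not flag:
                    lock.put(False)   # Abort: the choice was already taken
                    chan.put(x)
                    return
                if remaining > 0:     # internal choice resolved to Undo
                    remaining -= 1
                    lock.put(True)    # crucially reinstall t
                    chan.put(x)       # give the value back to the environment
                else:                 # internal choice resolved to Commit
                    lock.put(False)
                    cont(x)
                    return
        threading.Thread(target=branch, daemon=True).start()

# Example: one message, two undo/redo rounds, then the branch commits.
y = queue.Queue()
result = queue.Queue()
make_choice_with_undo({y: lambda v: result.put(v)}, undos=2)
y.put('z')
print(result.get(timeout=2))          # 'z', after two undo rounds
```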

4.3. An Example

We highlight the difference between the two choice encodings by comparing a particular source term with both of its translations. Let

S = ȳ₂⟨z⟩ | N   where N = R₁ + R₂ with R_i = y_i(x).P_i

describe a binary choice in the presence of a single message matching the second of the branches, where P₁, P₂ are arbitrary terms in P (i.e., not containing choices or test-expressions). It turns out that both S ≉ C⟦S⟧ and S ≈ D⟦S⟧ hold, which imply that the C-encoding neither preserves nor reflects weak bisimulation (Lemmas 4.3.3 and 4.3.4).

Divergence-free Encoding

The transition systems of the above S and C⟦S⟧ can be depicted as follows. Since, for establishing bisimulations in the asynchronous π-calculus, input transitions are only considered in the context of arbitrary messages, we do not mention them explicitly in the pictures, but only the internal and output transitions. The dotted lines representing simulation relations are to be read from left to right; when the lines are vertical, simulation holds in both directions. We get:

19 DECODING CHOICE ENCODINGS

53 Wednesday, October 14, 2009

Page 117: Encodings into Asynchronous Pi Calculus (basics.sjtu.edu.cn/summer_school/basics09/Slides/...)

Two Choice Encodings (IV)

Divergence-Free Protocol

In the Introduction, we presented the following algorithm (note the use of test):

Branch_l( y_j(x).P_j ) =def y_j(x). test l then ( C⟦P_j⟧ | l̄⟨f⟩ ) else ( ȳ_j⟨x⟩ | l̄⟨f⟩ ).



Page 120: Encodings into Asynchronous Pi Calculus (basics.sjtu.edu.cn/summer_school/basics09/Slides/...)

Encoding an Example Process (I)


where (letting B_i = Read_l(C⟦R_i⟧)) the states in the diagram are:

C⟦S⟧ = ȳ₂⟨z⟩ | C⟦N⟧

C⟦N⟧ = (νl)( l̄⟨t⟩ | B₁ | B₂ )

C = (νl)( l̄⟨t⟩ | B₁ | Test_l(C⟦R₂⟧){z/x} )

C₂ = (νl)( 0 | B₁ | Commit_l(C⟦R₂⟧){z/x} )
   ≡ (νl)( l̄⟨f⟩ | B₁ ) | C⟦P₂⟧{z/x},   where (νl)( l̄⟨f⟩ | B₁ ) ≈ 0 (see Lemma 5.2.2).

We may phrase the intermediate state C as having partially committed: at that stage of commitment, it is already clear that one of the branches will eventually be chosen, but it is not yet clear whether it will be the activated branch (as we chose in the computation path with C₂) or the competing branch (still waiting at y₁ for some value), since there might still be a suitable message provided by the context which could activate the latter and afterward preempt the former. As a consequence, the state C does not directly correspond to any of the source terms with respect to weak bisimulation, which implies:

Fact 4.3.1. S ≉ C⟦S⟧.

On the other hand, the observation that S ≳ C ≳ P₂{z/x} with S ⟶ P₂{z/x} in the above example suggests coupled simulation as an appropriate notion of correctness for the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D⟦S⟧, again omitting input transitions, are:

20 NESTMANN AND PIERCE

54 Wednesday, October 14, 2009


Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

54Wednesday, October 14, 2009

Page 124: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

Encoding an Example Process (I)

The encoding D⟦·⟧ is then defined by modifying just the clause Test_l(R) of the C-encoding, inserting an internal choice as follows:

Test_l(R) ≝ test l then ( Commit_l(R) ⊕ Undo_l(R) ) else Abort_l(R)

Undo_l(R) ≝ l̄⟨t⟩ | ȳ⟨x⟩.

Note that, in contrast to the cases of Commit and Abort, where the lock's value is set to f, in the case of Undo it is crucial to set it to t, because in this case the choice is still not resolved. In order to have a fresh copy of the branch in its initial state available after having undone an activity, we use replication of the Read-agents as the translation of branches: with Branch_l(R) = !Read_l(R), a choice expression is translated by:

D⟦ Σ_{j∈J} R_j ⟧ ≝ (νl)( l̄⟨t⟩ | Π_{j∈J} !Read_l(D⟦R_j⟧) ), where l is fresh.

In their D-translation, convergent branches of a choice term possibly engage in internal loops that may be used to restart a possibly committing branch from an initial state. This behavior might be considered problematic from an implementation point of view, but in the next subsection we shall see that the D-encoding is interesting despite its divergence.
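The commit/abort/undo protocol behind the two encodings can be illustrated operationally. The following Python simulation is our own toy model, not part of the paper: the shared lock is a boolean (t means the choice is unresolved), Commit sets it to f and releases the continuation, Abort resends the consumed message, and Undo (the D-encoding's addition) resends the message while leaving the lock at t. The names `run_choice`, `allow_undo`, and the step bound are illustrative assumptions.

```python
import random

def run_choice(messages, branches, allow_undo=False, rng=random.Random(0)):
    """Toy model of the lock-based choice encoding.

    messages: list of (channel, payload) pairs floating in the context.
    branches: list of (channel, label) pairs, one per branch of the choice.
    """
    lock = True                       # l<t>: the choice is not yet resolved
    pool = list(messages)
    committed = []
    for _ in range(100):              # bound the run; Undo may loop forever
        if not pool:
            break
        msg = pool.pop(rng.randrange(len(pool)))
        matching = [b for b in branches if b[0] == msg[0]]
        if not matching:
            continue                  # message on a foreign channel: ignored here
        _, label = rng.choice(matching)
        if lock:                      # Test_l reads t
            if allow_undo and rng.random() < 0.5:
                pool.append(msg)      # Undo_l: resend the message, lock stays t
            else:
                lock = False          # Commit_l: set the lock to f ...
                committed.append((label, msg[1]))  # ... release the continuation
        else:
            pool.append(msg)          # Abort_l: lock is f, resend the message
    return committed

# S = y2<z> | (y1(x).P1 + y2(x).P2): a single message matching the second branch
print(run_choice([("y2", "z")], [("y1", "P1"), ("y2", "P2")]))
```

With `allow_undo=True`, a run may cycle through Undo steps (or exhaust the step bound without committing), mirroring the divergence introduced by the D-encoding.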

4.3. An Example

We highlight the difference between the two choice encodings by comparing a particular source term with both of its translations. Let

S = ȳ₂⟨z⟩ | N, where N = R₁ + R₂ with Rᵢ = yᵢ(x).Pᵢ,

describe a binary choice in the presence of a single message matching the second of the branches, where P₁, P₂ are arbitrary terms in P (i.e., not containing choices or test-expressions). It turns out that both S ≉ C⟦S⟧ and S ≈ D⟦S⟧ hold, which imply that the C-encoding neither preserves nor reflects weak bisimulation (Lemmas 4.3.3 and 4.3.4).

Divergence-free Encoding

The transition systems of the above S and C⟦S⟧ can be depicted as follows. Since, for establishing bisimulations in the asynchronous π-calculus, input transitions are only considered in the context of arbitrary messages, we do not mention them explicitly in the pictures, but show just the internal and output transitions. The dotted lines representing simulation relations are to be read from left to right; when the lines are vertical, simulation holds in both directions. We get

[Transition-system diagrams omitted. DECODING CHOICE ENCODINGS, p. 19]

where (letting B_i = Read_l(C⟦R_i⟧)):

C⟦S⟧ = ȳ₂⟨z⟩ | C⟦N⟧
C⟦N⟧ = (νl)( l̄⟨t⟩ | B₁ | B₂ )
C    = (νl)( l̄⟨t⟩ | B₁ | Test_l(C⟦P₂⟧){z/x} )
C₂   = (νl)( 0 | B₁ | Commit_l(C⟦P₂⟧){z/x} )
     ≡ (νl)( l̄⟨f⟩ | B₁ ) | C⟦P₂⟧{z/x},

where (νl)( l̄⟨f⟩ | B₁ ) ≈ 0 (see Lemma 5.2.2).

We may phrase the intermediate state C as having partially committed: at that stage of commitment, it is already clear that one of the branches will eventually be chosen, but it is not yet clear whether it will be the activated branch (as we chose in the computation path with C₂) or the competing branch (still waiting at y₁ for some value), since there might still be a suitable message provided by the context which could activate the latter and afterward preempt the former. As a consequence, the state C does not directly correspond to any of the source terms with respect to weak bisimulation, which implies:

Fact 4.3.1. S ≉ C⟦S⟧.

On the other hand, the observation that S ⪰ C ⪰ P₂{z/x} with S ⟹ P₂{z/x} in the above example suggests coupled simulation as an appropriate notion of correctness for the C-encoding.
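Mutual weak simulation without bisimilarity is precisely the situation of the classic internal-vs-external choice example, and it can be checked mechanically. The sketch below is our own illustration, not the paper's development: a naive greatest-fixpoint checker for weak simulation between finite labelled transition systems, applied to toy systems in which the internal-choice process passes through a "partially committed" intermediate state, much like C above. All names and the encoding of LTSs as dictionaries are our assumptions.

```python
from itertools import product

TAU = "tau"

def tau_closure(lts, s):
    """All states reachable from s via tau-steps only."""
    seen, stack = {s}, [s]
    while stack:
        p = stack.pop()
        for a, q in lts.get(p, []):
            if a == TAU and q not in seen:
                seen.add(q)
                stack.append(q)
    return seen

def weak_steps(lts, s, a):
    """States reachable via tau* a tau* (just tau* if a is TAU)."""
    pre = tau_closure(lts, s)
    if a == TAU:
        return pre
    mid = {q for p in pre for b, q in lts.get(p, []) if b == a}
    return {r for q in mid for r in tau_closure(lts, q)}

def weakly_simulated_by(l1, l2, s0, t0):
    """True iff l2 (from t0) weakly simulates l1 (from s0)."""
    sts1 = set(l1) | {q for v in l1.values() for _, q in v} | {s0}
    sts2 = set(l2) | {q for v in l2.values() for _, q in v} | {t0}
    rel = set(product(sts1, sts2))   # start from everything, prune failures
    changed = True
    while changed:
        changed = False
        for p, q in list(rel):
            for a, p2 in l1.get(p, []):
                if not any((p2, q2) in rel for q2 in weak_steps(l2, q, a)):
                    rel.discard((p, q))
                    changed = True
                    break
    return (s0, t0) in rel

# tau.a + tau.b (commits internally first) versus a + b (stays uncommitted)
internal = {"i": [(TAU, "iA"), (TAU, "iB")],
            "iA": [("a", "i0")], "iB": [("b", "i0")]}
external = {"e": [("a", "e0"), ("b", "e0")]}
```

Here each system weakly simulates the other, yet they are not weakly bisimilar (after the internal tau, only one action remains available); coupled simulation is designed to accept exactly this kind of partially committed intermediate state.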

Divergent Encoding

The corresponding transition systems of S and D⟦S⟧, again omitting input transitions, are

[Transition-system diagrams omitted. NESTMANN AND PIERCE, p. 20]
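The price the D-encoding pays is visible as a τ-cycle in its transition system: Undo leads back to (a copy of) the initial state. A small cycle detector makes this concrete. The miniature LTSs below are our own abstraction of the two diagrams, not the paper's formal objects.

```python
TAU = "tau"

def has_divergence(lts, start):
    """True iff an infinite tau-path (a reachable tau-cycle) exists."""
    color = {}                           # "grey" = on the current DFS stack
    def dfs(s):
        color[s] = "grey"
        for a, q in lts.get(s, []):
            if a != TAU:
                continue
            if color.get(q) == "grey":   # back edge: tau-cycle found
                return True
            if q not in color and dfs(q):
                return True
        color[s] = "black"
        return False
    return dfs(start)

# C-encoding of the example: the tau-steps only move forward, towards commit
c_lts = {"CS": [(TAU, "C")], "C": [(TAU, "C2")]}
# D-encoding: from the test state, Undo leads back to the initial state
d_lts = {"DS": [(TAU, "D")], "D": [(TAU, "D2"), (TAU, "DS")]}
```

Running the detector on these abstractions distinguishes the divergence-free C-encoding from the divergent D-encoding.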

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

54Wednesday, October 14, 2009

Page 125: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

Encoding an Example Process (I)

The encoding D! " is then defined by modifying just the clause Testl(R) of theC-encoding by inserting an internal choice as follows:

Testl(R) =def test l then Commitl(R)#Undol(R) else Abortl(R)

Undol(R) =def l! t | y" x.

Note that in contrast to the cases of Commit#Abort, where the lock's value is set tof, in the case of Undo it is crucial to set it to t, because in this case the choice isstill not resolved. In order to have a fresh copy of the branch in initial stateavailable after having undone an activity, we use replication of the Read-agents asthe translation of branches: with Branchl(R)=! Readl(R) a choice expression istranslated by:

D ! :j # J Rj"=def (l ) \l! t } `j # J !Readl(D!R j")+ where l is fresh.

In their D-translation, convergent branches of a choice term possibly engage ininternal loops that may be used to restart a possibly committing branch from aninitial state. This behavior might be considered problematic from an implementa-tion point of view, but in the next subsection, we shall see that the D-encoding isinteresting despite its divergence.

4.3. An Example

We highlight the difference between the two choice encodings by comparing aparticular source term with both of its translations. Let

S=y2z |N where N=R1+R2 with Ri= y i (x) .Pi

describe a binary choice in the presence of a single message matching the secondof the branches, where P1 , P2 are arbitrary terms in P (i.e., not containing choicesor test-expressions). It turns out that both Sr3 C!S" and SrD!S" hold, whichimply that the C-encoding neither preserves nor reflects weak bisimulation(Lemmas 4.3.3 and 4.3.4).

Divergence-free Encoding

The transition systems of the above S and C!S" can be depicted as follows.Since, for establishing bisimulations in the asynchronous ?-calculus, input tran-sitions are only considered in the context of arbitrary messages, we do not mentionthem explicitly in the pictures, but just the internal and output transitions. Thedotted lines representing simulation relations are to be read from left to right; whenthe lines are vertical, simulation holds in both directions. We get

19DECODING CHOICE ENCODINGS

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCEwhere (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

54Wednesday, October 14, 2009

Page 126: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

Encoding an Example Process (I)

The encoding D! " is then defined by modifying just the clause Testl(R) of theC-encoding by inserting an internal choice as follows:

Testl(R) =def test l then Commitl(R)#Undol(R) else Abortl(R)

Undol(R) =def l! t | y" x.

Note that in contrast to the cases of Commit#Abort, where the lock's value is set tof, in the case of Undo it is crucial to set it to t, because in this case the choice isstill not resolved. In order to have a fresh copy of the branch in initial stateavailable after having undone an activity, we use replication of the Read-agents asthe translation of branches: with Branchl(R)=! Readl(R) a choice expression istranslated by:

D ! :j # J Rj"=def (l ) \l! t } `j # J !Readl(D!R j")+ where l is fresh.

In their D-translation, convergent branches of a choice term possibly engage ininternal loops that may be used to restart a possibly committing branch from aninitial state. This behavior might be considered problematic from an implementa-tion point of view, but in the next subsection, we shall see that the D-encoding isinteresting despite its divergence.

4.3. An Example

We highlight the difference between the two choice encodings by comparing aparticular source term with both of its translations. Let

S=y2z |N where N=R1+R2 with Ri= y i (x) .Pi

describe a binary choice in the presence of a single message matching the secondof the branches, where P1 , P2 are arbitrary terms in P (i.e., not containing choicesor test-expressions). It turns out that both Sr3 C!S" and SrD!S" hold, whichimply that the C-encoding neither preserves nor reflects weak bisimulation(Lemmas 4.3.3 and 4.3.4).

Divergence-free Encoding

The transition systems of the above S and C!S" can be depicted as follows.Since, for establishing bisimulations in the asynchronous ?-calculus, input tran-sitions are only considered in the context of arbitrary messages, we do not mentionthem explicitly in the pictures, but just the internal and output transitions. Thedotted lines representing simulation relations are to be read from left to right; whenthe lines are vertical, simulation holds in both directions. We get

19DECODING CHOICE ENCODINGS

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. Sr3 C!S".

On the other hand, the observation that SpCpP2[z"x] with SOP2[z"x] in theabove example suggests coupled simulation as an appropriate notion of correctnessfor the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D!S", again omitting inputtransitions, are

20 NESTMANN AND PIERCE

54Wednesday, October 14, 2009

Page 127: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Encoding an Example Process (II)

The encoding D! " is then defined by modifying just the clause Testl(R) of theC-encoding by inserting an internal choice as follows:

Testl(R) =def test l then Commitl(R)#Undol(R) else Abortl(R)

Undol(R) =def l! t | y" x.

Note that in contrast to the cases of Commit#Abort, where the lock's value is set tof, in the case of Undo it is crucial to set it to t, because in this case the choice isstill not resolved. In order to have a fresh copy of the branch in initial stateavailable after having undone an activity, we use replication of the Read-agents asthe translation of branches: with Branchl(R)=! Readl(R) a choice expression istranslated by:

D ! :j # J Rj"=def (l ) \l! t } `j # J !Readl(D!R j")+ where l is fresh.

In their D-translation, convergent branches of a choice term possibly engage ininternal loops that may be used to restart a possibly committing branch from aninitial state. This behavior might be considered problematic from an implementa-tion point of view, but in the next subsection, we shall see that the D-encoding isinteresting despite its divergence.

4.3. An Example

We highlight the difference between the two choice encodings by comparing aparticular source term with both of its translations. Let

S=y2z |N where N=R1+R2 with Ri= y i (x) .Pi

describe a binary choice in the presence of a single message matching the secondof the branches, where P1 , P2 are arbitrary terms in P (i.e., not containing choicesor test-expressions). It turns out that both Sr3 C!S" and SrD!S" hold, whichimply that the C-encoding neither preserves nor reflects weak bisimulation(Lemmas 4.3.3 and 4.3.4).

Divergence-free Encoding

The transition systems of the above S and C!S" can be depicted as follows.Since, for establishing bisimulations in the asynchronous ?-calculus, input tran-sitions are only considered in the context of arbitrary messages, we do not mentionthem explicitly in the pictures, but just the internal and output transitions. Thedotted lines representing simulation relations are to be read from left to right; whenthe lines are vertical, simulation holds in both directions. We get

19DECODING CHOICE ENCODINGS

where (letting Bi=Readl(C!Ri") ):

C!S"= y2z |C!N"

C!N" =(l )(l! t | B1 | B2)

C =(l )(l! t | B1 | Testl(C!RR")[z"x])

C2 =(l )(0 | B1 | Commitl(C!R2")[z"x])

# (l )(l! f | B1)

r0 (see Lemma 5.2.2)

|C!P2"[z"x].

We may phrase the intermediate state C as having partially committed: at that stageof commitment, it is already clear that one of the branches will eventually bechosen, but it is not yet clear if it will be the activated branch (as we chose in thecomputation path with C2) or if it will be the competing branch (still waiting at y1for some value), since there might still be a suitable message provided by the con-text which could activate the latter and afterward preempt the former. As a conse-quence, the state C does not directly correspond to any of the source terms withrespect to weak bisimulation, which implies:

Fact 4.3.1. S ≉ C⟦S⟧.

On the other hand, the observation that the intermediate state C lies between S and P₂[z/x] with respect to weak similarity in the above example, although S and P₂[z/x] are themselves not equivalent, suggests coupled simulation as an appropriate notion of correctness for the C-encoding.

Divergent Encoding

The corresponding transition systems of S and D⟦S⟧, again omitting input transitions, are

20 NESTMANN AND PIERCE


where (letting Bᵢ = !Read_l(D⟦Rᵢ⟧)):

D⟦S⟧ = ȳ₂z | D⟦N⟧

D⟦N⟧ = (l)( l̄t | B₁ | B₂ )

D = (l)( l̄t | B₁ | B₂ | Test_l(D⟦R₂⟧)[z/x] )

D′ = (l)( 0 | B₁ | B₂ | (Commit_l(D⟦R₂⟧) ⊕ Undo_l(D⟦R₂⟧))[z/x] )

   = (l)( 0 | B₁ | B₂ | ((D⟦P₂⟧[z/x] | l̄f) ⊕ (ȳ₂z | l̄t)) )

D₂ ≡ (l)( l̄f | B₁ | B₂ ) | D⟦P₂⟧[z/x],

where (l)( l̄f | B₁ | B₂ ) ≈ 0 (see Lemma 5.7.1), and ⊕ marks the internal alternative between committing and undoing.

The internal undo-transition from D′ back to D⟦S⟧ is essential in proving that the intermediate states D and D′ are actually weakly bisimilar to S, since only by internally looping back to the initial state can they simulate all of S's behavior; that possibility is lacking in the C-encoding.

Proposition 4.3.2. S ≈ D⟦S⟧.

Note another aspect of the above computation paths of C⟦S⟧ and D⟦S⟧: whereas the step from C⟦S⟧ to C uses up the component B₂, its correspondent survives in the step from D⟦S⟧ to D since, there, it is a replication.
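The contrast between the two computation paths can be made concrete by viewing them as small state graphs and checking for internal loops. The graphs below are our own hand-coded abstraction of the diagrams above (the state names and the helper `has_cycle` are illustrative assumptions, not the paper's definitions):

```python
# Internal (tau) steps of the two encodings of the example S, abstracted
# from the transition diagrams above; only the committing branch is modeled.
C_steps = {
    "C[S]": ["C"],    # consume the message on y2 and grab the lock
    "C":    ["C2"],   # commit the activated branch
    "C2":   [],       # committed: behaves like P2[z/x]
}
D_steps = {
    "D[S]": ["D"],
    "D":    ["D'"],
    "D'":   ["D2", "D[S]"],  # commit, or undo back to the initial state
    "D2":   [],
}

def has_cycle(steps, start):
    """Detect an internal loop reachable from `start` (simple-path DFS)."""
    stack = [(start, {start})]
    while stack:
        node, on_path = stack.pop()
        for nxt in steps[node]:
            if nxt in on_path:
                return True
            stack.append((nxt, on_path | {nxt}))
    return False

print(has_cycle(C_steps, "C[S]"))  # False: this C-path has no internal loop
print(has_cycle(D_steps, "D[S]"))  # True: the undo-transition yields divergence
```

Running this prints `False` for the C-graph and `True` for the D-graph, mirroring the undo-loop from D′ back to D⟦S⟧.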

Full Abstraction?

We can now use S to prove that the C-encoding is not fully abstract with respect to weak bisimulation. Note that both C⟦·⟧ and D⟦·⟧, extended in the obvious way for test-clauses, act as an identity on terms in T, which implies that C⟦C⟦S⟧⟧ = C⟦S⟧ and C⟦D⟦S⟧⟧ = D⟦S⟧. Weak bisimulation is not reflected by the C-encoding.

Lemma 4.3.3. C⟦S₁⟧ ≈ C⟦S₂⟧ does not imply S₁ ≈ S₂.

Proof. Let S₁ = S be the example above and S₂ = C⟦S⟧ its translation. By definition, we have C⟦S₂⟧ = C⟦C⟦S⟧⟧ = C⟦S⟧ = C⟦S₁⟧, but S₂ ≉ S. ∎

Weak bisimulation is also not preserved by the C-encoding.

Lemma 4.3.4. S₁ ≈ S₂ does not imply C⟦S₁⟧ ≈ C⟦S₂⟧.

Proof. Let S₁ = S and S₂ = D⟦S⟧, so that S₁ ≈ S₂. Suppose, for a contradiction, that C⟦·⟧ preserved ≈; then, by definition, we would have C⟦S⟧ = C⟦S₁⟧ ≈ C⟦S₂⟧ = C⟦D⟦S⟧⟧ = D⟦S⟧ ≈ S, which contradicts Fact 4.3.1. ∎

4.4. Expanding the Encodings

In Section 4.1, we introduced the intermediate language P_test for the convenient presentation of choice encodings. We justify the use of P_test by translating it into the language P.



Page 128: Encodings into Asynchronous Pi Calculus

Expanded Encodings

Obviously, a language with Boolean messages and test-expressions as primitives will be much clearer than the B-translations, when used in order to write down an encoding function. Finally, we are ready to give the full encodings Ĉ⟦·⟧ and D̂⟦·⟧ by composing the earlier encodings into the intermediate target language T with the expanding encoding B⟦·⟧,

Ĉ⟦·⟧ =def (B ∘ C)⟦·⟧

D̂⟦·⟧ =def (B ∘ D)⟦·⟧,

which are indeed encodings of PΣ into its choice-free fragment P.

We can "safely" use primitive test-forms within the encoding functions since, as we will show in Section 5.5, the following property holds:

for all S ∈ S: for all T with C⟦S⟧ ⟶* T:  T ≲ B⟦T⟧;

i.e., the encoding B⟦·⟧ "expands" certain T-terms in the sense of Definition 2.3.5: T and B⟦T⟧ are weakly bisimilar, but the latter uses more τ-steps. The proof of this property relies on the fact that any use of Booleans within the encoding C⟦·⟧ is restricted for an outside observer of a choice expression. The same property holds of the encoding D⟦·⟧.

4.5. Discussion

As the distinguishing example in Section 4.3 indicates, for reasoning about the choice encodings' behavioral correctness, we are able to compare source terms S and their translations ⟦S⟧ directly; the visible labels of source and target transitions are the same. Furthermore, by the arguments of the previous section, both encodings can be regarded as endomorphic mappings where the target T is a fragment of the source S, at least up to expansion of translations.

With respect to operational correspondence, the choice encodings represent the class of nonprompt translations where, as in the motivation of the soundness property S, committing steps of translations are preceded by administrative steps. Those preadministrative steps cannot simply be defined away (by using administrative normal forms) since, in general, they are not confluent: imagine a process containing two choices that compete for a single message; both choices could evolve by consuming the message, but each would preempt the other.


essentially the “classical” encoding of booleans
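For reference, the scheme alluded to here is, in essence, Milner's well-known Boolean encoding in the π-calculus; the following formulation is our own rendering, not necessarily the paper's exact B-translation. A Boolean located at b is a process that receives two channels and signals on the one matching its value:

```latex
\mathrm{True}(b) \;\stackrel{\mathrm{def}}{=}\; b(t,f).\,\bar{t}
\qquad
\mathrm{False}(b) \;\stackrel{\mathrm{def}}{=}\; b(t,f).\,\bar{f}
```

A test-expression then interrogates the value by supplying two fresh result channels and waiting to see which one is signalled:

```latex
\mathbf{test}\ b\ \mathbf{then}\ P\ \mathbf{else}\ Q
\;\stackrel{\mathrm{def}}{=}\;
(\nu t\,f)\,\bigl(\,\bar{b}\langle t,f\rangle \;\mid\; t.P \;\mid\; f.Q\,\bigr)
```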


Page 129: Encodings into Asynchronous Pi Calculus

Correctness Proof


Page 130: Encodings into Asynchronous Pi Calculus

Correctness Proof Strategy (I)

Provide a notational framework that allows us to decode any derivative of an encoded term to "the" corresponding state in the source.

which is insensitive to the temporary buffering of messages. An example process term (Section 4.3) exhibits the essential difference between the two choice encodings and prepares the ground for a discussion of possible correctness statements.

In the above-mentioned protocols, mutual exclusion among the branches of a choice is implemented by a concurrent race for a shared lock. Since the lock may be most succinctly expressed by means of some form of Boolean values, we introduce (in Section 4) an intermediate language that provides a convenient primitive operator for testing Boolean-valued messages. Section 4.4 gives a careful treatment of the expansion of those intermediate terms.

4.1. The Setting

An encoding is a function from some source syntax into some target syntax. This subsection introduces the language PΣ that represents the source syntax. In addition, we refine the setting by an intermediate language P_test to be used instead of the intended target language P.

In diagrams like the above, dotted arrows are just placeholders for encoding functions.

Source language. We introduce choice as a finitely indexed operator on input terms. Its behavior is specified by the operational semantics rule in Table 3, which formally describes that each branch in a choice may be selected and consume an external message, preempting all of its competing branches. The set PΣ of processes with input-guarded choice is generated by adding a clause for Σ-expressions to the grammar of P,

P ::= ··· | Σ_{j∈J} R_j ,

where J is some finite indexing set. We also use the abbreviation R₁ + R₂ to denote binary input-guarded choice. The labeled transition semantics of PΣ is the relation generated by the rules in Table 1 and rule C-INP in Table 3.

TABLE 3. Operational Semantics for Σ-expressions

C-INP:   Σ_{j∈J} y_j(x).P_j  —y_k z→  P_k[z/x]   if k ∈ J
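Rule C-INP can be read operationally as: any branch whose guard matches the message's channel may fire, with the received name substituted into its continuation, preempting all competitors. A minimal sketch of this reading (the branch representation and the name `c_inp` are our own assumptions):

```python
def c_inp(choice, chan, val):
    """All possible C-INP results for a choice, given a message chan<val>.

    `choice` is a list of branches (guard_channel, continuation), where each
    continuation maps the received value to the resulting process; every
    matching branch yields one possible transition, discarding the others.
    """
    return [cont(val) for (guard, cont) in choice if guard == chan]

# Binary choice y1(x).P1 + y2(x).P2, with the continuations kept abstract:
choice = [("y1", lambda v: ("P1", v)), ("y2", lambda v: ("P2", v))]
print(c_inp(choice, "y2", "z"))  # [('P2', 'z')]: only the second branch fires
print(c_inp(choice, "w", "z"))   # []: no branch matches, no transition
```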



Page 131: Encodings into Asynchronous Pi Calculus


Page 132: Encodings into Asynchronous Pi Calculus

Correctness Proof Strategy (II)


For the encoding with partial commitments, we have to denote two corresponding states!


Page 133: Encodings into Asynchronous Pi Calculus

Factorization


Page 134: Encodings into Asynchronous Pi Calculus

The Decoding Problem

Once a source term is translated into the target language, its source-level structure is often completely lost.


Page 135: Encodings into Asynchronous Pi Calculus


additional primitives in our choice encodings: we only use the Booleans on restricted channels, so they never become visible. The formal justification later on uses this fact explicitly (cf. Section 5.5).

4.2. Two Choice Encodings

This subsection contains two simple encodings of S into its choice-free fragment T. Both encodings C⟦·⟧, D⟦·⟧ : S → T map terms of the source language S inductively into the target language T. Since the encodings coincide on all constructors but choice, we use a common homomorphic scheme of definition, where ⟦·⟧ may denote either C⟦·⟧ or D⟦·⟧:

⟦(x) P⟧ =def (x)⟦P⟧        ⟦P₁ | P₂⟧ =def ⟦P₁⟧ | ⟦P₂⟧

⟦ȳz⟧ =def ȳz        ⟦0⟧ =def 0

⟦y(x).P⟧ =def y(x).⟦P⟧        ⟦!R⟧ =def !⟦R⟧.
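As a programming analogy, the homomorphic scheme is just a structural recursion that is parametric in the one clause where the encodings differ. The tuple-based AST below is our own illustrative representation, not the paper's:

```python
def encode(term, enc_sum):
    """Homomorphic translation scheme; `enc_sum` supplies the choice clause."""
    tag = term[0]
    if tag == "nu":    # restriction (x)P
        return ("nu", term[1], encode(term[2], enc_sum))
    if tag == "par":   # parallel composition P1 | P2
        return ("par", encode(term[1], enc_sum), encode(term[2], enc_sum))
    if tag == "inp":   # input prefix y(x).P
        return ("inp", term[1], term[2], encode(term[3], enc_sum))
    if tag == "bang":  # replication !R
        return ("bang", encode(term[1], enc_sum))
    if tag == "sum":   # choice: the only clause where C[.] and D[.] differ
        return enc_sum(term[1])
    return term        # outputs y<z> and 0 are translated to themselves

# On choice-free terms every such encoding acts as the identity:
t = ("par", ("out", "y", "z"), ("inp", "y", "x", ("nil",)))
print(encode(t, enc_sum=lambda branches: None) == t)  # True
```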

This scheme will also be reused later on for other encoding functions: if an encoding X⟦·⟧ acts homomorphically on each constructor of P, then it is defined according to the scheme with ⟦·⟧ replaced by X⟦·⟧. A source-level choice term is implemented by a particular target-level structure:

for each choice expression, the translation

⟦Σ_{j∈J} R_j⟧ =def (l)( l̄⟨t⟩ | Π_{j∈J} Branch_l(⟦R_j⟧) )

installs a local lock (a message on l that carries a Boolean name) that is only accessible to the parallel composition of its branches, which use the lock for running a mutual exclusion protocol, as specified by the semantic rule C-INP. The protocol crucially depends on the invariant that at any time there is at most one Boolean message available on l. If we manage to maintain this invariant, i.e., if it holds for all derivatives, then we are guaranteed that at any time the value of the lock is uniquely determined, which allows us to regard possible subsequent messages on l as representing the behavior of a "determinate" lock that is accessible via l. The determinacy of the lock limits the nondeterminacy of the encoding solely to the concurrent race of the translations of the branches. Initially the lock carries t, so it represents the fact that the choice is not yet resolved; consequently, a committed choice will be recognizable by the lock carrying f.

Two slightly different ways of implementing the mutual exclusion protocol are given in the following subsections, differing only with respect to the possibility of undoing activities of branches, thus yielding two different definitions for the implementation Branch_l.
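The lock protocol can be sketched as a sequential simulation: the lock channel l holds at most one Boolean, initially t; a branch that has consumed a message reads the lock, commits if it reads t, and otherwise resends the message. Everything below (the names and the sequential scheduling) is an illustrative assumption, not the paper's Branch_l definition:

```python
def run_choice(guards, messages):
    """Sequentially simulate one choice's mutual-exclusion protocol.

    guards: channels on which the branches of the choice listen;
    messages: (channel, value) pairs from the context, processed in order.
    """
    lock = [True]      # the restricted channel l, carrying at most one Boolean
    committed, resent = [], []
    for chan, val in messages:
        if chan not in guards:
            resent.append((chan, val))   # no branch of this choice listens here
            continue
        b = lock.pop()                   # Test_l: consume the lock's value
        if b:                            # t: the choice is not yet resolved
            committed.append((chan, val))
            lock.append(False)           # re-emit l<f>: the choice is committed
        else:                            # f: another branch already committed
            resent.append((chan, val))   # resend the consumed message
            lock.append(False)
    return committed, resent

# One message for each branch of y1(x).P1 + y2(x).P2, y2's arriving first:
print(run_choice({"y1", "y2"}, [("y2", "z"), ("y1", "w")]))
# ([('y2', 'z')], [('y1', 'w')]): exactly one branch commits
```

The sequential scheduling hides the concurrent race but makes the invariant visible: the list `lock` never holds more than one Boolean, so the choice's outcome is decided by whichever branch pops `True` first.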



Page 136: Encodings into Asynchronous Pi Calculus

To preserve (and track) this source-level structure under target-level computation, we introduce an annotated source language.


In the first author's PhD thesis [Nes96], the discussion of appropriate notions of correctness, as well as a discussion of possible (correct) and impossible (incorrect) variants of the proposed choice encodings, is carried out in considerably more detail.

5. CORRECTNESS PROOF BY DECODING

In this section, we prove the correctness of the C-encoding with respect to coupled simulation equivalence (Sections 5.2–5.6) and sketch the corresponding proof for the D-encoding with respect to weak bisimulation equivalence (Section 5.7). The main contribution of this section is a proof notation and technique, with decoding functions, which we use to exhibit that source terms and their translations are in fact equivalent. We also include a proof that the Ĉ-encoding is divergence-free (Section 5.8).

5.1. Overview

Since the choice encodings are not prompt, we have to explicitly deal with the behavior of intermediate states and relate them to source terms with equivalent behavior. Moreover, in the case of the Ĉ-encoding, an intermediate state may be partially committed, so we may have to relate it to two different source terms. By definition, partial commitments are absent in source terms; thus, a partially committed derivative of a translation can only be related to source terms which represent either its reset or its completion.

Technically, we are going to build the coupled simulation constructively as a pair of decoding functions from target terms to source terms. However, intermediate states are target terms that have lost the structure of being literal translations of source terms; thus, it is impossible to denote the source term from which an intermediate target term derives without some knowledge of its derivation history.

We therefore introduce annotated source terms (Section 5.2) as abbreviations for derivatives of their translations. An annotated term shows its source-level choice structure while providing information about which target state its choices inhabit, using a representation of its derivation history that is constructed from an operational semantics of annotated choice. We formally introduce the language A of annotated source terms and use it as follows for the investigation of the correctness of the Ĉ-encoding:

Factorization (Section 5.3). Annotated source terms represent abbreviations of target terms. We define an annotation encoding A mapping source terms to abbreviations and a flattening encoding F expanding abbreviations to target terms.

24 NESTMANN AND PIERCE

The Decoding ProblemOnce a source term is translated into the target language,

its source-level structure is often completely lost.

additional primitives in our choice encodings: we only use the Booleans on restric-ted channels, so they never become visible. The formal justification later on usesthis fact explicitly (cf. Section 5.5).

4.2. Two Choice Encodings

This subsection contains two simple encodings of S into its choice-free fragmentT. Both encodings C! ", D! ": S!T map terms of the source language S induc-tively into the target language T. Since the encodings coincide on all constructorsbut choice, we use a common homomorphic scheme of definition, where ! " maydenote either C! " or D! ":

!(x) P" =def (x)!P" !P1 | P2" =def !P1" | !P2"

!y! z" =def y! z !0" =def 0

!y(x) .P" =def y(x) . !P" !! R" =def !!R".

This scheme will also be reused later on for other encoding functions: if an encodingX! " acts homomorphically on each constructor of P, then it is defined according tothe scheme with ! " replaced by X! ".A source-level choice term is implemented by a particular target-level structure:

for each choice expression, the translation

!:j # J Rj"=def (l) \l" t | `j # J Branchl( !Rj")+installs a local lock!!a message on l that carries a Boolean name!!that is onlyaccessible for the parallel composition of its branches, which use the lock forrunning a mutual exclusion protocol, as specified by the semantic rule C-INP. Theprotocol crucially depends on the invariant that at any time there is at most oneBoolean message available on l. If we manage to maintain this invariant, i.e., if itholds for-all derivatives, then we are guaranteed that at any time the value of thelock is uniquely determined, which allows us to regard possible subsequentmessages on l as representing the behavior of a ``determinate'' lock that is accessiblevia l. The determinacy of the lock limits the nondeterminacy of the encoding solelyto the concurrent race of the translations of the branches. Initially the lock carriest, so it represents the fact that the choice is not yet resolved; consequently, acommitted choice will be recognizable by the lock carrying f.Two slightly different ways of implementing the mutual exclusion protocol are

given in the following subsections, differing only with respect to the possibility ofundoing activities of branches, thus yielding two different definitions for theimplementation Branchl .

17DECODING CHOICE ENCODINGS

61Wednesday, October 14, 2009

Page 138: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

To preserve—and track—this source-level structure under target-level computation,

we introduce an annotated source language.

The Decoding ProblemOnce a source term is translated into the target language,

its source-level structure is often completely lost.

additional primitives in our choice encodings: we only use the Booleans on restric-ted channels, so they never become visible. The formal justification later on usesthis fact explicitly (cf. Section 5.5).

4.2. Two Choice Encodings

This subsection contains two simple encodings of S into its choice-free fragmentT. Both encodings C! ", D! ": S!T map terms of the source language S induc-tively into the target language T. Since the encodings coincide on all constructorsbut choice, we use a common homomorphic scheme of definition, where ! " maydenote either C! " or D! ":

!(x) P" =def (x)!P" !P1 | P2" =def !P1" | !P2"

!y! z" =def y! z !0" =def 0

!y(x) .P" =def y(x) . !P" !! R" =def !!R".

This scheme will also be reused later on for other encoding functions: if an encodingX! " acts homomorphically on each constructor of P, then it is defined according tothe scheme with ! " replaced by X! ".A source-level choice term is implemented by a particular target-level structure:

for each choice expression, the translation

!:j # J Rj"=def (l) \l" t | `j # J Branchl( !Rj")+installs a local lock!!a message on l that carries a Boolean name!!that is onlyaccessible for the parallel composition of its branches, which use the lock forrunning a mutual exclusion protocol, as specified by the semantic rule C-INP. Theprotocol crucially depends on the invariant that at any time there is at most oneBoolean message available on l. If we manage to maintain this invariant, i.e., if itholds for-all derivatives, then we are guaranteed that at any time the value of thelock is uniquely determined, which allows us to regard possible subsequentmessages on l as representing the behavior of a ``determinate'' lock that is accessiblevia l. The determinacy of the lock limits the nondeterminacy of the encoding solelyto the concurrent race of the translations of the branches. Initially the lock carriest, so it represents the fact that the choice is not yet resolved; consequently, acommitted choice will be recognizable by the lock carrying f.Two slightly different ways of implementing the mutual exclusion protocol are

given in the following subsections, differing only with respect to the possibility ofundoing activities of branches, thus yielding two different definitions for theimplementation Branchl .
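To illustrate the intended runtime behavior of the lock protocol, the following sketch simulates it with threads and queues. The modeling is my own (channels as queues, branches as threads); it mirrors the protocol informally and is not the paper's formal Branch_l definition.

```python
# Illustrative simulation of the lock protocol (my own modeling:
# channels become queues, branches become threads). The lock l is a
# one-slot channel that always carries exactly one Boolean; t means
# "the choice is not yet resolved", f means "already committed".
import queue
import threading

lock_chan = queue.Queue(maxsize=1)
lock_chan.put(True)                      # initially the lock carries t

results = []                             # outcome of every branch
resent = queue.Queue()                   # messages reinjected by losers

def branch(k, value_chan):
    v = value_chan.get()                 # Read: optimistically consume
    b = lock_chan.get()                  # Test: acquire the lock ...
    lock_chan.put(False)                 # ... and always put back f
    if b:                                # lock carried t: Commit
        results.append(("commit", k, v))
    else:                                # lock carried f: Abort
        resent.put(v)                    # release the consumed message
        results.append(("abort", k))

chans = [queue.Queue() for _ in range(3)]
threads = [threading.Thread(target=branch, args=(k, c))
           for k, c in enumerate(chans)]
for th in threads:
    th.start()
for c in chans:                          # the environment sends on all y_k
    c.put("msg")
for th in threads:
    th.join()

# Invariant: at most one Boolean message on l at any time, hence
# exactly one branch observes t and commits; all others abort.
```

Which branch commits is left to the concurrent race, but the invariant guarantees that exactly one does, and that every losing branch reinjects its consumed message.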

17 DECODING CHOICE ENCODINGS

61 Wednesday, October 14, 2009

Page 139: Encodings into Asynchronous Pi Calculus

The Decoding Problem

Once a source term is translated into the target language, its source-level structure is often completely lost. To preserve (and track) this source-level structure under target-level computation, we introduce an annotated source language. Consequently, the decoding is conveniently definable by exploiting the annotation structure.


Page 140: Encodings into Asynchronous Pi Calculus


Decoding (Section 5.4). Annotated source terms deal with partial commitments explicitly. We define two decoding functions U of annotated terms back into source terms, where U! resets and U> completes partial commitments.

Expansion (Section 5.5). Since the target language T is a language extended with Booleans, we show that their use in our setting is rather well behaved, according to the expanding encoding B.

The factorization, the decodings, and the expansion enjoy several nice properties:

1. F is a strong bisimulation between abbreviations and Boolean target terms.

2. (U!, U>) is a coupled simulation between abbreviations and source terms.

3. B, or rather a variant of it, is an expansion for Booleans in target terms.

These can be combined to provide a coupled simulation on S_P. The observation that every source term S and its translation C⟦S⟧ are related by this relation concludes the proof of coupled-simulation correctness of the C⟦·⟧-encoding (Section 5.6).

Simplifications due to homomorphic encodings and decodings. Many of the proofs in this section have in common that they exhibit particular transitions of terms by constructing appropriate inference trees either

- from the inductive structure of (annotated) terms, or

- by simply replacing some leaves in the inference trees of their encodings or decodings.

Since the A, F, U!, U>, and B functions are each defined homomorphically on every constructor of P according to the scheme in Section 4, there is a strong syntactic correspondence between terms and their respective translations and, as a consequence, there is also a strong correspondence between transition inference trees. More precisely, since in transitions involving choice

- there is at most one application of a choice rule, and

- an application of the choice rule always represents a leaf in the inference tree,

it suffices for all proofs to regard choice terms in isolation.

25 DECODING CHOICE ENCODINGS



Page 142: Encodings into Asynchronous Pi Calculus

Intermediate Language (I)

When looking for simulations between terms and their translations (as in the proofs of Proposition 5.3.4 and Lemmas 5.3.1, 5.4.2, and 5.4.3), a transition which does not involve choice rules is trivially simulated via the identical inference tree by the translated term. A transition which emanates from a choice term inside a term context, and which may be simulated by the translated choice term in isolation, will also be derivable by the same inference tree, except for the leaf being adapted to the simulating transition. Note that possible side conditions are not critical if the simulation has been successful for the choice term itself.

When looking for internal transitions of a particular term, generated by occurrences of choice (as in the proofs of 5.4.7 and 5.4.6), it suffices to use choices in isolation, since τ-steps are passed through arbitrary contexts without condition.

5.2. Annotated Choice

This subsection introduces an annotated variant of choice, which provides abbreviations for all derivatives of C-translations. Its shape reveals the high-level structure, but it exhibits low-level operational behavior. The attached annotations record essential information about the low-level derivation history.

According to the definition in Section 4, the computations of individual branches of a choice will pass through basically three different states: Read, Test, and one of Commit or Abort as the final state. The state of each branch is completely determined by the result of two inputs:

- the value (if any) which is currently carried by the channel, and

- the Boolean (if any) which has been assigned by acquiring the lock.

For the representation of that information for a J-indexed choice, we use

- a partial function v: J ⇀ V, mapping choice indices to values, and

- a possibly empty set B ⊆ J of choice indices such that B ∩ dom(v) = ∅.

In the context of a particular J-indexed choice with branches R_j = y_j(x).P_j for each j ∈ J, the definedness of v_j (the value v(j) held by branch j) means that a value has been read from the environment, but has not yet either led to a commitment of branch j or been reinjected into the environment. The set B records those branches which have already accessed the Boolean lock. An empty set B means that none of the branches has yet been chosen, i.e., that the choice has not (yet) committed. Then, with the abbreviation V := dom(v), the state of each individual branch can be retrieved from the annotations v and B. Branch k ∈ J is in

- Read-state if k ∈ J∖(V∪B),

- Test-state if k ∈ V,

- Commit/Abort-state if k ∈ B.

The state of the whole choice is completely determined by the states of its branches.
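As an illustrative sketch (representing v as a dict and B as a set, which is my own encoding of the annotations, not the paper's), the branch states can be retrieved as follows:

```python
# Retrieve branch states from the annotations (v, B) of a J-indexed
# choice, following the classification in the text. The data encoding
# (dict / set) is illustrative only.
def branch_state(k, J, v, B):
    """v: partial map from indices to values; B: indices that have
    already read the Boolean lock; requires B and dom(v) disjoint."""
    V = set(v)                      # V := dom(v)
    assert not (B & V), "invariant: B and dom(v) must be disjoint"
    if k in B:
        return "Commit/Abort"
    if k in V:
        return "Test"
    if k in J - (V | B):
        return "Read"
    raise ValueError(f"{k} is not an index of this choice")

J = {1, 2, 3}
v = {2: "z"}                        # branch 2 has consumed value z
B = {3}                             # branch 3 has already read the lock
states = {k: branch_state(k, J, v, B) for k in J}
```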

26 NESTMANN AND PIERCE


TABLE 5
Early Transition Semantics for Annotated Choice

READ:    ⟨ Σ_{j∈J} R_j ⟩[v; B]   --y_k z-->   ⟨ Σ_{j∈J} R_j ⟩[v+(k↦z); B]               if k ∈ J∖(V∪B)

COMMIT:  ⟨ Σ_{j∈J} R_j ⟩[v; ∅]   --τ-->   ⟨ Σ_{j∈J} R_j ⟩[v−k; {k}] | P_k[v_k/x]        if k ∈ V

ABORT:   ⟨ Σ_{j∈J} R_j ⟩[v; B]   --τ-->   ⟨ Σ_{j∈J} R_j ⟩[v−k; B+k] | y_k⟨v_k⟩          if k ∈ V and B ≠ ∅

Definition 5.2.1 (Annotated choice). Let J be a set of indices. Let R_j = y_j(x).P_j be input prefixes for j ∈ J. Let v: J ⇀ V and B ∩ V = ∅, where V := dom(v). Then

    Σ_{j∈J} R_j        and        ⟨ Σ_{j∈J} R_j ⟩[v; B]

are referred to as bare and annotated choice, respectively.

Annotated choice is given the operational semantics in Table 5. The dynamics of annotated choice mimic precisely the behavior of the intended low-level process. READ allows a branch k in Read-state (k ∈ J∖(V∪B)) to optimistically consume a message.3 If the choice is not yet resolved (B = ∅), COMMIT specifies that an arbitrary branch k in Test-state (k ∈ V) can immediately evolve into its Commit-state, i.e., trigger its continuation process P_k. After the choice is resolved (B ≠ ∅), ABORT allows any branch k in Test-state (k ∈ V) to evolve into its Abort-state and release its consumed message. Intuitively, by reading the lock, a branch immediately leaves the choice system and exits. Therefore, annotated choice only contains branches in either Read- or Test-state.

We distinguish three cases for choice constructors that are important enough to give them names: initial for V = ∅ = B, partial for V ≠ ∅ and B = ∅, and committed for B ≠ ∅. Note that both initial and partial choice contain all branches, whereas committed choice never does; it will even become empty once all branches have reached their final state (B = J).

Committed choice exhibits a particularly interesting property: its branches in Test-state have already consumed a message, which they will return after recognizing, by internally testing the lock, that the choice is already committed; its branches in Read-state are still waiting for values to be consumed and (since the choice is resolved and the lock carries f) immediately resent after an internal step. Processes with such receive-and-resend behavior are weakly bisimilar to 0 and were called

27 DECODING CHOICE ENCODINGS

3 Compared to its late counterpart, the early instantiation scheme may be more intuitive for showing that a particular value has entered the choice system. Furthermore, we do not have to deal with α-conversion of homonymous bound names in different branches. Nevertheless, the further development in this paper does not depend on this decision. Note also that early and late bisimulation coincide in our setting (cf. Section 2.3).
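The bookkeeping of the three rules of Table 5 can be mirrored as pure functions on the annotation pair (v, B). This is an illustrative sketch only: the data encoding (dict / frozenset) is my own, and continuations and resent messages are returned as descriptive strings rather than processes.

```python
# READ / COMMIT / ABORT from Table 5 as pure functions on (v, B);
# v is a dict (the partial function v: J -> V), B a frozenset of indices.
def read(v, B, J, k, z):
    """READ: branch k in Read-state consumes value z from y_k."""
    V = set(v)
    assert k in J - (V | B), "k must be in Read-state"
    v2 = dict(v)
    v2[k] = z                             # v + (k |-> z)
    return v2, B

def commit(v, B, k):
    """COMMIT: branch k in Test-state commits while B is empty."""
    assert k in v and not B, "needs k in V and B = empty"
    v2 = dict(v)
    z = v2.pop(k)                         # v - k
    return v2, frozenset({k}), f"run continuation P_{k} with x = {z!r}"

def abort(v, B, k):
    """ABORT: branch k in Test-state aborts once B is nonempty."""
    assert k in v and B, "needs k in V and B nonempty"
    v2 = dict(v)
    z = v2.pop(k)                         # v - k
    return v2, frozenset(B) | {k}, f"resend {z!r} on y_{k}"
```

Starting from the initial annotation (∅, ∅), two READs followed by one COMMIT and one ABORT replay a complete run of a two-branch choice.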


Page 143: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Intermediate Language (I)

When looking for simulations between terms and their translations (as in theproofs of Proposition 5.3.4 and Lemmas 5.3.1, 5.4.2, and 5.4.3), a transition whichdoes not involve choice rules is trivially simulated via the identical inference tree bythe translated term. A transition which emanates from a choice term inside a termcontext, and which may be simulated by the translated choice term in isolation, willalso be derivable by the same inference tree, except for the leaf being adapted to thesimulating transition. Note that possible side-conditions are not critical if thesimulation has been successful for the choice term itself.When (as in the proofs 5.4.7 and 5.4.6) looking for internal transitions of a

particular term, generated by occurrences of choice, it suffices to use choices inisolation since {-steps are passed through arbitrary contexts without condition.

5.2. Annotated Choice

This subsection introduces an annotated variant of choice, which providesabbreviations for all derivatives of C-translations. Its shape reveals the high-levelstructure, but it exhibits low-level operational behavior. The attached annotationsrecord essential information about the low-level derivation history.According to the definition in Section 4, the computations of individual branches

of a choice will pass through basically three different states: Read, Test, and one ofCommit or Abort as the final state. The state of each branch is completely deter-mined by the result of two inputs:

v the value (if any) which is currently carried by the channel, and

v the Boolean (if any) which has been assigned by acquiring the lock.

For the representation of that information for a J-indexed choice, we use

v a partial function v: J(V, mapping choice indices to values, and

v a possibly empty set B!J of choice indices such that B&dom(v)=<.

In the context of a particular J-indexed choice with branches Rj= yj (x) .Pj foreach j # J, the definedness of vj (the value v( j) held by branch j) means that a valuehas been read from the environment, but has not yet either led to a commitmentof branch j or been reinjected into the environment. The set B records thosebranches which have already accessed the Boolean lock. An empty set B means thatnone of the branches has yet been chosen, i.e., that the choice has not (yet) com-mitted. Then, with the abbreviation V :=dom(v), the state of each individualbranch can be retrieved from the annotations v and B. Branch k # J is in

v Read-state if k # J"(V_B)

v Test-state if k #V

v Commit!Abort-state if k # B.

The state of the whole choice is completely determined by the states of itsbranches.

26 NESTMANN AND PIERCE

When looking for simulations between terms and their translations (as in theproofs of Proposition 5.3.4 and Lemmas 5.3.1, 5.4.2, and 5.4.3), a transition whichdoes not involve choice rules is trivially simulated via the identical inference tree bythe translated term. A transition which emanates from a choice term inside a termcontext, and which may be simulated by the translated choice term in isolation, willalso be derivable by the same inference tree, except for the leaf being adapted to thesimulating transition. Note that possible side-conditions are not critical if thesimulation has been successful for the choice term itself.When (as in the proofs 5.4.7 and 5.4.6) looking for internal transitions of a

particular term, generated by occurrences of choice, it suffices to use choices inisolation since {-steps are passed through arbitrary contexts without condition.

5.2. Annotated Choice

This subsection introduces an annotated variant of choice, which providesabbreviations for all derivatives of C-translations. Its shape reveals the high-levelstructure, but it exhibits low-level operational behavior. The attached annotationsrecord essential information about the low-level derivation history.According to the definition in Section 4, the computations of individual branches

of a choice will pass through basically three different states: Read, Test, and one ofCommit or Abort as the final state. The state of each branch is completely deter-mined by the result of two inputs:

v the value (if any) which is currently carried by the channel, and

v the Boolean (if any) which has been assigned by acquiring the lock.

For the representation of that information for a J-indexed choice, we use

v a partial function v: J(V, mapping choice indices to values, and

v a possibly empty set B!J of choice indices such that B&dom(v)=<.

In the context of a particular J-indexed choice with branches Rj= yj (x) .Pj foreach j # J, the definedness of vj (the value v( j) held by branch j) means that a valuehas been read from the environment, but has not yet either led to a commitmentof branch j or been reinjected into the environment. The set B records thosebranches which have already accessed the Boolean lock. An empty set B means thatnone of the branches has yet been chosen, i.e., that the choice has not (yet) com-mitted. Then, with the abbreviation V :=dom(v), the state of each individualbranch can be retrieved from the annotations v and B. Branch k # J is in

v Read-state if k # J"(V_B)

v Test-state if k #V

v Commit!Abort-state if k # B.

The state of the whole choice is completely determined by the states of itsbranches.

26 NESTMANN AND PIERCE

When looking for simulations between terms and their translations (as in theproofs of Proposition 5.3.4 and Lemmas 5.3.1, 5.4.2, and 5.4.3), a transition whichdoes not involve choice rules is trivially simulated via the identical inference tree bythe translated term. A transition which emanates from a choice term inside a termcontext, and which may be simulated by the translated choice term in isolation, willalso be derivable by the same inference tree, except for the leaf being adapted to thesimulating transition. Note that possible side-conditions are not critical if thesimulation has been successful for the choice term itself.When (as in the proofs 5.4.7 and 5.4.6) looking for internal transitions of a

particular term, generated by occurrences of choice, it suffices to use choices inisolation since {-steps are passed through arbitrary contexts without condition.

5.2. Annotated Choice

This subsection introduces an annotated variant of choice, which providesabbreviations for all derivatives of C-translations. Its shape reveals the high-levelstructure, but it exhibits low-level operational behavior. The attached annotationsrecord essential information about the low-level derivation history.According to the definition in Section 4, the computations of individual branches

of a choice will pass through basically three different states: Read, Test, and one ofCommit or Abort as the final state. The state of each branch is completely deter-mined by the result of two inputs:

v the value (if any) which is currently carried by the channel, and

v the Boolean (if any) which has been assigned by acquiring the lock.

For the representation of that information for a J-indexed choice, we use

v a partial function v: J(V, mapping choice indices to values, and

v a possibly empty set B!J of choice indices such that B&dom(v)=<.

In the context of a particular J-indexed choice with branches Rj= yj (x) .Pj foreach j # J, the definedness of vj (the value v( j) held by branch j) means that a valuehas been read from the environment, but has not yet either led to a commitmentof branch j or been reinjected into the environment. The set B records thosebranches which have already accessed the Boolean lock. An empty set B means thatnone of the branches has yet been chosen, i.e., that the choice has not (yet) com-mitted. Then, with the abbreviation V :=dom(v), the state of each individualbranch can be retrieved from the annotations v and B. Branch k # J is in

v Read-state if k # J"(V_B)

v Test-state if k #V

v Commit!Abort-state if k # B.

The state of the whole choice is completely determined by the states of itsbranches.

26 NESTMANN AND PIERCE

TABLE 5
Early Transition Semantics for Annotated Choice

READ:    (Σ_{j∈J} R_j)^v_B  --y_k(z)-->  (Σ_{j∈J} R_j)^{v+(k↦z)}_B                   if k ∈ J\(V∪B)

COMMIT:  (Σ_{j∈J} R_j)^v_∅  --τ-->  (Σ_{j∈J} R_j)^{v−k}_{{k}} | P_k{v_k/x}            if k ∈ V

ABORT:   (Σ_{j∈J} R_j)^v_{B≠∅}  --τ-->  (Σ_{j∈J} R_j)^{v−k}_{B+k} | ȳ_k⟨v_k⟩          if k ∈ V

Definition 5.2.1 (Annotated choice). Let J be a set of indices. Let R_j = y_j(x).P_j be input prefixes for j ∈ J. Let v : J ⇀ 𝒱 and B ∩ V = ∅, where V := dom(v). Then

Σ_{j∈J} R_j    and    (Σ_{j∈J} R_j)^v_B

are referred to as bare and annotated choice, respectively.
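As a reading aid, the three rules of Table 5 can be sketched as operations on the annotation pair (v, B). The function names `read`, `commit`, `abort` and the symbolic return values below are our own illustration, not the paper's notation; v is a dict, B a set.

```python
# A hedged sketch of the three rules of Table 5 as operations on the
# annotations (v, B). The released continuation / message is returned
# symbolically as a tuple.

def read(k, z, J, v, B):
    # READ: branch k in Read-state optimistically consumes message z on y_k.
    assert k in J - (set(v) | B)
    return {**v, k: z}, B, None

def commit(k, v, B):
    # COMMIT: only if the choice is unresolved (B = ∅); branch k in
    # Test-state triggers its continuation P_k{v_k/x}.
    assert B == set() and k in v
    w = {j: x for j, x in v.items() if j != k}
    return w, {k}, ("run", k, v[k])          # P_k with v(k) substituted

def abort(k, v, B):
    # ABORT: only once resolved (B ≠ ∅); branch k in Test-state
    # resends its consumed message on y_k.
    assert B != set() and k in v
    w = {j: x for j, x in v.items() if j != k}
    return w, B | {k}, ("resend", k, v[k])   # message y_k⟨v_k⟩ released

J = {1, 2}
v, B, _ = read(1, "a", J, {}, set())         # branch 1 reads "a"
v, B, out = commit(1, v, B)                  # choice commits to branch 1
v, B, _ = read(2, "b", J, v, B)              # branch 2 reads too late...
v, B, back = abort(2, v, B)                  # ...and must resend "b"
print(out, back)   # → ('run', 1, 'a') ('resend', 2, 'b')
```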

Annotated choice is given the operational semantics in Table 5. The dynamics of annotated choice mimic precisely the behavior of the intended low-level process. READ allows a branch k in Read-state (k ∈ J\(V∪B)) to optimistically consume a message.3 If the choice is not yet resolved (B = ∅), COMMIT specifies that an arbitrary branch k in Test-state (k ∈ V) can immediately evolve into its Commit-state, i.e., trigger its continuation process P_k. After the choice is resolved (B ≠ ∅), ABORT allows any branch k in Test-state (k ∈ V) to evolve into its Abort-state and release its consumed message. Intuitively, by reading the lock, a branch immediately leaves the choice system and exits. Therefore, an annotated choice only contains branches in either Read- or Test-state.

We distinguish three cases for choice constructors that are important enough to give them names: initial for V = ∅ = B, partial for V ≠ ∅ and B = ∅, and committed for B ≠ ∅. Note that both initial and partial choice contain all branches, whereas committed choice never does; it will even become empty once all branches have reached their final state (B = J).

Committed choice exhibits a particularly interesting property: its branches in Test-state have already consumed a message, which they will return after recognizing, by internally testing the lock, that the choice is already committed; its branches in Read-state are still waiting for values, which are consumed and (since the choice is resolved and the lock carries f) immediately resent after an internal step. Processes with such receive-and-resend behavior are weakly bisimilar to 0 and were called

27 DECODING CHOICE ENCODINGS

3 Compared to its late counterpart, the early instantiation scheme may be more intuitive for showing that a particular value has entered the choice system. Furthermore, we do not have to deal with α-conversion of homonymous bound names in different branches. Nevertheless, the further development in this paper does not depend on this decision. Note also that early and late bisimulation coincide in our setting (cf. Section 2.3).



identity receptors by Honda and Tokoro [HT92]. In fact, a stronger property holds:

Lemma 5.2.2.  (Σ_{j∈J} R_j)^v_{B≠∅}  ≽  ∏_{j∈V} ȳ_j⟨v_j⟩.

Proof. Let M be an arbitrary m-ary composition of messages. We show that

R := { ( M | (Σ_{j∈J} R_j)^v_B ,  M | ∏_{j∈V} ȳ_j⟨v_j⟩ )  with B ≠ ∅ }

is an expansion; we refer to the first component of each pair as LHS and to the second as RHS.

Case internal steps. According to the operational semantics, τ-transitions are only possible for LHS. Since RHS does not exhibit τ-transitions, we show that for all LHS --τ--> LHS′, we have (LHS′, RHS) ∈ R. There are two subcases: either (1) via an ABORT-step of the choice, or (2) via the choice consuming an M-message due to READ.

Case ABORT. Since B ≠ ∅, for k ∈ V, via ABORT, followed by m applications of PAR, we have

LHS = M | (Σ_{j∈J} R_j)^v_B  --τ-->  M | ȳ_k⟨v_k⟩ | (Σ_{j∈J} R_j)^{v−k}_{B+k} =: LHS′.

Since k's value is erased from v, we observe that the R-correspondent of LHS′ (note that LHS′ is an admissible left-hand side for R), as defined by

M | ȳ_k⟨v_k⟩ | ∏_{j∈V, j≠k} ȳ_j⟨v_j⟩  =  M | ∏_{j∈V} ȳ_j⟨v_j⟩,

coincides with RHS.

Case READ. For M ≡ N | ȳ⟨z⟩ and k ∈ J\(V∪B) such that y = y_k, let v′ = v+(k↦z). Then, via rule READ for the choice part, OUT for the indicated y-message, and rule COM to derive the τ-transition from the former visible transitions, we have

LHS = N | ȳ⟨z⟩ | (Σ_{j∈J} R_j)^v_B  --τ-->  N | (Σ_{j∈J} R_j)^{v′}_B =: LHS′.

By definition, the R-correspondent of LHS′ is of the form

N | ∏_{j∈V′} ȳ_j⟨v_j⟩  =  N | ȳ_k⟨v_k⟩ | ∏_{j∈V} ȳ_j⟨v_j⟩  =  M | ∏_{j∈V} ȳ_j⟨v_j⟩,

which coincides with RHS.

Case output steps. According to the operational semantics, output transitions are possible for each message in M, i.e., for both LHS and RHS. The (strong) bisimulation behavior for that case is trivial. LHS does not exhibit further immediate outputs. In contrast, RHS allows outputs for each message in ∏_{j∈V} ȳ_j⟨v_j⟩; i.e., for k ∈ V, via OUT and PAR, we have

RHS = M | ∏_{j∈V} ȳ_j⟨v_j⟩  --ȳ_k v_k-->  M | ∏_{j∈V, j≠k} ȳ_j⟨v_j⟩ =: RHS′.

Due to ABORT, we derive an internal step and afterward release the message ȳ_k⟨v_k⟩:

LHS = M | (Σ_{j∈J} R_j)^v_B  --τ-->  M | (Σ_{j∈J} R_j)^{v−k}_{B+k} | ȳ_k⟨v_k⟩  --ȳ_k v_k-->  M | (Σ_{j∈J} R_j)^{v−k}_{B+k} =: LHS′,

which weakly simulates the transition of RHS. We observe that (LHS′, RHS′) ∈ R. ∎

Corollary 5.2.3 (Inertness).  (Σ_{j∈J} R_j)^∅_{B≠∅} ≽ 0.
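The operational content of inertness can be illustrated as follows: once B ≠ ∅, repeated ABORT steps release exactly the messages held by Test-state branches, leaving a choice with V = ∅ that offers nothing to an asynchronous observer. The `drain` helper below is our own sketch, not the paper's construction.

```python
# Lemma 5.2.2 / Corollary 5.2.3, operationally: a committed choice
# (B ≠ ∅) drains, via ABORT steps, into exactly the multiset of
# messages it had consumed, after which it is inert (weakly
# bisimilar to 0 for an asynchronous observer).

def drain(v, B):
    """Apply ABORT until no Test-state branch remains; collect outputs."""
    assert B, "only committed choices (B ≠ ∅) may abort"
    released = []
    while v:
        k, z = next(iter(v.items()))
        v = {j: x for j, x in v.items() if j != k}
        B = B | {k}
        released.append((k, z))      # message y_k⟨v_k⟩ back to environment
    return sorted(released), v, B

msgs, v, B = drain({2: "a", 5: "b"}, {7})
print(msgs)        # → [(2, 'a'), (5, 'b')]: all consumed messages reappear
print(v == {})     # → True: nothing left but an inert choice
```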

From an asynchronous observer's point of view, committed choice behaves exactly like the composition of the messages that are held by branches in Test-state, except that it involves additional internal computation. Note that a standard (synchronous) observer, which may detect inputs, would be able to tell the difference.

Since the purpose of annotated choice is to keep track of which low-level actions belong to the same high-level choice, we introduce a language of annotated processes. The set P(Σ) of such processes is generated by adding a clause for annotated-choice expressions to the grammar of P:

P ::= ... | (Σ_{j∈J} R_j)^v_B.

The operational semantics of P(Σ) is given by the rules in Tables 1 and 5.

5.3. Factorization

We now introduce the components of a factorization for the encoding C⟦·⟧: an annotation encoding A⟦·⟧ and a flattening encoding F⟦·⟧.


(* the essence of asynchrony *)

Messages consumed (see v) by committed choices (see B) are still fully available to the environment.

“asynchronous garbage”



An intermediate sublanguage A will be characterized as the sublanguage of P(Σ) that precisely contains the derivatives of A-translations. We write C, A, and F for the encoding functions considered as relations.

Annotation

The encoding A⟦·⟧ : S → P(Σ) acts homomorphically on every constructor except for choice, according to the scheme in Section 4. The latter case is given by

A⟦ Σ_{j∈J} R_j ⟧ := ( Σ_{j∈J} A⟦R_j⟧ )^∅_∅,

which translates choices into their annotated counterparts with all branches in initial state. The following lemma represents a first simple operational completeness statement for A⟦·⟧.

Lemma 5.3.1 (Completeness). A is a weak simulation up to expansion.

Proof. We show the proof for weak synchronous simulation, which implies the asynchronous case. The proof is by structural induction on S ∈ S and transition induction on S → S′. By the simplification discussed in Section 5.1, it suffices to regard the case

S = Σ_{j∈J} R_j,

where, according to the rules in Table 3, there is only one subcase.

Case C-INP. For k ∈ J, with Σ_{j∈J} R_j --y_k(z)--> P_k{z/x}, there is always a weakly simulating sequence by READ and COMMIT:

A⟦ Σ_{j∈J} R_j ⟧ = ( Σ_{j∈J} A⟦R_j⟧ )^∅_∅
  --y_k(z)-->  ( Σ_{j∈J} A⟦R_j⟧ )^{(k↦z)}_∅
  --τ-->  A⟦P_k⟧{z/x} | ( Σ_{j∈J} A⟦R_j⟧ )^∅_{{k}}
  ≽  A⟦P_k⟧{z/x} | 0
  ≡  A⟦ P_k{z/x} ⟧,

where the expansion relation holds due to Lemma 5.2.2. ∎
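The READ-then-COMMIT sequence of this proof can be sketched concretely: a source-level input y_k(z) is matched by the annotated choice consuming z into v, then committing branch k, leaving an inert committed remainder. The helper below and its symbolic results are illustrative assumptions of ours, not the paper's notation.

```python
# A sketch of the simulating sequence in the proof of Lemma 5.3.1.
# Starting from the initial annotated choice (v = ∅, B = ∅), READ
# records z for branch k, then COMMIT triggers P_k{z/x}; the remainder
# has V = ∅ and B ≠ ∅, hence is inert (Corollary 5.2.3).

def simulate_source_input(k, z, J):
    v, B = {}, set()                 # initial annotated choice
    v = {k: z}                       # READ: y_k(z), now k ∈ V
    vk = v.pop(k)                    # COMMIT: B = ∅, so k may commit
    B = {k}
    remainder = (J, v, B)            # committed choice with V = ∅ ...
    assert not v and B               # ... inert by Corollary 5.2.3
    return ("P", k, vk), remainder   # ≽ A⟦P_k⟧{z/x} | 0

result, inert = simulate_source_input(2, "z", {1, 2, 3})
print(result)      # → ('P', 2, 'z')
```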

Intermediate language

Terms in the target of A⟦·⟧, and also their derivatives, are of a particularly restricted form, which can be made precise by characterizing the possible shape of occurrences of choice terms. Let a subterm be called guarded when it occurs as a subterm of a guard. The basic syntactic properties of the intermediate language then are:


Page 148: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Intermediate Encoding

bisimulation behavior for that case is trivial. LHS does not exhibit furtherimmediate outputs. In contrast, RHS allows outputs for each message in >j # V yj vj ,i.e., for k #V, via OUT and PAR, we have

RHS=M } `j # V yj vj w!ykvk M } `k{ j # V

yj v j=: RHS$

Due to ABORT, we derive an internal step and afterward release the message yk vk

LHS=M } \ :j # J

Rj+v

Bw!{ M } \ :

j # J

Rj+v&k

B+k } yk vkw!ykvk M } \ :

j # J

Rj+v&k

B+k=: LHS$

which weakly simulates the transition of RHS. We observe that (LHS$, RHS$) #R. K

Corollary 5.2.3 (Inertness). (!j # J Rj)<B{<-0.

From an asynchronous observer's point of view, committed choice behavesexactly like the composition of the messages that are held by branches in Test-state,except that it involves additional internal computation. Note that a standard(synchronous) observer, which may detect inputs, would be able to tell the dif-ference,Since the purpose of annotated choice is to keep track of which low-level actions

belong to the same high-level choice, we introduce a language of annotated pro-cesses. The set P(7) of such processes is generated by adding a clause for(!)-expressions to the grammar of P:

P ::=| \:j # J Rj+v

B.

The operational semantics of P(7) is given by the rules in Tables 1 and 5.

5.3. Factorization

We now introduce the components of a factorization for the encoding C! ": anannotation encoding A! ", and a flattening encoding F! ".

29DECODING CHOICE ENCODINGS

An intermediate sublanguage A will be characterized as the sublanguage of P(7)

that precisely contains the derivatives of A-translations. We write C, A, and F forthe encoding functions considered as relations.

Annotation

The encoding A! ": S!P(7) acts homomorphically on every constructor exceptfor choice according to the scheme in Section 4. The latter case is given by

A !:j # J Rj"=def \ :j # J A!Rj"+<

<

which translates choices into their annotated counterparts with all branches ininitial state. The following lemma represents a first simple operational completenessstatement for A! ".

Lemma 5.3.1 (Completeness). A is a weak simulation up to expansion.

Proof. We show the proof for weak synchronous simulation, which implies theasynchronous case. The proof is by structural induction on S #S and transitioninduction on S!S$. By the simplification discussed in Section 5.1, it suffices toregard the case

S=:j # J

R j

where, according to the rules in Table 3, there is only one subcase.

Case C-INP. For k # J, with !j # J Rj w!ykz Pk[ z"x], there is always a weaklysimulating sequence by READ and COMMIT

A !:j # J Rj"=\:j # J A!Rj"+<

<ww!ykz \:j # J A!R j"+

(k[ z)

<

w!{ A!Pk"[ z"x] } \:j # J A!Rj"+<

[k]

- A!Pk"[ z"x] | 0

# A!Pk[ z"x]",

where the expansion relation holds due to Lemma 5.2.2. K

Intermediate language

Terms in the target of A! " and also their derivatives are of a particularrestricted form, which can be made precise by characterizing the possible shape ofoccurrences of choice terms. Let a subterm be called guarded when it occurs as asubterm of a guard. The basic syntactic properties of the intermediate languagethen are:

30 NESTMANN AND PIERCE

An intermediate sublanguage A will be characterized as the sublanguage of P(7)

that precisely contains the derivatives of A-translations. We write C, A, and F forthe encoding functions considered as relations.

Annotation

The encoding A! ": S!P(7) acts homomorphically on every constructor exceptfor choice according to the scheme in Section 4. The latter case is given by

A !:j # J Rj"=def \ :j # J A!Rj"+<

<

which translates choices into their annotated counterparts with all branches ininitial state. The following lemma represents a first simple operational completenessstatement for A! ".

Lemma 5.3.1 (Completeness). A is a weak simulation up to expansion.

Proof. We show the proof for weak synchronous simulation, which implies theasynchronous case. The proof is by structural induction on S #S and transitioninduction on S!S$. By the simplification discussed in Section 5.1, it suffices toregard the case

S=:j # J

R j

where, according to the rules in Table 3, there is only one subcase.

Case C-INP. For k # J, with !j # J Rj w!ykz Pk[ z"x], there is always a weaklysimulating sequence by READ and COMMIT

A !:j # J Rj"=\:j # J A!Rj"+<

<ww!ykz \:j # J A!R j"+

(k[ z)

<

w!{ A!Pk"[ z"x] } \:j # J A!Rj"+<

[k]

- A!Pk"[ z"x] | 0

# A!Pk[ z"x]",

where the expansion relation holds due to Lemma 5.2.2. K

Intermediate language

Terms in the target of A! " and also their derivatives are of a particularrestricted form, which can be made precise by characterizing the possible shape ofoccurrences of choice terms. Let a subterm be called guarded when it occurs as asubterm of a guard. The basic syntactic properties of the intermediate languagethen are:

30 NESTMANN AND PIERCE

66Wednesday, October 14, 2009

Page 149: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Intermediate Encoding

bisimulation behavior for that case is trivial. LHS does not exhibit furtherimmediate outputs. In contrast, RHS allows outputs for each message in >j # V yj vj ,i.e., for k #V, via OUT and PAR, we have

RHS=M } `j # V yj vj w!ykvk M } `k{ j # V

yj v j=: RHS$

Due to ABORT, we derive an internal step and afterward release the message yk vk

LHS=M } \ :j # J

Rj+v

Bw!{ M } \ :

j # J

Rj+v&k

B+k } yk vkw!ykvk M } \ :

j # J

Rj+v&k

B+k=: LHS$

which weakly simulates the transition of RHS. We observe that (LHS$, RHS$) #R. K

Corollary 5.2.3 (Inertness). (!j # J Rj)<B{<-0.

From an asynchronous observer's point of view, committed choice behaves exactly like the composition of the messages that are held by branches in Test-state, except that it involves additional internal computation. Note that a standard (synchronous) observer, which may detect inputs, would be able to tell the difference.

Since the purpose of annotated choice is to keep track of which low-level actions belong to the same high-level choice, we introduce a language of annotated processes. The set P⟨⟩ of such processes is generated by adding a clause for ⟨·⟩-expressions to the grammar of P:

P ::= … | ⟨∑_{j∈J} R_j⟩^v_B.

The operational semantics of P⟨⟩ is given by the rules in Tables 1 and 5.

5.3. Factorization

We now introduce the components of a factorization for the encoding C⟦·⟧: an annotation encoding A⟦·⟧ and a flattening encoding F⟦·⟧.

29DECODING CHOICE ENCODINGS

An intermediate sublanguage A will be characterized as the sublanguage of P⟨⟩ that precisely contains the derivatives of A-translations. We write C, A, and F for the encoding functions considered as relations.

Annotation

The encoding A⟦·⟧ : S → P⟨⟩ acts homomorphically on every constructor except for choice, according to the scheme in Section 4. The latter case is given by

A⟦∑_{j∈J} R_j⟧ =def ⟨∑_{j∈J} A⟦R_j⟧⟩^∅_∅,

which translates choices into their annotated counterparts with all branches in initial state. The following lemma represents a first simple operational completeness statement for A⟦·⟧.
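The shape of A⟦·⟧ can be sketched in code. The following is an illustrative model, not the paper's formalism: the AST classes (Nil, Par, Input, Choice, AnnChoice) and the function annotate are hypothetical names, and only enough constructors are modeled to show the homomorphic scheme with its single non-homomorphic case.

```python
from dataclasses import dataclass, field

# Hypothetical toy AST for processes; just enough cases to show
# the shape of the annotation encoding A[[.]].
@dataclass
class Nil:                  # inactive process 0
    pass

@dataclass
class Par:                  # parallel composition P | Q
    left: object
    right: object

@dataclass
class Input:                # input prefix y(x).P
    chan: str
    var: str
    cont: object

@dataclass
class Choice:               # bare choice: sum of input-guarded branches
    branches: list

@dataclass
class AnnChoice:            # annotated choice <sum R_j>^v_B
    branches: list
    v: dict = field(default_factory=dict)   # consumed values; V = dom(v)
    B: frozenset = frozenset()              # branches that have read the lock

def annotate(p):
    """A[[.]]: homomorphic on every constructor, except that bare choice
    becomes annotated choice with both annotations empty (all branches
    in initial state)."""
    if isinstance(p, Nil):
        return Nil()
    if isinstance(p, Par):
        return Par(annotate(p.left), annotate(p.right))
    if isinstance(p, Input):
        return Input(p.chan, p.var, annotate(p.cont))
    if isinstance(p, Choice):
        return AnnChoice([annotate(r) for r in p.branches])
    raise TypeError(p)

s = Choice([Input("y1", "x", Nil()), Input("y2", "x", Nil())])
a = annotate(s)
assert isinstance(a, AnnChoice) and a.v == {} and a.B == frozenset()
```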

Lemma 5.3.1 (Completeness). A is a weak simulation up to expansion.

Proof. We show the proof for weak synchronous simulation, which implies the asynchronous case. The proof is by structural induction on S ∈ S and transition induction on S → S′. By the simplification discussed in Section 5.1, it suffices to regard the case

S = ∑_{j∈J} R_j,

where, according to the rules in Table 3, there is only one subcase.

Case C-INP. For k ∈ J, with ∑_{j∈J} R_j —y_k z→ P_k[z/x], there is always a weakly simulating sequence by READ and COMMIT:

A⟦∑_{j∈J} R_j⟧ = ⟨∑_{j∈J} A⟦R_j⟧⟩^∅_∅
  —y_k z→ ⟨∑_{j∈J} A⟦R_j⟧⟩^{(k↦z)}_∅
  —τ→ A⟦P_k⟧[z/x] | ⟨∑_{j∈J} A⟦R_j⟧⟩^∅_{{k}}
  ≽ A⟦P_k⟧[z/x] | 0
  ≡ A⟦P_k[z/x]⟧,

where the expansion relation holds due to Lemma 5.2.2. ∎

Intermediate language

Terms in the target of A⟦·⟧, and also their derivatives, are of a particular restricted form, which can be made precise by characterizing the possible shapes of occurrences of choice terms. Call a subterm guarded when it occurs as a subterm of a guard. The basic syntactic properties of the intermediate language then are:

30 NESTMANN AND PIERCE


66Wednesday, October 14, 2009

Page 150: Encodings into Asynchronous Pi Calculus

Flattening

• All occurrences of choice are annotated.

• All guarded occurrences of choice are initial.

• Unguarded occurrences of choice may be initial, partial, or committed.

Later on in this paper (for the proofs of Lemmas 5.4.6 and 5.4.8), we need these properties in order to conclude that no guarded annotated choice is in an intermediate state.

Therefore, let A denote the sublanguage of terms in P⟨⟩ that satisfy the above syntactic requirements; we give an inductive grammar for generating appropriate terms in two levels. One level generates terms I with only initially annotated occurrences of choice terms:

G ::= y(x).I
I ::= 0 | ȳz | G | (νy)I | I|I | !G | ⟨∑_{j∈J} G_j⟩^∅_∅.

Another level on top generates terms A, which allow active (top-level) choices to be in an intermediate state, but guarded occurrences of choice only to be initial, as specified by I:

R ::= y(x).I
A ::= 0 | ȳz | R | (νy)A | A|A | !R | ⟨∑_{j∈J} R_j⟩^v_B,

where v : J ⇀ V and B ⊆ J for an arbitrary index set J, as usual. A term A ∈ A is called partially committed (or partial) if it contains at least one occurrence of partial choice, and fully committed (or full) otherwise.

An important property of A is that it is transition-closed.

Lemma 5.3.2. For all A ∈ A, if A —α→ A′, then A′ ∈ A.

Proof. By inspection of the rules in Table 5. ∎
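The two-level grammar amounts to a syntactic invariant that can be checked by a recursive predicate: every guarded occurrence of annotated choice must be initial, while unguarded (top-level) occurrences are unconstrained. A minimal sketch under a hypothetical AST representation (the classes and the checker in_A are illustrative, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class Input:                # input prefix y(x).P; its continuation is guarded
    chan: str
    var: str
    cont: object

@dataclass
class Par:                  # parallel composition
    left: object
    right: object

@dataclass
class AnnChoice:            # annotated choice <sum R_j>^v_B
    branches: list
    v: dict = field(default_factory=dict)
    B: frozenset = frozenset()

def in_A(p, guarded=False):
    """Membership check for the intermediate language A: every guarded
    occurrence of annotated choice must be initial (v and B empty);
    unguarded occurrences may be in any state."""
    if isinstance(p, Input):
        return in_A(p.cont, guarded=True)    # continuations sit under a guard
    if isinstance(p, Par):
        return in_A(p.left, guarded) and in_A(p.right, guarded)
    if isinstance(p, AnnChoice):
        ok = (not guarded) or (not p.v and not p.B)
        return ok and all(in_A(r, guarded=True) for r in p.branches)
    return True                               # 0, messages, etc. carry no choice

partial = AnnChoice([Input("y", "x", None)], v={0: "z"})
assert in_A(partial)                          # unguarded partial choice: allowed
assert not in_A(Input("w", "x", partial))     # guarded partial choice: not in A
```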

Flattening

The encoding F⟦·⟧ : P⟨⟩ → T acts homomorphically on every constructor but annotated choice, according to the scheme in Section 4. For annotated choice, the translation

F⟦⟨∑_{j∈J} R_j⟩^v_B⟧ =def (νl)( l̄⟨b⟩ | ∏_{j∈J∖(V∪B)} Read_l(F⟦R_j⟧) | ∏_{j∈V} Test_l(F⟦R_j⟧)[v(j)/x] ),

where b is t if B ≠ ∅, and f otherwise, expands the abbreviations into the intended target term by following the semantic rules in Table 5. Branches in Test-state are


those that carry values (therefore j ∈ V); the substitution [v(j)/x] replaces the input variable in the continuation process P_j with the corresponding value. Branches in Read-state must neither currently carry values (j ∉ V) nor have accessed the lock after reading values (j ∉ B).
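The bookkeeping behind this definition — which wrapper each branch receives and which Boolean the lock initially carries — can be sketched as follows; flatten_shape is a hypothetical helper operating on plain index sets, not the paper's F⟦·⟧ itself:

```python
# Sketch: the branch-wrapper structure produced by flattening an
# annotated choice <sum R_j>^v_B (illustrative helper only).

def flatten_shape(J, v, B):
    """Return the lock's initial Boolean and the wrapper per branch:
    Read_l for j in J \ (V u B); Test_l, carrying the value v(j),
    for j in V; branches in B have left the choice and get no wrapper."""
    V = set(v)
    b = "t" if B else "f"        # lock carries t iff the choice is resolved
    wrappers = {}
    for j in J:
        if j in V:
            wrappers[j] = ("Test", v[j])
        elif j not in B:
            wrappers[j] = ("Read",)
    return b, wrappers

# Branch 2 has consumed value z, branch 3 has already read the lock,
# branch 1 is still waiting: lock is resolved (t), two wrappers remain.
b, w = flatten_shape({1, 2, 3}, v={2: "z"}, B={3})
assert b == "t"
assert w == {1: ("Read",), 2: ("Test", "z")}
```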

Lemma 5.3.3 (Factorization).

1. F⟦·⟧ ∘ A⟦·⟧ = C⟦·⟧.

2. F⟦·⟧ is surjective, and for all A ∈ A, F⟦Aσ⟧ = F⟦A⟧σ.

Proof. 1. Straightforward induction on the structure of source terms.

2. Straightforward. Neither A⟦·⟧ nor F⟦·⟧ erases names; free (bound) occurrences of names in terms correspond to free (bound) occurrences in their translations. ∎

The important property of the factorization is that the semantics of annotated choice (cf. Table 5) precisely mirrors the behavior of the original translations and their derivatives.

Proposition 5.3.4 (Semantics correctness). F is a strong bisimulation.

Proof. By simple induction on the structure of A ∈ A and transition induction on A → A′ and F⟦A⟧ → T′, where it suffices to check the induction case for annotated choice constructors (see Appendix A.1). ∎

Corollary 5.3.5 (Mirroring). For all S ∈ S and C⟦S⟧ ⟹ T ∈ T, there is A ∈ A such that A⟦S⟧ ⟹ A and F⟦A⟧ = T.

With Proposition 5.3.4, we prove the correctness of C⟦·⟧ by proving the correctness of A⟦·⟧. This is a considerably simpler task, since the A-annotations in the target of A⟦·⟧ provide more structure, in particular concerning partially committed derivatives, which is heavily exploited in Section 5.4.

5.4. Decoding Derivatives of A-Translations

We want to construct a coupled simulation (cf. Definition 2.4.1) between source terms and abbreviated target terms by mapping the latter back to the former. Since derivatives of target terms may correspond to source-level choices in a partially committed intermediate state, there are two natural strategies for decoding. The decoding functions

U_r⟦·⟧, U_c⟦·⟧ : A → S

map partially committed annotated terms in A back to source terms in S which are either

• the least possibly committed (resetting decoding U_r), or

• the most possibly committed (committing decoding U_c).


Page 153: Encodings into Asynchronous Pi Calculus

Decoding (I)

Decoding (Section 5.4). Annotated source terms deal with partial commitments explicitly. We define two decoding functions U of annotated terms back into source terms, where U_r resets and U_c completes partial commitments.

Expansion (Section 5.5). Since the target language T is a language extended with Booleans, we show that their use in our setting is rather well behaved, according to the expanding encoding B.

The factorization, the decodings, and the expansion enjoy several nice properties:

1. F is a strong bisimulation between abbreviations and Boolean target terms.

2. (U_r, U_c) is a coupled simulation between abbreviations and source terms.

3. B (more precisely, a variant of it) is an expansion for Booleans in target terms.

These can be combined to provide a coupled simulation on S × P. The observation that every source term S and its translation C⟦S⟧ are related by this relation concludes the proof of coupled-simulation correctness of the C-encoding (Section 5.6).

Simplifications due to homomorphic encodings and decodings. Many of the proofs in this section have in common that they exhibit particular transitions of terms by constructing appropriate inference trees, either

• from the inductive structure of (annotated) terms, or

• by simply replacing some leaves in the inference trees of their encodings or decodings.

Since the A, F, U_r, U_c, and B functions are each defined homomorphically on every constructor of P according to the scheme in Section 4, there is a strong syntactic correspondence between terms and their respective translations and, as a consequence, also a strong correspondence between transition inference trees. More precisely, since in transitions involving choice

• there is at most one application of a choice rule, and

• an application of the choice rule always represents a leaf in the inference tree,

it suffices for all proofs to regard choice terms in isolation.


Page 155: Encodings into Asynchronous Pi Calculus

Annotated Choice States (as composed by the states of its branches)

TABLE 5
Early Transition Semantics for Annotated Choice

READ:    ⟨∑_{j∈J} R_j⟩^v_B  —y_k z→  ⟨∑_{j∈J} R_j⟩^{v+(k↦z)}_B    if k ∈ J∖(V∪B)

COMMIT:  ⟨∑_{j∈J} R_j⟩^v_∅  —τ→  ⟨∑_{j∈J} R_j⟩^{v−k}_{{k}} | P_k[v_k/x]    if k ∈ V

ABORT:   ⟨∑_{j∈J} R_j⟩^v_{B≠∅}  —τ→  ⟨∑_{j∈J} R_j⟩^{v−k}_{B∪{k}} | ȳ_k v_k    if k ∈ V

Definition 5.2.1 (Annotated choice). Let J be a set of indices, and let R_j = y_j(x).P_j be input prefixes for j ∈ J. Let v be a partial function on J, and let B ∩ V = ∅, where V := dom(v). Then

∑_{j∈J} R_j    and    ⟨∑_{j∈J} R_j⟩^v_B

are referred to as bare and annotated choice, respectively.

Annotated choice is given the operational semantics in Table 5. The dynamics of annotated choice mimic precisely the behavior of the intended low-level process. READ allows a branch k in Read-state (k ∈ J∖(V∪B)) to optimistically consume a message.³ If the choice is not yet resolved (B = ∅), COMMIT specifies that an arbitrary branch k in Test-state (k ∈ V) can immediately evolve into its Commit-state, i.e., trigger its continuation process P_k. After the choice is resolved (B ≠ ∅), ABORT allows any branch k in Test-state (k ∈ V) to evolve into its Abort-state and release its consumed message. Intuitively, by reading the lock, a branch immediately leaves the choice system and exits. Therefore, annotated choice only contains branches in either Read- or Test-state.

We distinguish three cases for choice constructors that are important enough to give them names: initial for V = ∅ = B, partial for V ≠ ∅ and B = ∅, and committed for B ≠ ∅. Note that both initial and partial choice contain all branches, whereas committed choice never does; it will even become empty once all branches have reached their final state (B = J).

Committed choice exhibits a particularly interesting property: its branches in Test-state have already consumed a message, which they will return after recognizing, by internally testing the lock, that the choice is already committed; its branches in Read-state are still waiting for values to be consumed, which are, since the choice is resolved and the lock carries f, immediately resent after an internal step. Processes with such receive-and-resend behavior are weakly bisimilar to 0 and were called

³ Compared to its late counterpart, the early instantiation scheme may be more intuitive for showing that a particular value has entered the choice system. Furthermore, we do not have to deal with α-conversion of homonym bound names in different branches. Nevertheless, the further development in this paper does not depend on this decision. Note also that early and late bisimulation coincide in our setting (cf. Section 2.3).
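The three rules of Table 5 describe a small state machine per choice, which can be simulated directly. The sketch below uses a hypothetical representation (a dict v of consumed values and a set B of branches that have read the lock) and elides continuation processes:

```python
# A minimal simulation of Table 5 on a toy annotated choice.
# State: v maps branch index -> consumed value (V = dom(v));
#        B is the set of branches that have read the lock.

class AnnChoice:
    def __init__(self, branches):            # branches: index -> channel name
        self.branches, self.v, self.B = dict(branches), {}, set()

    def read(self, k, z):                    # READ: k in J \ (V u B) consumes z
        assert k in self.branches and k not in self.v and k not in self.B
        self.v[k] = z

    def commit(self, k):                     # COMMIT: only if unresolved (B empty)
        assert k in self.v and not self.B
        z = self.v.pop(k)
        self.B.add(k)
        return ("continue", k, z)            # trigger P_k[z/x]

    def abort(self, k):                      # ABORT: only if resolved (B nonempty)
        assert k in self.v and self.B
        z = self.v.pop(k)
        self.B.add(k)
        return ("resend", self.branches[k], z)   # release message y_k z

c = AnnChoice({1: "y1", 2: "y2"})
c.read(1, "a"); c.read(2, "b")               # both branches optimistically consume
assert c.commit(1) == ("continue", 1, "a")   # branch 1 wins the choice
assert c.abort(2) == ("resend", "y2", "b")   # the loser releases its message
assert c.v == {} and c.B == {1, 2}           # empty once all branches are final
```

Running the example replays READ, READ, COMMIT, ABORT on a two-branch choice; afterward v is empty and B = J, matching the observation above that committed choice eventually becomes empty.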


Page 156: Encodings into Asynchronous Pi Calculus

Decoding (II)

The functions U_r⟦·⟧, U_c⟦·⟧ : A → S act homomorphically on every constructor but annotated choice, according to the scheme in Section 4. For the latter, we distinguish between choices that are initial, committed, or partial.

For nonpartial choices, the two decoding functions have the same definition (read U⟦·⟧ as either U_r⟦·⟧ or U_c⟦·⟧): initial choice (V = ∅ = B) is mapped to its bare counterpart,

initial:     U⟦⟨∑_{j∈J} R_j⟩^∅_∅⟧ =def ∑_{j∈J} U⟦R_j⟧,

while committed choice (B ≠ ∅) is mapped to the parallel composition of those messages which are currently held by its branches,

committed:   U⟦⟨∑_{j∈J} R_j⟩^v_{B≠∅}⟧ =def ∏_{j∈V} ȳ_j v_j.

For partial choice (B = ∅ and V ≠ ∅), the two decoding functions act in different ways, according to the intuition described above.

Resetting. The aim is to decode an annotated term to its least possibly committed source correspondent. Intuitively, this means that we have to reset all of its partial commitments by mapping it to the original choice in parallel with the already consumed messages:

partial:     U_r⟦⟨∑_{j∈J} R_j⟩^{v≠∅}_∅⟧ =def ∏_{j∈V} ȳ_j v_j | ∑_{j∈J} U_r⟦R_j⟧.

Committing. The aim is to decode an annotated term to a committed source correspondent. Intuitively, this means that we have to complete one of the (possibly several) activated branches that the annotated choice has engaged in. Let k := take(V) select an arbitrary element of V. Then

partial:     U_c⟦⟨∑_{j∈J} R_j⟩^{v≠∅}_∅⟧ =def ∏_{j∈V∖{k}} ȳ_j v_j | U_c⟦P_k⟧[v_k/x]

maps to the source-level commitment to the selected branch.
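The difference between the two decodings shows up only on partial choices. The following sketch, on a hypothetical first-order representation of annotated choice (tuples standing in for messages, bare choice, and substituted continuations), mirrors the case analysis above; take(V) is modeled by picking the minimal index:

```python
# Sketch of the two decodings on a toy annotated choice:
# branches maps j -> (channel y_j, continuation P_j),
# v maps j -> consumed value, B is the set of lock-reading branches.

def decode_reset(branches, v, B):
    """U_r: map back to the least possibly committed source term."""
    msgs = [("msg", branches[j][0], v[j]) for j in sorted(v)]
    if not v and not B:                       # initial: bare choice
        return [("choice", branches)]
    if B:                                     # committed: only the held messages
        return msgs
    # partial: re-release consumed messages next to the original choice
    return msgs + [("choice", branches)]

def decode_commit(branches, v, B):
    """U_c: agrees with U_r on nonpartial choices, but completes a
    partial choice by committing to some value-holding branch k."""
    if not v or B:
        return decode_reset(branches, v, B)
    k = min(v)                                # take(V): pick some k in V
    msgs = [("msg", branches[j][0], v[j]) for j in sorted(v) if j != k]
    return msgs + [("subst", branches[k][1], v[k])]   # P_k[v_k/x]

br = {1: ("y1", "P1"), 2: ("y2", "P2")}
assert decode_reset(br, {}, set()) == decode_commit(br, {}, set())
assert decode_reset(br, {1: "a"}, set()) == [("msg", "y1", "a"), ("choice", br)]
assert decode_commit(br, {1: "a", 2: "b"}, set()) == [("msg", "y2", "b"), ("subst", "P1", "a")]
```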

Lemma 5.4.1 (Decoding).

1. Let U ∈ {U_r, U_c}. Then U⟦·⟧ ∘ A⟦·⟧ = id.

2. Both U_r⟦·⟧ and U_c⟦·⟧ are surjective.

3. For A ∈ A and U ∈ {U_r, U_c}, U⟦Aσ⟧ = U⟦A⟧σ.

Proof. Immediate by definition (1, 2); straightforward by induction (3). ∎

In the remainder of this subsection, we prove that (U_r, U_c) is a coupled simulation on A × S (Proposition 5.4.9). This requires, in particular, that U_r and U_c⁻¹ are weak simulations. We will see that these simulations are eventually progressing. First, we show that U_r⟦A⟧ simulates A (i.e., A ⪯ U_r⟦A⟧) for all A ∈ A.

33DECODING CHOICE ENCODINGS


Page 157: Encodings into Asynchronous Pi Calculus (basics.sjtu.edu.cn/summer_school/basics09/Slides/...)

Decoding (II)

The functions $\mathcal{U}_{\downarrow}\llbracket\cdot\rrbracket, \mathcal{U}_{\uparrow}\llbracket\cdot\rrbracket : \mathcal{A} \to \mathcal{S}$ act homomorphically on every constructor but annotated choice, according to the scheme in Section 4. For the latter, we distinguish between choices that are initial, committed, or partial.

For nonpartial choices, the two decoding functions have the same definition (read $\mathcal{U}\llbracket\cdot\rrbracket$ as either $\mathcal{U}_{\downarrow}\llbracket\cdot\rrbracket$ or $\mathcal{U}_{\uparrow}\llbracket\cdot\rrbracket$): initial choice ($V = \emptyset = B$) is mapped to its bare counterpart,

initial: $\mathcal{U}\llbracket\, (\sum_{j \in J} R_j)^{\emptyset}_{\emptyset} \,\rrbracket \stackrel{\mathrm{def}}{=} \sum_{j \in J} \mathcal{U}\llbracket R_j \rrbracket$

committed: $\mathcal{U}\llbracket\, (\sum_{j \in J} R_j)^{v}_{B} \,\rrbracket \stackrel{\mathrm{def}}{=} \prod_{j \in V} \bar{y}_j\langle v_j\rangle \qquad (B \neq \emptyset)$

while committed choice ($B \neq \emptyset$) is mapped to the parallel composition of those messages which are currently held by its branches. For partial choice ($B = \emptyset$ and $V \neq \emptyset$), the two decoding functions act in different ways, according to the intuition described above.

Resetting. The aim is to decode an annotated term to its least possible committed source correspondent. Intuitively, this means that we have to reset all of its partial commitments by mapping it to the original choice in parallel with the already consumed messages.

partial: $\mathcal{U}_{\downarrow}\llbracket\, (\sum_{j \in J} R_j)^{v}_{\emptyset} \,\rrbracket \stackrel{\mathrm{def}}{=} \prod_{j \in V} \bar{y}_j\langle v_j\rangle \mid \sum_{j \in J} \mathcal{U}_{\downarrow}\llbracket R_j \rrbracket \qquad (V \neq \emptyset)$

Committing. The aim is to decode an annotated term to a committed source correspondent. Intuitively, this means that we have to complete one of the (possibly several) activated branches that the annotated choice has engaged in. Let $k := \mathrm{take}(V)$ select an arbitrary element of $V$. Then

partial: $\mathcal{U}_{\uparrow}\llbracket\, (\sum_{j \in J} R_j)^{v}_{\emptyset} \,\rrbracket \stackrel{\mathrm{def}}{=} \prod_{j \in V \setminus \{k\}} \bar{y}_j\langle v_j\rangle \mid \mathcal{U}_{\uparrow}\llbracket P_k \rrbracket[v_k/x] \qquad (V \neq \emptyset)$

maps to the source-level commitment to the selected branch.

Lemma 5.4.1 (Decoding).

1. Let $\mathcal{U} \in \{\mathcal{U}_{\downarrow}, \mathcal{U}_{\uparrow}\}$. Then $\mathcal{U}\llbracket\cdot\rrbracket \circ \mathcal{A}\llbracket\cdot\rrbracket = \mathrm{id}$.

2. Both $\mathcal{U}_{\downarrow}\llbracket\cdot\rrbracket$ and $\mathcal{U}_{\uparrow}\llbracket\cdot\rrbracket$ are surjective.

3. For $A \in \mathcal{A}$ and $\mathcal{U} \in \{\mathcal{U}_{\downarrow}, \mathcal{U}_{\uparrow}\}$, $\mathcal{U}\llbracket A\sigma \rrbracket = \mathcal{U}\llbracket A \rrbracket\sigma$.

Proof. Immediate by definition (1, 2); straightforward by induction (3). ∎

In the remainder of this subsection, we prove that $(\mathcal{U}_{\downarrow}, \mathcal{U}_{\uparrow})$ is a coupled simulation between $\mathcal{A}$ and $\mathcal{S}$ (Proposition 5.4.9). This requires, in particular, that $\mathcal{U}_{\downarrow}$ and $\mathcal{U}_{\uparrow}^{-1}$ are weak simulations. We will see that these simulations are eventually progressing. First, we show that $\mathcal{U}_{\downarrow}\llbracket A \rrbracket$ simulates $A$ for all $A \in \mathcal{A}$.

33 DECODING CHOICE ENCODINGS
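To make the three-way case split concrete, here is a small executable sketch of the two decodings on a toy tuple representation of annotated choices. The representation, the branch list, and the deterministic `take` are illustrative assumptions, not the paper's definitions.

```python
# Toy model: an annotated choice over branches y_j(x).P_j is a triple
# (branches, v, B), where branches[j] = (y_j, P_j), v maps a branch
# index to the value it has consumed, and B is the set of committed
# branches.  Source terms are plain tuples (illustrative encoding).

def par(ps):
    """Parallel composition, flattening the one-component case."""
    return ps[0] if len(ps) == 1 else ("par", tuple(ps))

def take(V):
    """Deterministically select an activated branch (here: min)."""
    return min(V)

def decode_reset(branches, v, B):
    """The resetting decoding: undo partial commitments, i.e. the
    original choice in parallel with the already consumed messages."""
    msgs = [("msg", branches[j][0], v[j]) for j in sorted(v)]
    if B:                       # committed: only the held messages
        return par(msgs)
    if not v:                   # initial: the bare choice
        return ("sum", tuple(branches))
    return par(msgs + [("sum", tuple(branches))])        # partial

def decode_commit(branches, v, B):
    """The committing decoding: complete the branch picked by take."""
    if B or not v:              # nonpartial: both decodings agree
        return decode_reset(branches, v, B)
    k = take(v)
    msgs = [("msg", branches[j][0], v[j]) for j in sorted(v) if j != k]
    return par(msgs + [("subst", branches[k][1], v[k])])  # P_k[v_k/x]
```

On a two-branch choice the decodings coincide on initial and committed annotations and differ exactly on partial ones, mirroring the three cases above.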


71Wednesday, October 14, 2009


Page 159: Encodings into Asynchronous Pi Calculus (basics.sjtu.edu.cn/summer_school/basics09/Slides/...)

Results (I)

Lemma 5.4.2. $\mathcal{U}_{\downarrow}$ is a weak simulation.

Proof. The proof is by structural induction on $A \in \mathcal{A}$ and transition induction on $A \longrightarrow A'$. By the simplification discussed in Section 5.1, it suffices to regard the case

$A = (\sum_{j \in J} R_j)^{v}_{B}$,

where, by the operational rules in Table 5, there are three subcases.

Case (committed), $B \neq \emptyset$: Then $\mathcal{U}_{\downarrow}\llbracket A \rrbracket = \prod_{j \in V} \bar{y}_j\langle v_j\rangle$. Since we know by Lemma 5.2.2 that $(\sum_{j \in J} R_j)^{v}_{B} \approx \prod_{j \in V} \bar{y}_j\langle v_j\rangle$, we have that, in this case, $\mathcal{U}_{\downarrow}\llbracket A \rrbracket$ is even bisimilar to $A$.

Case (initial), $V = \emptyset = B$: Then $\mathcal{U}_{\downarrow}\llbracket A \rrbracket = \sum_{j \in J} \mathcal{U}_{\downarrow}\llbracket R_j \rrbracket$, and only one type of transition is possible. Note that we do not have to directly simulate input transitions for simulation in the asynchronous π-calculus, but we do have to care about simulation in all contexts, including matching messages. Therefore:

Case READ/OUT/COM: For $k \in J \setminus (V \cup B)$, we have

$\bar{y}_k\langle z\rangle \mid (\sum_{j \in J} R_j)^{v}_{B} \xrightarrow{\tau} (\sum_{j \in J} R_j)^{v + (k \mapsto z)}_{B}$

and

$\mathcal{U}_{\downarrow}\llbracket\, \bar{y}_k\langle z\rangle \mid (\sum_{j \in J} R_j)^{v}_{B} \,\rrbracket = \bar{y}_k\langle z\rangle \mid \sum_{j \in J} \mathcal{U}_{\downarrow}\llbracket R_j \rrbracket \mid \prod_{j \in V} \bar{y}_j\langle v_j\rangle \equiv \mathcal{U}_{\downarrow}\llbracket\, (\sum_{j \in J} R_j)^{v + (k \mapsto z)}_{B} \,\rrbracket$

immediately concludes the proof by an empty weakly simulating sequence.

Case (partial), $B = \emptyset \neq V$: Let $k = \mathrm{take}(V)$. There are two subcases:

Case READ/OUT/COM: completely analogous to the previous case.

Case COMMIT: For $k \in V$, we have

$(\sum_{j \in J} R_j)^{v}_{\emptyset} \xrightarrow{\tau} P_k\sigma_k \mid (\sum_{j \in J} R_j)^{v - k}_{\{k\}}$

and, via C-INP/OUT/PAR/COM, there is the simulating step

$\mathcal{U}_{\downarrow}\llbracket\, (\sum_{j \in J} R_j)^{v}_{\emptyset} \,\rrbracket = \sum_{j \in J} \mathcal{U}_{\downarrow}\llbracket R_j \rrbracket \mid \prod_{j \in V} \bar{y}_j\langle v_j\rangle \xrightarrow{\tau} \mathcal{U}_{\downarrow}\llbracket P_k \rrbracket[v_k/x] \mid \prod_{j \in V \setminus \{k\}} \bar{y}_j\langle v_j\rangle = \mathcal{U}_{\downarrow}\llbracket\, P_k\sigma_k \mid (\sum_{j \in J} R_j)^{v - k}_{\{k\}} \,\rrbracket$,

which concludes the proof. ∎

34 NESTMANN AND PIERCE
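The COMMIT case can be replayed mechanically: committing on the annotated side and then applying the resetting decoding yields the same source term as decoding first and then letting the held message for branch $k$ react with the choice. A self-contained sketch on a flattened toy representation (the representation and all names are illustrative assumptions):

```python
# A source term is flattened to (messages, continuation); the resetting
# decoding of an annotated choice (branches, v, B) re-emits consumed
# messages and keeps the bare choice unless the choice is committed.

def decode_reset(branches, v, B):
    msgs = frozenset((branches[j][0], v[j]) for j in v)
    return (msgs, None if B else ("sum", branches))

def commit(branches, v, k):
    """Annotated COMMIT on branch k: consume v[k], commit to k; the
    continuation P_k[v_k/x] runs in parallel."""
    cont = ("subst", branches[k][1], v[k])
    v_rest = {j: z for j, z in v.items() if j != k}
    return cont, (branches, v_rest, {k})

def decode_after_commit(cont, annotated):
    """Decode the parallel composition reached by COMMIT."""
    msgs, residue = decode_reset(*annotated)
    assert residue is None      # a committed choice leaves only messages
    return (msgs, cont)

def source_step(term, branches, v, k):
    """Source-level tau-step: the message on y_k meets branch k."""
    msgs, choice = term
    y_k, P_k = branches[k]
    assert (y_k, v[k]) in msgs and choice == ("sum", branches)
    return (msgs - {(y_k, v[k])}, ("subst", P_k, v[k]))
```

Both paths around the simulation square end in the same flattened term, which is the content of the COMMIT case.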

Next, we show that $\mathcal{U}_{\uparrow}\llbracket A \rrbracket$ is simulated by $A$ for all $A \in \mathcal{A}$.

Lemma 5.4.3. $\mathcal{U}_{\uparrow}^{-1}$ is a weak simulation.

Proof. The proof is by structural induction on $A \in \mathcal{A}$ and transition induction on $\mathcal{U}_{\uparrow}\llbracket A \rrbracket \longrightarrow S'$. By the simplification discussed in Section 5.1, it suffices to regard the case

$A = (\sum_{j \in J} R_j)^{v}_{B}$,

where, by definition of $\mathcal{U}_{\uparrow}$, there are three subcases.

Case (initial), $V = \emptyset = B$: Then $\mathcal{U}_{\uparrow}\llbracket A \rrbracket = \sum_{j \in J} y_j(x).\mathcal{U}_{\uparrow}\llbracket P_j \rrbracket$, where, according to the rules in Table 3, there is only one subcase for generating transitions: C-INP. For $k \in J$, we have

$\sum_{j \in J} y_j(x).\mathcal{U}_{\uparrow}\llbracket P_j \rrbracket \xrightarrow{y_k z} \mathcal{U}_{\uparrow}\llbracket P_k \rrbracket[z/x]$

and there is always a weakly simulating sequence by READ and COMMIT:

$A = (\sum_{j \in J} R_j)^{\emptyset}_{\emptyset} \xrightarrow{y_k z} (\sum_{j \in J} R_j)^{(k \mapsto z)}_{\emptyset} \xrightarrow{\tau} P_k\sigma_k \mid (\sum_{j \in J} R_j)^{\emptyset}_{\{k\}} =: A'$,

where $\mathcal{U}_{\uparrow}\llbracket A' \rrbracket = \mathcal{U}_{\uparrow}\llbracket P_k \rrbracket[z/x]$ holds.

Case (committed), $B \neq \emptyset$: Then $\mathcal{U}_{\uparrow}\llbracket A \rrbracket = \prod_{j \in V} \bar{y}_j\langle v_j\rangle$.

Case OUT: For $k \in V$, the transitions

$\mathcal{U}_{\uparrow}\llbracket A \rrbracket \xrightarrow{\bar{y}_k v_k} \prod_{j \in V \setminus \{k\}} \bar{y}_j\langle v_j\rangle =: S_k$

can be simulated by

$A \xrightarrow{\tau} (\sum_{j \in J} R_j)^{v - k}_{B + k} \mid \bar{y}_k\langle v_k\rangle \xrightarrow{\bar{y}_k v_k} (\sum_{j \in J} R_j)^{v - k}_{B + k} =: A_k$

such that $S_k = \mathcal{U}_{\uparrow}\llbracket A_k \rrbracket$.

Case (partial), $B = \emptyset \neq V$: Let $k = \mathrm{take}(V)$. Then

$\mathcal{U}_{\uparrow}\llbracket A \rrbracket = \prod_{j \in V \setminus \{k\}} \bar{y}_j\langle v_j\rangle \mid \mathcal{U}_{\uparrow}\llbracket P_k \rrbracket\sigma_k =: S'.$

By $A \xrightarrow{\tau} (\sum_{j \in J} R_j)^{v - k}_{\{k\}} \mid P_k\sigma_k =: A'$ with $S' = \mathcal{U}_{\uparrow}\llbracket A' \rrbracket$, we can always take an internal step in order to fully commit $A$. Note that $A'$ is fully committed since, as a guarded subterm, $P_k$ is fully committed by definition of $\mathcal{A}$. The observation that $S' = \mathcal{U}_{\uparrow}\llbracket A' \rrbracket$ already concludes the proof, since we may reduce it to the case of full terms. ∎

35 DECODING CHOICE ENCODINGS
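The initial case of this proof can likewise be animated: one strong input step of the decoded sum is matched on the annotated side by READ followed by COMMIT, and the two endpoints agree. A self-contained sketch on a toy representation (names and encoding are illustrative assumptions):

```python
# An annotated choice is (branches, v, B) with branches[j] = (y_j, P_j),
# v the consumed messages, and B the committed branches (toy encoding).

def read(state, k, z):
    """READ: the annotated choice consumes message z on y_k."""
    branches, v, B = state
    assert not B and k not in v
    v2 = dict(v); v2[k] = z
    return (branches, v2, B)

def commit(state, k):
    """COMMIT: branch k fires; P_k[v_k/x] runs in parallel with the
    now-committed choice."""
    branches, v, B = state
    assert not B and k in v
    cont = ("subst", branches[k][1], v[k])
    v_rest = {j: z for j, z in v.items() if j != k}
    return cont, (branches, v_rest, {k})

def target_input(branches, k, z):
    """Input step of the committing decoding of an initial choice:
    branch k of the sum receives z."""
    return ("subst", branches[k][1], z)
```

Starting from the initial annotation, READ then COMMIT yields exactly the target derivative, plus a committed residue that holds no further messages.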

72Wednesday, October 14, 2009

Page 160: Encodings into Asynchronous Pi Calculus (basics.sjtu.edu.cn/summer_school/basics09/Slides/...)

Results (I)


In addition, the weak simulations $\mathcal{U}_{\downarrow}$ and $\mathcal{U}_{\uparrow}^{-1}$ satisfy further useful properties (cf. Section 5.8).

Lemma 5.4.4. 1. $\mathcal{U}_{\downarrow}$ is strict; $\mathcal{U}_{\uparrow}^{-1}$ is progressing.

2. Both $\mathcal{U}_{\downarrow}$ and $\mathcal{U}_{\uparrow}^{-1}$ are eventually progressing.

Proof. Both the strictness of $\mathcal{U}_{\downarrow}$ and the progressiveness of $\mathcal{U}_{\uparrow}^{-1}$ follow from previous proofs. For $\mathcal{U}_{\downarrow}$, Proof 5.4.2 shows that some $\tau$-steps of $A$ may be simulated trivially by $\mathcal{U}_{\downarrow}\llbracket A \rrbracket$:

$A \xrightarrow{\tau} A'$ implies $\mathcal{U}_{\downarrow}\llbracket A \rrbracket \xrightarrow{\hat{\tau}} \mathcal{U}_{\downarrow}\llbracket A' \rrbracket$.

For $\mathcal{U}_{\uparrow}^{-1}$, Proof 5.4.3 shows that no $\tau$-step of $\mathcal{U}_{\uparrow}\llbracket A \rrbracket$ may be suppressed by $A$:

$\mathcal{U}_{\uparrow}\llbracket A \rrbracket \xrightarrow{\tau} \mathcal{U}_{\uparrow}\llbracket A' \rrbracket$ implies $A \stackrel{\tau}{\Longrightarrow} A'$.

Immediately, we get that $\mathcal{U}_{\uparrow}^{-1}$ is eventually progressing.

However, we have not yet proven that $\mathcal{U}_{\downarrow}$ is also eventually progressing. We proceed by analyzing $\tau$-sequences starting from an arbitrary $A \in \mathcal{A}$:

$A = A_0 \xrightarrow{\tau} A_1 \xrightarrow{\tau} A_2 \xrightarrow{\tau} \cdots.$

Since we know that $\mathcal{U}_{\downarrow}$ is strict, we have that

$A_i \xrightarrow{\tau} A_{i+1}$ implies $\mathcal{U}_{\downarrow}\llbracket A_i \rrbracket \xrightarrow{\hat{\tau}} \mathcal{U}_{\downarrow}\llbracket A_{i+1} \rrbracket$,

so we only have to argue that there is an upper bound $k_A$ on the number of steps whose $\tau$-step may be simulated trivially, such that for $n > k_A$:

$A = A_0 \xrightarrow{\tau} A_1 \xrightarrow{\tau} \cdots \xrightarrow{\tau} A_n$ implies $\mathcal{U}_{\downarrow}\llbracket A \rrbracket \xrightarrow{\tau}{}^{+} \mathcal{U}_{\downarrow}\llbracket A_n \rrbracket$.

For arbitrary steps $A_i \xrightarrow{\tau} A_{i+1}$, we may distinguish four different cases, according to which combination of rules has been applied in the inference. We omit merely structural rules and mention only the essential rules. The following analysis resembles the proof of Lemma 5.4.2. The difference is that, here, we are not interested in visible steps; instead, we take a closer look at the simulations of $\tau$-steps.

Case COM/CLOSE/C-INP: Here, no choice operator is involved. The transition is caused by a subterm which may be regarded as a target term. The decoding $\mathcal{U}_{\downarrow}\llbracket\cdot\rrbracket$ reproduces this part homomorphically at the source level, wherefore we have

$\mathcal{U}_{\downarrow}\llbracket A_i \rrbracket \xrightarrow{\tau} \mathcal{U}_{\downarrow}\llbracket A_{i+1} \rrbracket$.

36 NESTMANN AND PIERCE
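The bound argument for eventual progress can be illustrated on a toy representation: a READ step changes only the annotation, so the resetting decoding of the whole term is unchanged (a trivially simulating empty sequence), and the number of consecutive READ steps is limited by the messages available; any longer $\tau$-sequence must contain a step, such as COMMIT, that changes the decoded term. A self-contained sketch (illustrative assumptions throughout):

```python
# State: a set of free messages (k, z), the annotation v of a single
# choice, a committed flag, and the branch list (toy encoding).

def decode_down(free_msgs, v, committed, branches):
    """Flattened resetting decoding: free and consumed messages both
    reappear; the bare choice survives unless committed."""
    msgs = frozenset(free_msgs) | frozenset(v.items())
    return (msgs, None if committed else ("sum", branches))

def read(free_msgs, v, msg):
    """READ: move one free message (k, z) into the annotation."""
    k, z = msg
    assert msg in free_msgs and k not in v
    v2 = dict(v); v2[k] = z
    return free_msgs - {msg}, v2
```

Each READ leaves the decoding fixed, and after `len(free_msgs)` reads no further READ is possible; committing then changes the decoded term, so trivial simulations cannot go on forever.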

72Wednesday, October 14, 2009

Page 161: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Results (I)

Lemma 5.4.2. U! is a weak simulation.

Proof. The proof is by structural induction on A #A and transition inductionon U> !A"!S$. By the simplification discussed in Section 5.1, it suffices to regardthe case

A=\ :j # J

Rj+v

B,

where, by the operational rules in Table 5, there are three subcases.

Case (committed). B{<: Then, U! !A"=>j # V yjvj .Since we know by Lemma 5.2.2 that (! j # J R j)vB->j #V yjvj , we have that, in

this case, U! !A" is even bisimilar to A.

Case (initial). V=<=B: Then, U! !A"=! j # J U! !Rj", and there is only onetype of transitions possible. Note that we do not have to directly simulate inputtransitions for simulation in the asynchronous ?-calculus, but we have to careabout the simulation in all contexts, including matching messages. Therefore:

Case READ"OUT"COM. For k # J"(V_B), we have

yk z } \ :j # J

Rj+v

Bw!{ \ :

j # J

Rj+v+(k[ z)

Band

U! ! yk z } \ :j # J

Rj+v

B "=yz z } :j # J Rj } `j #V y j v j#U! ! \ :j # J

Rj+v+(k[ z)

B "immediately concludes the proof by an empty weakly simulating sequence.

Case (partial). B=<{V: Let k=take(v). There are two subcases:

Case READ"OUT"COM. (completely analogous to previous case).

Case COMMIT. For k #V, we have

\ :j # J

Rj+v

<w!{ Pk_k } \ :

j # J

Rj +v&k

[k]

and via C-INP"OUT"PAR"COM, there is the simulating step

U! !\ :j # J

Rj+v

<" = :j # J

Rj } `j # V yj v j

w!{ Pk[vk"z] } `k{ j #V

yj vj

= U! !Pk_k } \ :j # J

Rj+v&k

[k] "which concludes the proof. K

34 NESTMANN AND PIERCE

Next, we show that U> !A" is simulated by A (i.e., ApU> !A" ) for all A #A.

Lemma 5.4.3. U&1> is a weak simulation.

Proof. The proof is by structural induction on A #A and transition inductionon U> !A"!S$. By the simplification discussed in Section 5.1, it suffices to regardthe case

A=\ :j # J

Rj+v

B

where, by definition of U> , there are three subcases.

Case (initial). V=<=B: Then U> !A"=!j # J yj (x) .U> !Pj" where, accordingto the rule in Table 3, there is only one subcase for generating transitions: C-INP.For k # J, we have

:j # J

yj (x) .U> !Pj" w!ykz U> !Pk"[ z"x]

and there is always a weakly simulating sequence by READ and COMMIT

A=\ :j # J

Rj+<

<w!ykz \ :

j # J

Rj+(k[ z)

<w!{ Pk_k } \ :

j # J

Rj+<

[k]=: A$

where U> !A$"=U> !Pk"[ z"x] holds.

Case (committed). B{<: Then, U> !A"=>j # V yj vj .

Case OUT. For k #V, the transitions

U> !A" ww!y! vk `j #V&k

yj vj=: Sk

can be simulated by

Aw!{ \ :j # J

R j+v&k

B+k } yk vk ww!y! vk \ :j # J

Rj+v&k

B+k=: Ak

such that Sk=U> !Ak".

Case (partial). B=<{V. Let k=take(v). Then

U> !A"= `j #V&k

yk vj |U> !Pk" _k=: S$.

By Aw!{ (!j # J Rj)v&k[k] | Pk_k=: A$ such that S$=U> !A$" we can always take an

internal step in order to fully commit A. Note that A$ is fully committed since, asa guarded subterm, Pk is fully committed by definition of A. The observation thatS$=U> !A$" already concludes the proof since we may reduce it to the case of fullterms. K

35DECODING CHOICE ENCODINGS

In addition, the weak simulations U! and U&1> satisfy further useful properties

(cf. Section 5.8).

Lemma 5.4.4. 1. U! is strict; U&1> is progressing.

2. Both U! and U&1> are eventually progressing.

Proof. Both the strictness of U! and the progressiveness of U&1> follow from pre-

vious proofs: For U! , Proof 5.4.2 shows that some {-steps of A may be simulatedtrivially by U! !A":

Aw!{ A$ implies U! !A" w!{

U! !A$".

For U&1> , Proof 5.4.3 shows that no {-step of U> !A" may be suppressed by A:

U> !A" w!{

U> !A$" implies A O{ A$.

Immediately, we get that U&1> is eventually progressing.

However, we have not yet proven that also U! is eventually progressing. Weproceed by analyzing {-sequences starting from an arbitrary A #A.

A=A0 w!{ A1 w!

{ A2 w!{ } } } .

Since we know that U! is strict, we have that

Ai w!{ A i+1 implies U! !Ai" w!

{U! !Ai+1",

so we only have to argue that there is an upper bound kA for the number of stepswhere the {-step may be simulated trivially such that for n>kA :

A=A0 w!{ A1 w!

{ } } } w!{ An implies U! !A" w!{ + U! !An".

For arbitrary steps

Ai w!{ A i+1

we may distinguish four different cases, according to which combination of ruleshave been applied in the inference. We omit merely structural rules and mentiononly the essential rules. The following analysis resembles the proof of Lemma 5.4.2.The difference is that, here, we are not interested in visible steps; instead, we takea closer look at the simulations of {-steps.

Case COM!CLOSE! V -INP. Here, no choice operator is involved. The trans-ition is caused by a subterm which may be regarded as a target term. The decodingU! ! " reproduces this part homomorphically at the source level, wherefore we have

U! !Ai" w!{

U! !Ai+1".

36 NESTMANN AND PIERCE

In addition, the weak simulations U! and U&1> satisfy further useful properties

(cf. Section 5.8).

Lemma 5.4.4. 1. U! is strict; U&1> is progressing.

2. Both U! and U&1> are eventually progressing.

Proof. Both the strictness of U! and the progressiveness of U&1> follow from pre-

vious proofs: For U! , Proof 5.4.2 shows that some {-steps of A may be simulatedtrivially by U! !A":

Aw!{ A$ implies U! !A" w!{

U! !A$".

For U&1> , Proof 5.4.3 shows that no {-step of U> !A" may be suppressed by A:

U> !A" w!{

U> !A$" implies A O{ A$.

Immediately, we get that U&1> is eventually progressing.

However, we have not yet proven that also U! is eventually progressing. Weproceed by analyzing {-sequences starting from an arbitrary A #A.

A = A₀ ⟶τ A₁ ⟶τ A₂ ⟶τ ⋯ .

Since we know that U⇐ is strict, we have that

Aᵢ ⟶τ Aᵢ₊₁ implies U⇐⟦Aᵢ⟧ ⟹ U⇐⟦Aᵢ₊₁⟧,

so we only have to argue that there is an upper bound k_A on the number of steps where the τ-step may be simulated trivially, such that for n > k_A:

A = A₀ ⟶τ A₁ ⟶τ ⋯ ⟶τ Aₙ implies U⇐⟦A⟧ ⟶τ⁺ U⇐⟦Aₙ⟧.

For arbitrary steps

Aᵢ ⟶τ Aᵢ₊₁

we may distinguish four different cases, according to which combination of rules has been applied in the inference. We omit merely structural rules and mention only the essential rules. The following analysis resembles the proof of Lemma 5.4.2. The difference is that, here, we are not interested in visible steps; instead, we take a closer look at the simulations of τ-steps.

Case COM/CLOSE/V-INP. Here, no choice operator is involved. The transition is caused by a subterm which may be regarded as a target term. The decoding U⇐⟦·⟧ reproduces this part homomorphically at the source level, wherefore we have

U! !Ai" w!{

U! !Ai+1".

36 NESTMANN AND PIERCE

72 Wednesday, October 14, 2009
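The bound k_A argued for above can be made concrete on a toy model. The sketch below is illustrative only (the state space and the decoding are invented for this example, not the paper's definitions): a term is a tuple of independent choice cells, each of which can take one bookkeeping τ-step (init → partial) that a resetting decoding simulates trivially, and one committing τ-step (partial → committed) that the decoding must reproduce. Along any maximal τ-sequence, at most one trivial step per cell occurs, so the number of trivially simulated steps is bounded by the number of cells.

```python
# Toy states of one choice cell; "partial" is encoding bookkeeping.
STEPS = {"init": "partial", "partial": "committed"}  # tau-step per cell state

def decode_reset(term):
    # Resetting decoding in the style of U<=: partial commitments
    # are undone, committed cells are kept.
    return tuple("init" if c == "partial" else c for c in term)

def runs(term):
    """Enumerate all maximal tau-sequences from `term`."""
    succs = [(i, STEPS[c]) for i, c in enumerate(term) if c in STEPS]
    if not succs:
        yield [term]
        return
    for i, nxt in succs:
        t2 = term[:i] + (nxt,) + term[i + 1:]
        for rest in runs(t2):
            yield [term] + rest

n = 3
start = ("init",) * n
for run in runs(start):
    trivial = sum(
        1 for a, b in zip(run, run[1:]) if decode_reset(a) == decode_reset(b)
    )
    # Upper bound k_A: at most one trivially simulated step per cell.
    assert trivial <= n
print("bound holds on all", sum(1 for _ in runs(start)), "runs")
```

Running the enumeration for three cells checks the bound over every interleaving of the six cell steps.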

Page 162: Encodings into Asynchronous Pi Calculus

Results (II)

Since U⇐ relates each A ∈ 𝒜 to U⇐⟦A⟧, the first coupling property requires the existence of a transition U⇐⟦A⟧ ⟹ U⇒⟦A⟧ (in 𝒮, thus called S-coupling).

Lemma 5.4.6 (S-coupling). For all A ∈ 𝒜, U⇐⟦A⟧ ⟹ U⇒⟦A⟧.

Proof. The proof is by structural induction on A ∈ 𝒜. By the simplification discussed in Section 5.1, it suffices to regard the case

A = (Σ_{j∈J} Rⱼ)_V[B],

where, by definition of U⇒⟦·⟧, there are three subcases.

Case (initial) V = ∅ = B, or (committed) B ≠ ∅: Immediate by Fact 5.4.5.

Case (partial) B = ∅ ≠ V: Let k = take(V). Then,

U! !A" = `j # V

yj v } :j #U U! !Rj"

w!{ `j #V&k

yj vj } U! !Pk" _k= `j # V&k

yj vj } U> !Pk" _k=U> !A".

where U! !Pk" _k=U>!Pk" _k since Pk_k is fully committed.There may be several occurrences of partially committed choices in a term A,

but, by definition, they only occur unguarded. We may simply collect the corresponding internal steps in either order, which leads to A ⟹ A′. ∎
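The computation in the partial case can be replayed on a small data representation. In the sketch below every concrete name is an illustrative assumption, not the paper's syntax: take is modeled as min, pending outputs as `("out", j)` tags, and the continuations P_j as placeholder strings. It only demonstrates the shape of the argument: one internal step, consuming the pending output of the branch k = take(V), turns the resetting decoding of a partially committed choice into its committing decoding.

```python
def take(V):
    # Deterministic pick of a partially committed branch (assumption: min).
    return min(V)

def decode_reset(branches, V):
    """Resetting decoding (U<= style) of a partially committed choice:
    pending outputs for all j in V, next to the *full* choice."""
    outs = frozenset(("out", j) for j in V)
    return (outs, ("choice", tuple(sorted(branches))))

def decode_commit(branches, V):
    """Committing decoding (U=> style): branch k = take(V) is chosen."""
    k = take(V)
    outs = frozenset(("out", j) for j in V - {k})
    return (outs, ("proc", dict(branches)[k]))

def tau_commit(state, branches):
    """One internal step: the choice consumes the pending output of the
    branch picked by take (min over the tags, matching take above)."""
    outs, _ = state
    k = min(j for (_, j) in outs)
    return (outs - {("out", k)}, ("proc", dict(branches)[k]))

branches = (("a", "P_a"), ("b", "P_b"), ("c", "P_c"))
V = {"a", "b"}  # branches a and b are partially committed
assert tau_commit(decode_reset(branches, V), branches) == decode_commit(branches, V)
print("S-coupling step verified")
```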

Since ApU>!A" for all A #A, the second coupling property addresses U> -relatedterms. In this case, it is not as simple as for the S-coupling to denote what couplingmeans, so we explain it a bit more carefully: For all A #A, whenever (A, S) #U> ,i.e., S=U> !A", there is an internal sequence AOA$ (in A, thus called A-coupling),such that (A$, S) #U! , i.e., S=U! !A$". If we link the two equations for S, we getthe coupling requirement U>!A"=U! !A$" for AOA$. In the diagram


we also indicate the way we proceed in order to do the proof. The relation on the left is the assumption, because U⇒⁻¹ is a simulation. The two relations on the right hold if A′ is fully committed. The following lemma states that such an A′ always exists as a derivative of A and, furthermore, connects the left- and right-hand sides of the diagram.

Lemma 5.4.7. For all A ∈ 𝒜, there is a fully committing A ⟹ A′ such that U⇒⟦A⟧ = U⇒⟦A′⟧.

Proof. The proof is by structural induction on A ∈ 𝒜. By the simplification discussed in Section 5.1, it suffices to regard the case

A = (Σ_{j∈J} Rⱼ)_V[B],

where, by definition of U⇒⟦·⟧, there are three subcases.

Case (initial) V = ∅ = B, or (committed) B ≠ ∅: Immediate with A′ =def A.

Case (partial) B = ∅ ≠ V: Let k = take(V). Then, with A ⟶τ (Σ_{j∈J} Rⱼ)_{V∖k}[k] | Pₖσₖ =def A′, we have U⇒⟦A⟧ = Π_{j∈V∖k} ȳⱼvⱼ | U⇒⟦Pₖ⟧σₖ = U⇒⟦A′⟧.

Note that Pₖ is fully committed since it was guarded in A and not changed by the transition; thus, A′ is fully committed, since branch k has been successfully chosen. ∎

Lemma 5.4.8 (A-coupling). For all A ∈ 𝒜, there is A ⟹ A′ such that U⇒⟦A⟧ = U⇐⟦A′⟧.

Proof. Let A′ be constructed as in the proof of Lemma 5.4.7. Thus, we know that U⇒⟦A⟧ = U⇒⟦A′⟧ and, since A′ is fully committed, we also know (Fact 5.4.5) that U⇒⟦A′⟧ = U⇐⟦A′⟧. ∎

Finally, the main property of the decoding functions U⇐⟦·⟧ and U⇒⟦·⟧ is:

Proposition 5.4.9. (U⇐, U⇒) is a coupled simulation.

Proof. By Lemmas 5.4.2 and 5.4.3, we know that (U⇐, U⇒) is a mutual simulation. By Lemmas 5.4.6 and 5.4.8, we have the necessary coupling between U⇐ and U⇒. ∎

Corollary 5.4.10. (U⇒⁻¹, U⇐⁻¹) is a coupled simulation.

Proof. By Fact 2.4.2. ∎
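Corollary 5.4.10 rests on the closure of coupled simulations under componentwise inversion and swapping (Fact 2.4.2). The sketch below checks one common formulation of the coupled-simulation conditions on a tiny LTS with internal steps only; both the formulation and the example LTS are assumptions for illustration, not Definition 2.4.1 verbatim.

```python
# Tiny LTS with internal steps only: a tau-successor map.
TAU = {"p0": {"p1"}, "p1": set(), "q0": {"q1"}, "q1": set()}

def weak(s):
    """States reachable from s by zero or more tau-steps."""
    seen, todo = {s}, [s]
    while todo:
        for t in TAU[todo.pop()]:
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def inv(R):
    return {(b, a) for (a, b) in R}

def is_sim(R):
    """Weak simulation: each left step is matched by a weak right step."""
    return all(any((a2, b2) in R for b2 in weak(b))
               for (a, b) in R for a2 in TAU[a])

def is_coupled_sim(S1, S2):
    """One common formulation (assumption): S1 and S2^-1 are weak
    simulations, plus the two coupling clauses linking S1 and S2."""
    mutual = is_sim(S1) and is_sim(inv(S2))
    out_coupling = all(any((p, q2) in S2 for q2 in weak(q)) for (p, q) in S1)
    in_coupling = all(any((p2, q) in S1 for p2 in weak(p)) for (p, q) in S2)
    return mutual and out_coupling and in_coupling

S1 = {("p0", "q0"), ("p1", "q1")}
S2 = {("p0", "q0"), ("p1", "q1")}
assert is_coupled_sim(S1, S2)
# Fact 2.4.2-style closure: inverting and swapping preserves the property.
assert is_coupled_sim(inv(S2), inv(S1))
```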

Before ending this section, we show how the decoding functions may be used to provide a notably sharp operational correctness argument for the A-encoding. Let ⟹a₁⋯aₙ denote the sequence ⟹a₁ ⋯ ⟹aₙ, where the aᵢ are output labels. Then, for arbitrary S ∈ 𝒮 and A⟦S⟧ ⟹a₁⋯aₙ A, we have the following relations:

39 DECODING CHOICE ENCODINGS


Page 165: Encodings into Asynchronous Pi Calculus

Results (III)

Lemma 5.5.6. B is progressing; B⁻¹ is eventually progressing.

Proof. Directly from the operational correspondence Lemma 5.5.1. The eventually progressing property for B⁻¹ comes with an upper bound for trivially simulating τ-steps, as determined by the (finite) number of "active" test-expressions, multiplied by 5 as the worst case that all of the active test-expressions have just done the first step. ∎

5.6. Main Result

In this section, we establish a coupled simulation (cf. Definition 2.4.1) between source terms and their C⟦·⟧-translations by exploiting the results for the A-encoding via the decodings U⇐⟦·⟧ and U⇒⟦·⟧. Reasoning about the annotated versions of choice allowed us to use their high-level structure for the decoding functions. We argued that we could safely concentrate on the annotated language 𝒜, since

F! " flattens abbreviation terms correctly (up to t ) into terms of Ptest, whereasB! " expands test-expressions correctly (up to ! ) into terms of P. In order tocombine those ideas, let the simulations C (completeness) and S&1 (soundness) bedefined by

C =def U⇒⁻¹ F B and S =def U⇐⁻¹ F B

according to the diagram

where the relations U⇐ and U⇒ are only defined on the subset 𝒜 of P. The results for annotated terms carry over smoothly to the expanded versions.

Theorem 5.6.1. (C, S) is a coupled simulation.

Proof. By Corollary 5.4.10, Proposition 5.3.4, and Lemma 2.5.3 twice. ∎
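The definitions of C and S used in this proof are relational compositions, and Lemma 2.5.3 is applied to compose the component relations. Schematically, with relations represented as sets of pairs (all element names below are placeholders for terms, not the paper's notation):

```python
def compose(R, Q):
    """Relational composition: (a, c) whenever a R b and b Q c."""
    return {(a, c) for (a, b1) in R for (b2, c) in Q if b1 == b2}

def inv(R):
    return {(b, a) for (a, b) in R}

# Illustrative graphs of the component maps between an annotated term,
# its decoding, its flattened form, and its expanded target term.
U = {("A", "S")}    # (annotated term, its committing decoding)
F = {("A", "Af")}   # flattening into a P_test term
B = {("Af", "T")}   # expansion into a plain target term

# C = U^-1 ; F ; B relates the source term to the target term.
C = compose(compose(inv(U), F), B)
assert C == {("S", "T")}
```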

Observe that C is constructed from the committing decoding U⇒⟦·⟧, so derivatives of target terms are at most as committed as their C-related source terms. Analogously, S is constructed from the resetting decoding U⇐⟦·⟧, so derivatives of target terms are at least as committed as their S-related source terms.

By construction, the relations C and S are big enough to contain all source and target terms and, in particular, to relate all source terms and their C⟦·⟧-translations.

Lemma 5.6.2. For all S ∈ 𝒮: (S, C⟦S⟧) ∈ C ∩ S.

42 NESTMANN AND PIERCE

74Wednesday, October 14, 2009

Page 166: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Results (III)

Lemma 5.5.6. B! is progressing; (B! )&1 is eventually progressing.

Proof. Directly from the operational correspondence Lemma 5.5.1. The even-tually progressing property for (B! )&1 comes with an upper bound for triviallysimulating {-steps as determined by the (finite) number of ``active'' test-expressions,multiplied with 5 as the worst case that all of the active test-expressions have justdone the first step. K

5.6. Main Result

In this section, we establish a coupled simulation (cf. Definition 2.4.1) betweensource terms and their C! -translations by exploiting the results for the A-encodingvia the decodings U! ! " and U> ! ". Reasoning about the annotated versions ofchoice allowed us to use their high-level structure for the decoding functions.We argued that we could safely concentrate on the annotated language A, since

F! " flattens abbreviation terms correctly (up to t ) into terms of Ptest, whereasB! " expands test-expressions correctly (up to ! ) into terms of P. In order tocombine those ideas, let the simulations C (completeness) and S&1 (soundness) bedefined by

C=def U&1> FB! and S =def U&1

! FB!

according to the diagram

where the relations U! and U> are only defined on the subset A of P(7). The resultsfor annotated terms carry over smoothly to the expanded versions.

Theorem 5.6.1. (C, S) is a coupled simulation.

Proof. By Corollary 5.4.10, Proposition 5.3.4, and Lemma 2.5.3 twice. K

Observe that C is constructed from the committing decoding U> ! ", soderivatives of target terms are at most as committed as their C-related source terms.Analogously, S is constructed from the resetting decoding U! ! ", so derivatives oftarget terms are at least as committed as their S-related source terms.By construction, the relations C and S are big enough to contain all source and

target terms and, in particular, to relate all source terms and their C! -translations.

Lemma 5.6.2. For all S #S : (S, C! !S" ) # C&S.

42 NESTMANN AND PIERCE

Lemma 5.5.6. B! is progressing; (B! )&1 is eventually progressing.

Proof. Directly from the operational correspondence Lemma 5.5.1. The even-tually progressing property for (B! )&1 comes with an upper bound for triviallysimulating {-steps as determined by the (finite) number of ``active'' test-expressions,multiplied with 5 as the worst case that all of the active test-expressions have justdone the first step. K

5.6. Main Result

In this section, we establish a coupled simulation (cf. Definition 2.4.1) betweensource terms and their C! -translations by exploiting the results for the A-encodingvia the decodings U! ! " and U> ! ". Reasoning about the annotated versions ofchoice allowed us to use their high-level structure for the decoding functions.We argued that we could safely concentrate on the annotated language A, since

F! " flattens abbreviation terms correctly (up to t ) into terms of Ptest, whereasB! " expands test-expressions correctly (up to ! ) into terms of P. In order tocombine those ideas, let the simulations C (completeness) and S&1 (soundness) bedefined by

C=def U&1> FB! and S =def U&1

! FB!

according to the diagram

where the relations U! and U> are only defined on the subset A of P(7). The resultsfor annotated terms carry over smoothly to the expanded versions.

Theorem 5.6.1. (C, S) is a coupled simulation.

Proof. By Corollary 5.4.10, Proposition 5.3.4, and Lemma 2.5.3 twice. K

Observe that C is constructed from the committing decoding U> ! ", soderivatives of target terms are at most as committed as their C-related source terms.Analogously, S is constructed from the resetting decoding U! ! ", so derivatives oftarget terms are at least as committed as their S-related source terms.By construction, the relations C and S are big enough to contain all source and

target terms and, in particular, to relate all source terms and their C! -translations.

Lemma 5.6.2. For all S #S : (S, C! !S" ) # C&S.

42 NESTMANN AND PIERCE

Lemma 5.5.6. B! is progressing; (B! )&1 is eventually progressing.

Proof. Directly from the operational correspondence Lemma 5.5.1. The even-tually progressing property for (B! )&1 comes with an upper bound for triviallysimulating {-steps as determined by the (finite) number of ``active'' test-expressions,multiplied with 5 as the worst case that all of the active test-expressions have justdone the first step. K

5.6. Main Result

In this section, we establish a coupled simulation (cf. Definition 2.4.1) betweensource terms and their C! -translations by exploiting the results for the A-encodingvia the decodings U! ! " and U> ! ". Reasoning about the annotated versions ofchoice allowed us to use their high-level structure for the decoding functions.We argued that we could safely concentrate on the annotated language A, since

F! " flattens abbreviation terms correctly (up to t ) into terms of Ptest, whereasB! " expands test-expressions correctly (up to ! ) into terms of P. In order tocombine those ideas, let the simulations C (completeness) and S&1 (soundness) bedefined by

C=def U&1> FB! and S =def U&1

! FB!

according to the diagram

where the relations U! and U> are only defined on the subset A of P(7). The resultsfor annotated terms carry over smoothly to the expanded versions.

Theorem 5.6.1. (C, S) is a coupled simulation.

Proof. By Corollary 5.4.10, Proposition 5.3.4, and Lemma 2.5.3 twice. K

Observe that C is constructed from the committing decoding U> ! ", soderivatives of target terms are at most as committed as their C-related source terms.Analogously, S is constructed from the resetting decoding U! ! ", so derivatives oftarget terms are at least as committed as their S-related source terms.By construction, the relations C and S are big enough to contain all source and

target terms and, in particular, to relate all source terms and their C! -translations.

Lemma 5.6.2. For all S #S : (S, C! !S" ) # C&S.

42 NESTMANN AND PIERCE

74Wednesday, October 14, 2009

Page 167: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Results (III)

Lemma 5.5.6. B! is progressing; (B! )&1 is eventually progressing.

Proof. Directly from the operational correspondence Lemma 5.5.1. The even-tually progressing property for (B! )&1 comes with an upper bound for triviallysimulating {-steps as determined by the (finite) number of ``active'' test-expressions,multiplied with 5 as the worst case that all of the active test-expressions have justdone the first step. K

5.6. Main Result

In this section, we establish a coupled simulation (cf. Definition 2.4.1) betweensource terms and their C! -translations by exploiting the results for the A-encodingvia the decodings U! ! " and U> ! ". Reasoning about the annotated versions ofchoice allowed us to use their high-level structure for the decoding functions.We argued that we could safely concentrate on the annotated language A, since

F! " flattens abbreviation terms correctly (up to t ) into terms of Ptest, whereasB! " expands test-expressions correctly (up to ! ) into terms of P. In order tocombine those ideas, let the simulations C (completeness) and S&1 (soundness) bedefined by

C=def U&1> FB! and S =def U&1

! FB!

according to the diagram

where the relations U! and U> are only defined on the subset A of P(7). The resultsfor annotated terms carry over smoothly to the expanded versions.

Theorem 5.6.1. (C, S) is a coupled simulation.

Proof. By Corollary 5.4.10, Proposition 5.3.4, and Lemma 2.5.3 twice. K

Observe that C is constructed from the committing decoding U> ! ", soderivatives of target terms are at most as committed as their C-related source terms.Analogously, S is constructed from the resetting decoding U! ! ", so derivatives oftarget terms are at least as committed as their S-related source terms.By construction, the relations C and S are big enough to contain all source and

target terms and, in particular, to relate all source terms and their C! -translations.

Lemma 5.6.2. For all S #S : (S, C! !S" ) # C&S.

42 NESTMANN AND PIERCE

Lemma 5.5.6. B! is progressing; (B! )&1 is eventually progressing.

Proof. Directly from the operational correspondence Lemma 5.5.1. The even-tually progressing property for (B! )&1 comes with an upper bound for triviallysimulating {-steps as determined by the (finite) number of ``active'' test-expressions,multiplied with 5 as the worst case that all of the active test-expressions have justdone the first step. K

5.6. Main Result

In this section, we establish a coupled simulation (cf. Definition 2.4.1) betweensource terms and their C! -translations by exploiting the results for the A-encodingvia the decodings U! ! " and U> ! ". Reasoning about the annotated versions ofchoice allowed us to use their high-level structure for the decoding functions.We argued that we could safely concentrate on the annotated language A, since

F! " flattens abbreviation terms correctly (up to t ) into terms of Ptest, whereasB! " expands test-expressions correctly (up to ! ) into terms of P. In order tocombine those ideas, let the simulations C (completeness) and S&1 (soundness) bedefined by

C=def U&1> FB! and S =def U&1

! FB!

according to the diagram

where the relations U! and U> are only defined on the subset A of P(7). The resultsfor annotated terms carry over smoothly to the expanded versions.

Theorem 5.6.1. (C, S) is a coupled simulation.

Proof. By Corollary 5.4.10, Proposition 5.3.4, and Lemma 2.5.3 twice. K

Observe that C is constructed from the committing decoding U> ! ", soderivatives of target terms are at most as committed as their C-related source terms.Analogously, S is constructed from the resetting decoding U! ! ", so derivatives oftarget terms are at least as committed as their S-related source terms.By construction, the relations C and S are big enough to contain all source and

target terms and, in particular, to relate all source terms and their C! -translations.

Lemma 5.6.2. For all S #S : (S, C! !S" ) # C&S.

42 NESTMANN AND PIERCE

Lemma 5.5.6. B! is progressing; (B! )&1 is eventually progressing.

Proof. Directly from the operational correspondence Lemma 5.5.1. The even-tually progressing property for (B! )&1 comes with an upper bound for triviallysimulating {-steps as determined by the (finite) number of ``active'' test-expressions,multiplied with 5 as the worst case that all of the active test-expressions have justdone the first step. K

5.6. Main Result

In this section, we establish a coupled simulation (cf. Definition 2.4.1) betweensource terms and their C! -translations by exploiting the results for the A-encodingvia the decodings U! ! " and U> ! ". Reasoning about the annotated versions ofchoice allowed us to use their high-level structure for the decoding functions.We argued that we could safely concentrate on the annotated language A, since

F! " flattens abbreviation terms correctly (up to t ) into terms of Ptest, whereasB! " expands test-expressions correctly (up to ! ) into terms of P. In order tocombine those ideas, let the simulations C (completeness) and S&1 (soundness) bedefined by

C=def U&1> FB! and S =def U&1

! FB!

according to the diagram

where the relations U! and U> are only defined on the subset A of P(7). The resultsfor annotated terms carry over smoothly to the expanded versions.

Theorem 5.6.1. (C, S) is a coupled simulation.

Proof. By Corollary 5.4.10, Proposition 5.3.4, and Lemma 2.5.3 twice. K

Observe that C is constructed from the committing decoding U> ! ", soderivatives of target terms are at most as committed as their C-related source terms.Analogously, S is constructed from the resetting decoding U! ! ", so derivatives oftarget terms are at least as committed as their S-related source terms.By construction, the relations C and S are big enough to contain all source and

target terms and, in particular, to relate all source terms and their C! -translations.

Lemma 5.6.2. For all S #S : (S, C! !S" ) # C&S.

42 NESTMANN AND PIERCE

Lemma 5.5.6. B! is progressing; (B! )&1 is eventually progressing.

Proof. Directly from the operational correspondence Lemma 5.5.1. The even-tually progressing property for (B! )&1 comes with an upper bound for triviallysimulating {-steps as determined by the (finite) number of ``active'' test-expressions,multiplied with 5 as the worst case that all of the active test-expressions have justdone the first step. K

5.6. Main Result

In this section, we establish a coupled simulation (cf. Definition 2.4.1) betweensource terms and their C! -translations by exploiting the results for the A-encodingvia the decodings U! ! " and U> ! ". Reasoning about the annotated versions ofchoice allowed us to use their high-level structure for the decoding functions.We argued that we could safely concentrate on the annotated language A, since

F! " flattens abbreviation terms correctly (up to t ) into terms of Ptest, whereasB! " expands test-expressions correctly (up to ! ) into terms of P. In order tocombine those ideas, let the simulations C (completeness) and S&1 (soundness) bedefined by

C=def U&1> FB! and S =def U&1

! FB!

according to the diagram

where the relations U! and U> are only defined on the subset A of P(7). The resultsfor annotated terms carry over smoothly to the expanded versions.

Theorem 5.6.1. (C, S) is a coupled simulation.

Proof. By Corollary 5.4.10, Proposition 5.3.4, and Lemma 2.5.3 twice. K

Observe that C is constructed from the committing decoding U> ! ", soderivatives of target terms are at most as committed as their C-related source terms.Analogously, S is constructed from the resetting decoding U! ! ", so derivatives oftarget terms are at least as committed as their S-related source terms.By construction, the relations C and S are big enough to contain all source and

target terms and, in particular, to relate all source terms and their C! -translations.

Lemma 5.6.2. For all S #S : (S, C! !S" ) # C&S.

42 NESTMANN AND PIERCE

74Wednesday, October 14, 2009

Page 168: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Results (III)

Lemma 5.5.6. B! is progressing; (B! )&1 is eventually progressing.

Proof. Directly from the operational correspondence Lemma 5.5.1. The even-tually progressing property for (B! )&1 comes with an upper bound for triviallysimulating {-steps as determined by the (finite) number of ``active'' test-expressions,multiplied with 5 as the worst case that all of the active test-expressions have justdone the first step. K

5.6. Main Result

In this section, we establish a coupled simulation (cf. Definition 2.4.1) betweensource terms and their C! -translations by exploiting the results for the A-encodingvia the decodings U! ! " and U> ! ". Reasoning about the annotated versions ofchoice allowed us to use their high-level structure for the decoding functions.We argued that we could safely concentrate on the annotated language A, since

F! " flattens abbreviation terms correctly (up to t ) into terms of Ptest, whereasB! " expands test-expressions correctly (up to ! ) into terms of P. In order tocombine those ideas, let the simulations C (completeness) and S&1 (soundness) bedefined by

C=def U&1> FB! and S =def U&1

! FB!

according to the diagram

where the relations U! and U> are only defined on the subset A of P(7). The resultsfor annotated terms carry over smoothly to the expanded versions.

Theorem 5.6.1. (C, S) is a coupled simulation.

Proof. By Corollary 5.4.10, Proposition 5.3.4, and Lemma 2.5.3 twice. K

Observe that C is constructed from the committing decoding U> ! ", soderivatives of target terms are at most as committed as their C-related source terms.Analogously, S is constructed from the resetting decoding U! ! ", so derivatives oftarget terms are at least as committed as their S-related source terms.By construction, the relations C and S are big enough to contain all source and

target terms and, in particular, to relate all source terms and their C! -translations.

Lemma 5.6.2. For all S #S : (S, C! !S" ) # C&S.

42 NESTMANN AND PIERCE

Lemma 5.5.6. B! is progressing; (B! )&1 is eventually progressing.

Proof. Directly from the operational correspondence Lemma 5.5.1. The even-tually progressing property for (B! )&1 comes with an upper bound for triviallysimulating {-steps as determined by the (finite) number of ``active'' test-expressions,multiplied with 5 as the worst case that all of the active test-expressions have justdone the first step. K

5.6. Main Result

In this section, we establish a coupled simulation (cf. Definition 2.4.1) between source terms and their C!⟦·⟧-translations by exploiting the results for the A-encoding via the decodings U!⟦·⟧ and U>⟦·⟧. Reasoning about the annotated versions of choice allowed us to use their high-level structure for the decoding functions. We argued that we could safely concentrate on the annotated language A, since F⟦·⟧ flattens abbreviation terms correctly (up to strong bisimilarity) into terms of Ptest, whereas B⟦·⟧ expands test-expressions correctly (up to weak bisimilarity) into terms of P. In order to combine those ideas, let the simulations C (completeness) and S⁻¹ (soundness) be defined by

    C =def U>⁻¹ F B!    and    S =def U!⁻¹ F B!

according to the decoding diagram (not reproduced in this transcript), where the relations U! and U> are defined only on the annotated sublanguage A. The results for annotated terms carry over smoothly to the expanded versions.
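The definitions of C and S are plain relational compositions. As an illustration only (toy pairs, our names, not the paper's formal machinery), relations can be modeled as sets of pairs, with inversion and composition defined in the usual way:

```python
# Minimal relational algebra: a relation is a set of (a, b) pairs.
def inverse(rel):
    return {(b, a) for (a, b) in rel}

def compose(r1, r2):
    """(a, c) is in the result iff (a, b) in r1 and (b, c) in r2 for some b."""
    return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}

# Toy stand-ins (hypothetical single-pair relations, for shape only):
U_gt = {("ann", "src")}     # committing decoding: annotated term -> source term
F    = {("ann", "test")}    # flattening into Ptest
B    = {("test", "target")} # expansion into P

# C = U>^-1 F B! relates source terms to target terms:
C = compose(compose(inverse(U_gt), F), B)
# C == {("src", "target")}
```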

Theorem 5.6.1. (C, S) is a coupled simulation.

Proof. By Corollary 5.4.10, Proposition 5.3.4, and Lemma 2.5.3, the latter applied twice. ∎

Observe that C is constructed from the committing decoding U>⟦·⟧, so derivatives of target terms are at most as committed as their C-related source terms. Analogously, S is constructed from the resetting decoding U!⟦·⟧, so derivatives of target terms are at least as committed as their S-related source terms. By construction, the relations C and S are big enough to contain all source and target terms and, in particular, to relate all source terms and their C!⟦·⟧-translations.
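Recall the shape of a coupled simulation (Definition 2.4.1): both components are weak simulations and, in addition, every pair of the one relation can internally evolve into a pair of the other. One direction of this coupling condition can be checked schematically over finite toy data (hypothetical states and steps, ours, not the paper's):

```python
# Schematic check of one coupling direction: every pair in C must reach,
# via internal steps of the target side, a pair in S.
def coupled(C, S, tau_steps):
    """tau_steps maps a term to the set of terms reachable by internal
    steps (reflexively: a term reaches at least itself)."""
    return all(
        any((s, t2) in S for t2 in tau_steps.get(t, {t}))
        for (s, t) in C
    )

# Hypothetical three-state example: t0 --tau--> t1.
C = {("s", "t0")}
S = {("s", "t1")}
steps = {"t0": {"t0", "t1"}, "t1": {"t1"}}
# coupled(C, S, steps) is True: the partially committed t0 evolves to t1
```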

Lemma 5.6.2. For all S ∈ S: (S, C!⟦S⟧) ∈ C ∩ S.


Proof. By the syntactic adequacy lemmas (5.4.1 and 5.3.3), we know that, for all S ∈ S, the translations A⟦S⟧ and C⟦S⟧ = (F ∘ A)⟦S⟧ yield the witnesses for (S, C!⟦S⟧) being contained in both C and S. ∎

Thus, the C!⟦·⟧-encoding is operationally correct, in the sense that every source term is simulated by its translation (completeness via C) and also itself simulates its translation (soundness via S). Moreover, the result is much stronger, since the simulations are coupled.

Theorem 5.6.3 (Correctness of C!⟦·⟧). For all S ∈ S: S ≍ C!⟦S⟧, where ≍ denotes coupled similarity.

Proof. By Theorem 5.6.1 and Lemma 5.6.2. ∎

Corollary 5.6.4 (Full abstraction). For all S1, S2 ∈ S: S1 ≍ S2 iff C!⟦S1⟧ ≍ C!⟦S2⟧.

Proof. By transitivity. ∎

Note that the C!⟦·⟧-encoding is not fully abstract up to weak bisimulation, as proven in Section 3.

5.7. Correctness of the Divergent Protocol

The proof for the divergent choice encoding follows the outline of Sections 5.2 to 5.6. In contrast to the divergence-free encoding, the D!⟦·⟧-encoding is correct up to weak bisimulation. The overall proof is simpler, since the proof obligations for weak bisimulation require less work than those of coupled simulation. We sketch the full proof here by carefully defining just the main ingredient of the factorization and decoding diagram (the diagram itself is not reproduced in this transcript).

The annotated intermediate language A that we use here is similar to the one for the C-encoding. Of course, the annotations and the operational semantics have to be different in the case of the D-encoding, since they are now expected to model different behavior. Also, since we expect source terms and translations to be bisimilar, we only need a single decoding function.

Annotated divergent choice. Since many (replicated) copies of the same branch may be activated at the same time, we enhance the annotation v to map indices to multisets of values. Whether a choice is resolved can only be inferred from the value of the lock. In case it is held by an activated branch, the choice is not (yet) resolved, since this branch may still decide to undo its activity.
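Such an annotation can be illustrated as a map from branch indices to multisets (our toy representation, not the paper's): several replicated copies of the same branch may hold the same value at once, which is exactly what a multiset records.

```python
from collections import Counter

# Annotation v: branch index -> multiset of values consumed by activated
# copies of that branch. A Counter is a convenient multiset.
v = {1: Counter(), 2: Counter()}

def activate(v, k, z):
    """An activated copy of branch k has consumed value z."""
    v[k][z] += 1

def undo(v, k, z):
    """The copy backs off and re-emits z; remove one occurrence."""
    if v[k][z] == 0:
        raise ValueError("no activated copy of branch k holds this value")
    v[k][z] -= 1

activate(v, 1, "z")
activate(v, 1, "z")   # two replicated copies may hold the same value
undo(v, 1, "z")
# v[1]["z"] == 1
```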

43 DECODING CHOICE ENCODINGS


Proposition 5.7.2. F is a strong bisimulation.

Decoding. The decoding function also takes care of "hesitating" branches:

    U⟦⟨Σ_{j∈J} R_j⟩_v^b⟧ =def Π_{j∈V} ȳ_j v_j | S,

where S =def Σ_{j∈J} U⟦R_j⟧ if b = t or b = (k, z), and S =def 0 if b = f.

Proposition 5.7.3. U is a weak bisimulation.
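The case split on the lock value b in this decoding can be sketched as follows (toy term representation and names are ours; "t"/"f" are the free/consumed lock states, and a pair (k, z) marks a branch currently holding the lock):

```python
def decode_choice(branches, v, b, decode):
    """Toy decoding of an annotated choice term.

    branches: dict index -> branch body
    v:        dict index -> list of pending values (a multiset)
    b:        lock value: "t", "f", or a pair (k, z)
    decode:   decoding function for branch bodies
    Returns (messages, summand), the decoded parallel components.
    """
    # Re-emit one message ybar_j<z> per value recorded in the annotation.
    messages = [("ybar", j, z) for j, values in v.items() for z in values]
    if b == "f":                        # lock consumed: choice is resolved
        summand = None                  # stands for the nil process 0
    else:                               # b == "t" or b == (k, z): not resolved
        summand = ("sum", [decode(r) for r in branches.values()])
    return messages, summand
```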

Theorem 5.7.4. U⁻¹F is a weak bisimulation.

Theorem 5.7.5 (Correctness of D). For all S ∈ S: S ≈ D⟦S⟧.

Corollary 5.7.6 (Full abstraction). For all S1, S2 ∈ S: S1 ≈ S2 iff D⟦S1⟧ ≈ D⟦S2⟧.

Every annotated choice term with at least one branch holding some value has τ-loops, i.e., divergent computations. For example,

    ⟨Σ_{j∈J} R_j⟩_v^t  --τ-->  ⟨Σ_{j∈J} R_j⟩_v^{(k,z)}  --τ-->  ⟨Σ_{j∈J} R_j⟩_{v−(k,z)}^t | ȳ_k z  --τ-->  ⟨Σ_{j∈J} R_j⟩_v^t ,

where z ∈ v_k, is infinitely often trying, undoing, reading, .... Finally, if we define D!⟦·⟧ as B⟦·⟧ ∘ D⟦·⟧, then by proving that U⁻¹FB! is a weak bisimulation, we also get that D!⟦·⟧ is correct and fully abstract with respect to weak bisimulation.
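The three-step loop above (try, undo, read back) can be replayed on a toy state representation (our encoding of states, not the paper's syntax); after one round the state returns to the initial one, so the round can repeat forever:

```python
# State: (lock, v) where lock is "t" or a pair (k, z), and v maps a
# branch index to a sorted tuple of values (a simple multiset encoding).
def try_branch(state, k, z):
    lock, v = state
    assert lock == "t" and z in v[k]
    return ((k, z), v)                  # branch k grabs the lock, holding z

def undo(state):
    (k, z), v = state
    v2 = dict(v)
    vals = list(v2[k]); vals.remove(z)  # re-emit the message ybar_k<z>
    v2[k] = tuple(sorted(vals))
    return ("t", v2), (k, z)            # released lock + pending message

def read(state, msg):
    lock, v = state
    k, z = msg
    v2 = dict(v)
    v2[k] = tuple(sorted(v2[k] + (z,)))  # read the message back in
    return (lock, v2)

s0 = ("t", {1: ("z",)})
s1 = try_branch(s0, 1, "z")
s2, msg = undo(s1)
s3 = read(s2, msg)
# s3 == s0: trying, undoing and reading loops back to the start
```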

5.8. Divergence

In Section 4, we have claimed that our two choice encodings differ in their divergence behavior. Our results in the previous subsections do not yet provide any rigorous justification of this, since the definition of weak simulation ignores divergence, as we discussed in Section 2.3. As explained in Section 2.3, an encoding is called divergence-free if it does not add divergence to the behavior of source terms. More technically, every infinite τ-sequence of a derivative T′ of a translation ⟦S⟧ corresponds to some infinite τ-sequence of a derivative S′ of S; i.e., T′ diverges only if S′ diverges. Whereas it is simple to show that the D!⟦·⟧-encoding is not divergence-free by giving an example of a τ-loop (which is possible in almost every D-translation of terms containing choices, and carries over to their B-expansions), as we did in the previous section, our claim that the C-encoding is divergence-free requires a proof (cf. Theorem 5.8.2). We are going to prove divergence-freedom by establishing an eventually

progressing simulation between derivatives of translations and source terms; by this

45 DECODING CHOICE ENCODINGS


simulation, we capture any reachable state of some translation and are guaranteed that its divergence was not introduced by the encoding, but was already present at the source level. A good candidate for such a simulation is S⁻¹, since it relates all derivatives of C!⟦·⟧-translations and is itself composed of three eventually progressing simulations.

Proposition 5.8.1. S⁻¹ is eventually progressing.

Proof. By S =def U!⁻¹ F B!, we have S⁻¹ = (B!)⁻¹ F⁻¹ U!. Since all of (B!)⁻¹, F⁻¹, and U! are eventually progressing, their composition is also eventually progressing, as proved by simply multiplying the upper bounds for the number of trivial simulation steps of the components. ∎
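The closing argument of this proof, multiplying the per-component bounds, amounts to a one-line computation (the concrete bounds below are illustrative values, not the paper's):

```python
from math import prod

# If each component relation can trivially simulate at most b_i consecutive
# tau-steps, the proof combines the bounds by multiplying them.
def composed_bound(component_bounds):
    return prod(component_bounds)

# e.g. illustrative bounds for (B!)^-1, F^-1 and U!:
# composed_bound([5, 2, 3]) == 30
```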

Theorem 5.8.2. C!⟦·⟧ is divergence-free.

Proof. Since (S, C!⟦S⟧) ∈ S⁻¹ for all S ∈ S, and S⁻¹ is an eventually progressing simulation by Proposition 5.8.1. ∎

6. CONCLUSIONS

We have investigated two different encodings of the asynchronous π-calculus with input-guarded choice into its choice-free fragment. Several points deserve to be discussed.

Correctness. For both choice encodings, we provided a framework that allowed us to compare source terms and their translations directly. This enabled us to use a correctness notion that is stronger than the usual full abstraction, which here comes up as a simple corollary. The strength of our results may be compared with the notion of representability in [HY94a, HY94b], where it was left as an open problem whether some form of summation could be behaviorally represented by concurrent combinators. Our divergent encoding (for theoretical questions like representability, divergence is acceptable) provides a first positive answer for the representability of input-guarded choice up to weak asynchronous bisimulation. Our results also hold in the setting of value-passing CCS with Boolean values and test-expressions. However, in the π-calculus, these additional notions can be encoded by a simple name-passing protocol (as shown in this paper). Furthermore, the results (except for the congruence properties) can be generalized to calculi with polyadic communication, full replication, and matching.

Asynchrony. For both encodings, the correctness proofs cannot be built upon standard (i.e., synchronous) notions of simulation. The reason lies in the inherent asynchrony of the implementation algorithm, which arises from the resending of messages (which must not be kept by a branch when the choice has already committed to a competing branch).

Nonpromptness. Most examples of encodings into process calculi known in the literature enjoy the simplifying property of being prompt, i.e., initial transitions of

46 NESTMANN AND PIERCE


translations are committing, by corresponding to some particular computation stepof their source. Both of our choice encodings fall in the class of nonpromptencodings that, moreover, cannot be dealt with by optimization with administrativenormal forms.

Partial commitments. With respect to the different results for the two choiceencodings, it is crucial to notice that only C! " breaks up the atomicity of commit-ting a choice. The resulting partially committed states are exactly the reason whycorrectness up to weak bisimulation has to fail, whereas coupled simulation appliessuccessfully.

Divergence. We have not been able to formulate a choice encoding which isdivergence-free and correct with respect to weak bisimulation. We conjecture thatit is impossible.

Decodings. Any operational correctness proof which states that an encoding issound in the sense that each step of a translation is compatible with some sourcestep implicitly uses the idea of mapping back the translation to its source term inorder to detect the correspondence. We made this intuition explicit in decodingfunctions which provide a notation for the proofs that is both compact andintuitive. With prompt encodings, the reconstruction of source terms from targetterms is rather simple, since it suffices to deal with literal translations. In contrast,nonprompt encodings require the decoding of derivatives of translations.

Annotations. The only way to detect the origin of derivatives of translations isto retrace their derivation histories. As the underlying semantics, one could, forexample, use causality-based techniques, but this would introduce extra technicaloverhead. Instead, we exploit annotated source terms that precisely capture theinformation that is necessary to perform the backtracking. An intermediatelanguage built from annotated source terms provides the basis for a soundfactorization and a proper setting for the definition of decodings.

According to the soundness interpretation of the correctness results for theA-encoding in Section 5.4, the operational character of coupled simulation (C, S)also induces a sharp characterization of the operational soundness of the C-encod-ing: for all source terms S #S,

an arbitrary derivative T of a translation represents an intermediate state in somesource-level computation evolving from S$ into S". The correspondence betweenthe derivation traces of target C!S" and source S is guaranteed by S. The relations

47DECODING CHOICE ENCODINGS

77Wednesday, October 14, 2009

Page 173: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

IntermediateConclusions

78Wednesday, October 14, 2009

Page 174: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

A bit of cheating: encodings depend on n-ary choice.

There is no theory of encodings yet.

Asynchronous Pi is “sufficiently” expressive ...... for programming. (Who needs mixed choice?)

The role of name-passing.

The role of coupled simulation.

79Wednesday, October 14, 2009

Page 175: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

OverviewPart 0a: Encodings vs Full Abstraction Comparison of Languages/Calculi Correctness of Encodings

Part 0b: Asynchronous Pi Calculus

Part 1: Input-Guarded Choice Encoding (Distributed Implementation) Decoding (Correctness Proof)

Part 2: Output-Guarded Choice Encoding Separate Choice Encoding Mixed Choice

Conclusions80Wednesday, October 14, 2009

Page 176: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Calculus

81Wednesday, October 14, 2009

Page 177: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Choice in Synchronous(!) Contexts

As motivated by Catuscia Palamidessi’s talk,we focus now on encodings of choice operators that may contain (synchronous) send prefixes.

82Wednesday, October 14, 2009

Page 178: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Choice in Synchronous(!) Contexts

As motivated by Catuscia Palamidessi’s talk,we focus now on encodings of choice operators that may contain (synchronous) send prefixes.

We try to investigate some limits of encodability.In contrast to Catuscia Palamidessi,

we seek positive correctness results.

82Wednesday, October 14, 2009

Page 179: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

18

!" calculus with mixed choice

!" calculusAsynchronous

!" calculus with separate choice

!" calculus with input choice !" calculus without choice

Palamidessi 97

(no choice, no output prefix)

Nestmann 97

(no output prefix) (output prefix)

Identity encoding Identity encoding

Honda!Tokoro 91,Boudol 92Nestmann!Pierce 96

Fig. 2.2. The π-calculus hierarchy. The dashed line represents an identity encoding.83Wednesday, October 14, 2009

Page 180: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

18

!" calculus with mixed choice

!" calculusAsynchronous

!" calculus with separate choice

!" calculus with input choice !" calculus without choice

Palamidessi 97

(no choice, no output prefix)

Nestmann 97

(no output prefix) (output prefix)

Identity encoding Identity encoding

Honda!Tokoro 91,Boudol 92Nestmann!Pierce 96

Fig. 2.2. The π-calculus hierarchy. The dashed line represents an identity encoding.83Wednesday, October 14, 2009

Page 181: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

18

!" calculus with mixed choice

!" calculusAsynchronous

!" calculus with separate choice

!" calculus with input choice !" calculus without choice

Palamidessi 97

(no choice, no output prefix)

Nestmann 97

(no output prefix) (output prefix)

Identity encoding Identity encoding

Honda!Tokoro 91,Boudol 92Nestmann!Pierce 96

Fig. 2.2. The π-calculus hierarchy. The dashed line represents an identity encoding.83Wednesday, October 14, 2009

Page 182: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Syntax + Semantik

Throughout the paper, we emphasize the exposition of encodings, algorithms,and trade-offs, instead of just presenting the formal proofs, which are ratherstraightforward in most cases. Some technicalities of proofs of deadlock-freedomby using type systems, where we apply interesting more recent techniques, areassembled in the Appendix.

2. TECHNICAL PRELIMINARIES

We introduce various polyadic ?-calculi [Mil93] as source (S) and target (T)languages. Let N be a countable set of names, and let x~ denote a finite tuplex1 , ..., xn of names in

? : :=y![z~ ] | y?[x~ ]

Smix: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

?i .P i

Ssep: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi | :i # I

yi ![z~ i].Pi

Sinp: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi

T : P : := P | P | (&y) P | y?*[x~ ].P | y?[x~ ].P | y![z~ ],

where x, y, z #N, and I ranges over finite sets of indices i. The source languages S7

with 7 # [mix, sep, inp] are polyadic versions of the calculi ?mixs , ?sep

s , and ? inpa ,

respectively, of Fig. 1. The target language T is defined with just asynchronous out-put, i.e., messages and only single input-prefixes, instead of choice and, thus, is apolyadic version of ?a .The informal semantics of parallel composition and restriction is as usual. In

choices, we use an output guard y![z~ ].P to denote the emission of names z~ alongchannel y before behaving as P, and an input guard y?[x~ ].P to denote the receptionof arbitrary names z~ along channel y and afterwards behaving as P[ z~ !x~ ], whichdenotes the simultaneous substitution of all free occurrences of names x~ by thereceived names z~ , while silently performing :-conversion, wherever necessary.A replicated input guard y?*[x~ ].P denotes a process that allows us to spawn offarbitrary instances of the form P[ z~ !x~ ] in parallel by repeatedly receiving names z~along channel y. We use N1+N2 to abbreviate binary choice (commutative andassociative), and 0 to denote empty choice in S7 and the term (&x) (x![]) in T.Operator precedence is, in decreasing order of binding strength: (1) substitution,

(2) prefixing, restriction, replication, (3) choice, and (4) parallel composition.A term is guarded when it occurs as a subterm of some guard. In y![z~ ] and y?[x~ ],y is called subject, while x~ and z~ are called objects. The sets fn(P) and bn(P) of freeand bound names of a process P, and their union n(P), are defined as usual.Created names are assumed to be fresh, i.e., not occurring in any other term.For the sake of readability in T, we use primitive boolean names t, f #B and con-

ditional operators test y then P else Q for destructively reading and testing

292 UWE NESTMANN

Throughout the paper, we emphasize the exposition of encodings, algorithms,and trade-offs, instead of just presenting the formal proofs, which are ratherstraightforward in most cases. Some technicalities of proofs of deadlock-freedomby using type systems, where we apply interesting more recent techniques, areassembled in the Appendix.

2. TECHNICAL PRELIMINARIES

We introduce various polyadic ?-calculi [Mil93] as source (S) and target (T)languages. Let N be a countable set of names, and let x~ denote a finite tuplex1 , ..., xn of names in

? : :=y![z~ ] | y?[x~ ]

Smix: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

?i .P i

Ssep: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi | :i # I

yi ![z~ i].Pi

Sinp: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi

T : P : := P | P | (&y) P | y?*[x~ ].P | y?[x~ ].P | y![z~ ],

where x, y, z #N, and I ranges over finite sets of indices i. The source languages S7

with 7 # [mix, sep, inp] are polyadic versions of the calculi ?mixs , ?sep

s , and ? inpa ,

respectively, of Fig. 1. The target language T is defined with just asynchronous out-put, i.e., messages and only single input-prefixes, instead of choice and, thus, is apolyadic version of ?a .The informal semantics of parallel composition and restriction is as usual. In

choices, we use an output guard y![z~ ].P to denote the emission of names z~ alongchannel y before behaving as P, and an input guard y?[x~ ].P to denote the receptionof arbitrary names z~ along channel y and afterwards behaving as P[ z~ !x~ ], whichdenotes the simultaneous substitution of all free occurrences of names x~ by thereceived names z~ , while silently performing :-conversion, wherever necessary.A replicated input guard y?*[x~ ].P denotes a process that allows us to spawn offarbitrary instances of the form P[ z~ !x~ ] in parallel by repeatedly receiving names z~along channel y. We use N1+N2 to abbreviate binary choice (commutative andassociative), and 0 to denote empty choice in S7 and the term (&x) (x![]) in T.Operator precedence is, in decreasing order of binding strength: (1) substitution,

(2) prefixing, restriction, replication, (3) choice, and (4) parallel composition.A term is guarded when it occurs as a subterm of some guard. In y![z~ ] and y?[x~ ],y is called subject, while x~ and z~ are called objects. The sets fn(P) and bn(P) of freeand bound names of a process P, and their union n(P), are defined as usual.Created names are assumed to be fresh, i.e., not occurring in any other term.For the sake of readability in T, we use primitive boolean names t, f #B and con-

ditional operators test y then P else Q for destructively reading and testing

292 UWE NESTMANN

...

...

...

...

Throughout the paper, we emphasize the exposition of encodings, algorithms,and trade-offs, instead of just presenting the formal proofs, which are ratherstraightforward in most cases. Some technicalities of proofs of deadlock-freedomby using type systems, where we apply interesting more recent techniques, areassembled in the Appendix.

2. TECHNICAL PRELIMINARIES

We introduce various polyadic ?-calculi [Mil93] as source (S) and target (T)languages. Let N be a countable set of names, and let x~ denote a finite tuplex1 , ..., xn of names in

? : :=y![z~ ] | y?[x~ ]

Smix: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

?i .P i

Ssep: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi | :i # I

yi ![z~ i].Pi

Sinp: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi

T : P : := P | P | (&y) P | y?*[x~ ].P | y?[x~ ].P | y![z~ ],

where x, y, z #N, and I ranges over finite sets of indices i. The source languages S7

with 7 # [mix, sep, inp] are polyadic versions of the calculi ?mixs , ?sep

s , and ? inpa ,

respectively, of Fig. 1. The target language T is defined with just asynchronous out-put, i.e., messages and only single input-prefixes, instead of choice and, thus, is apolyadic version of ?a .The informal semantics of parallel composition and restriction is as usual. In

choices, we use an output guard y![z~ ].P to denote the emission of names z~ alongchannel y before behaving as P, and an input guard y?[x~ ].P to denote the receptionof arbitrary names z~ along channel y and afterwards behaving as P[ z~ !x~ ], whichdenotes the simultaneous substitution of all free occurrences of names x~ by thereceived names z~ , while silently performing :-conversion, wherever necessary.A replicated input guard y?*[x~ ].P denotes a process that allows us to spawn offarbitrary instances of the form P[ z~ !x~ ] in parallel by repeatedly receiving names z~along channel y. We use N1+N2 to abbreviate binary choice (commutative andassociative), and 0 to denote empty choice in S7 and the term (&x) (x![]) in T.Operator precedence is, in decreasing order of binding strength: (1) substitution,

(2) prefixing, restriction, replication, (3) choice, and (4) parallel composition.A term is guarded when it occurs as a subterm of some guard. In y![z~ ] and y?[x~ ],y is called subject, while x~ and z~ are called objects. The sets fn(P) and bn(P) of freeand bound names of a process P, and their union n(P), are defined as usual.Created names are assumed to be fresh, i.e., not occurring in any other term.For the sake of readability in T, we use primitive boolean names t, f #B and con-

ditional operators test y then P else Q for destructively reading and testing

292 UWE NESTMANN

84Wednesday, October 14, 2009

Page 183: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Syntax + Semantik

Throughout the paper, we emphasize the exposition of encodings, algorithms,and trade-offs, instead of just presenting the formal proofs, which are ratherstraightforward in most cases. Some technicalities of proofs of deadlock-freedomby using type systems, where we apply interesting more recent techniques, areassembled in the Appendix.

2. TECHNICAL PRELIMINARIES

We introduce various polyadic ?-calculi [Mil93] as source (S) and target (T)languages. Let N be a countable set of names, and let x~ denote a finite tuplex1 , ..., xn of names in

? : :=y![z~ ] | y?[x~ ]

Smix: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

?i .P i

Ssep: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi | :i # I

yi ![z~ i].Pi

Sinp: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi

T : P : := P | P | (&y) P | y?*[x~ ].P | y?[x~ ].P | y![z~ ],

where x, y, z #N, and I ranges over finite sets of indices i. The source languages S7

with 7 # [mix, sep, inp] are polyadic versions of the calculi ?mixs , ?sep

s , and ? inpa ,

respectively, of Fig. 1. The target language T is defined with just asynchronous out-put, i.e., messages and only single input-prefixes, instead of choice and, thus, is apolyadic version of ?a .The informal semantics of parallel composition and restriction is as usual. In

choices, we use an output guard y![z~ ].P to denote the emission of names z~ alongchannel y before behaving as P, and an input guard y?[x~ ].P to denote the receptionof arbitrary names z~ along channel y and afterwards behaving as P[ z~ !x~ ], whichdenotes the simultaneous substitution of all free occurrences of names x~ by thereceived names z~ , while silently performing :-conversion, wherever necessary.A replicated input guard y?*[x~ ].P denotes a process that allows us to spawn offarbitrary instances of the form P[ z~ !x~ ] in parallel by repeatedly receiving names z~along channel y. We use N1+N2 to abbreviate binary choice (commutative andassociative), and 0 to denote empty choice in S7 and the term (&x) (x![]) in T.Operator precedence is, in decreasing order of binding strength: (1) substitution,

(2) prefixing, restriction, replication, (3) choice, and (4) parallel composition.A term is guarded when it occurs as a subterm of some guard. In y![z~ ] and y?[x~ ],y is called subject, while x~ and z~ are called objects. The sets fn(P) and bn(P) of freeand bound names of a process P, and their union n(P), are defined as usual.Created names are assumed to be fresh, i.e., not occurring in any other term.For the sake of readability in T, we use primitive boolean names t, f #B and con-

ditional operators test y then P else Q for destructively reading and testing

292 UWE NESTMANN

Throughout the paper, we emphasize the exposition of encodings, algorithms,and trade-offs, instead of just presenting the formal proofs, which are ratherstraightforward in most cases. Some technicalities of proofs of deadlock-freedomby using type systems, where we apply interesting more recent techniques, areassembled in the Appendix.

2. TECHNICAL PRELIMINARIES

We introduce various polyadic ?-calculi [Mil93] as source (S) and target (T)languages. Let N be a countable set of names, and let x~ denote a finite tuplex1 , ..., xn of names in

? : :=y![z~ ] | y?[x~ ]

Smix: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

?i .P i

Ssep: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi | :i # I

yi ![z~ i].Pi

Sinp: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi

T : P : := P | P | (&y) P | y?*[x~ ].P | y?[x~ ].P | y![z~ ],

where x, y, z #N, and I ranges over finite sets of indices i. The source languages S7

with 7 # [mix, sep, inp] are polyadic versions of the calculi ?mixs , ?sep

s , and ? inpa ,

respectively, of Fig. 1. The target language T is defined with just asynchronous out-put, i.e., messages and only single input-prefixes, instead of choice and, thus, is apolyadic version of ?a .The informal semantics of parallel composition and restriction is as usual. In

choices, we use an output guard y![z~ ].P to denote the emission of names z~ alongchannel y before behaving as P, and an input guard y?[x~ ].P to denote the receptionof arbitrary names z~ along channel y and afterwards behaving as P[ z~ !x~ ], whichdenotes the simultaneous substitution of all free occurrences of names x~ by thereceived names z~ , while silently performing :-conversion, wherever necessary.A replicated input guard y?*[x~ ].P denotes a process that allows us to spawn offarbitrary instances of the form P[ z~ !x~ ] in parallel by repeatedly receiving names z~along channel y. We use N1+N2 to abbreviate binary choice (commutative andassociative), and 0 to denote empty choice in S7 and the term (&x) (x![]) in T.Operator precedence is, in decreasing order of binding strength: (1) substitution,

(2) prefixing, restriction, replication, (3) choice, and (4) parallel composition.A term is guarded when it occurs as a subterm of some guard. In y![z~ ] and y?[x~ ],y is called subject, while x~ and z~ are called objects. The sets fn(P) and bn(P) of freeand bound names of a process P, and their union n(P), are defined as usual.Created names are assumed to be fresh, i.e., not occurring in any other term.For the sake of readability in T, we use primitive boolean names t, f #B and con-

ditional operators test y then P else Q for destructively reading and testing

292 UWE NESTMANN

...

...

...

...

Throughout the paper, we emphasize the exposition of encodings, algorithms,and trade-offs, instead of just presenting the formal proofs, which are ratherstraightforward in most cases. Some technicalities of proofs of deadlock-freedomby using type systems, where we apply interesting more recent techniques, areassembled in the Appendix.

2. TECHNICAL PRELIMINARIES

We introduce various polyadic ?-calculi [Mil93] as source (S) and target (T)languages. Let N be a countable set of names, and let x~ denote a finite tuplex1 , ..., xn of names in

? : :=y![z~ ] | y?[x~ ]

Smix: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

?i .P i

Ssep: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi | :i # I

yi ![z~ i].Pi

Sinp: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi

T : P : := P | P | (&y) P | y?*[x~ ].P | y?[x~ ].P | y![z~ ],

where x, y, z #N, and I ranges over finite sets of indices i. The source languages S7

with 7 # [mix, sep, inp] are polyadic versions of the calculi ?mixs , ?sep

s , and ? inpa ,

respectively, of Fig. 1. The target language T is defined with just asynchronous out-put, i.e., messages and only single input-prefixes, instead of choice and, thus, is apolyadic version of ?a .The informal semantics of parallel composition and restriction is as usual. In

choices, we use an output guard y![z~ ].P to denote the emission of names z~ alongchannel y before behaving as P, and an input guard y?[x~ ].P to denote the receptionof arbitrary names z~ along channel y and afterwards behaving as P[ z~ !x~ ], whichdenotes the simultaneous substitution of all free occurrences of names x~ by thereceived names z~ , while silently performing :-conversion, wherever necessary.A replicated input guard y?*[x~ ].P denotes a process that allows us to spawn offarbitrary instances of the form P[ z~ !x~ ] in parallel by repeatedly receiving names z~along channel y. We use N1+N2 to abbreviate binary choice (commutative andassociative), and 0 to denote empty choice in S7 and the term (&x) (x![]) in T.Operator precedence is, in decreasing order of binding strength: (1) substitution,

(2) prefixing, restriction, replication, (3) choice, and (4) parallel composition.A term is guarded when it occurs as a subterm of some guard. In y![z~ ] and y?[x~ ],y is called subject, while x~ and z~ are called objects. The sets fn(P) and bn(P) of freeand bound names of a process P, and their union n(P), are defined as usual.Created names are assumed to be fresh, i.e., not occurring in any other term.For the sake of readability in T, we use primitive boolean names t, f #B and con-

ditional operators test y then P else Q for destructively reading and testing

292 UWE NESTMANN

84Wednesday, October 14, 2009

Page 184: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Syntax + Semantik

Throughout the paper, we emphasize the exposition of encodings, algorithms,and trade-offs, instead of just presenting the formal proofs, which are ratherstraightforward in most cases. Some technicalities of proofs of deadlock-freedomby using type systems, where we apply interesting more recent techniques, areassembled in the Appendix.

2. TECHNICAL PRELIMINARIES

We introduce various polyadic ?-calculi [Mil93] as source (S) and target (T)languages. Let N be a countable set of names, and let x~ denote a finite tuplex1 , ..., xn of names in

? : :=y![z~ ] | y?[x~ ]

Smix: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

?i .P i

Ssep: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi | :i # I

yi ![z~ i].Pi

Sinp: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi

T : P : := P | P | (&y) P | y?*[x~ ].P | y?[x~ ].P | y![z~ ],

where x, y, z #N, and I ranges over finite sets of indices i. The source languages S7

with 7 # [mix, sep, inp] are polyadic versions of the calculi ?mixs , ?sep

s , and ? inpa ,

respectively, of Fig. 1. The target language T is defined with just asynchronous out-put, i.e., messages and only single input-prefixes, instead of choice and, thus, is apolyadic version of ?a .The informal semantics of parallel composition and restriction is as usual. In

choices, we use an output guard y![z~ ].P to denote the emission of names z~ alongchannel y before behaving as P, and an input guard y?[x~ ].P to denote the receptionof arbitrary names z~ along channel y and afterwards behaving as P[ z~ !x~ ], whichdenotes the simultaneous substitution of all free occurrences of names x~ by thereceived names z~ , while silently performing :-conversion, wherever necessary.A replicated input guard y?*[x~ ].P denotes a process that allows us to spawn offarbitrary instances of the form P[ z~ !x~ ] in parallel by repeatedly receiving names z~along channel y. We use N1+N2 to abbreviate binary choice (commutative andassociative), and 0 to denote empty choice in S7 and the term (&x) (x![]) in T.Operator precedence is, in decreasing order of binding strength: (1) substitution,

(2) prefixing, restriction, replication, (3) choice, and (4) parallel composition.A term is guarded when it occurs as a subterm of some guard. In y![z~ ] and y?[x~ ],y is called subject, while x~ and z~ are called objects. The sets fn(P) and bn(P) of freeand bound names of a process P, and their union n(P), are defined as usual.Created names are assumed to be fresh, i.e., not occurring in any other term.For the sake of readability in T, we use primitive boolean names t, f #B and con-

ditional operators test y then P else Q for destructively reading and testing

292 UWE NESTMANN

Throughout the paper, we emphasize the exposition of encodings, algorithms,and trade-offs, instead of just presenting the formal proofs, which are ratherstraightforward in most cases. Some technicalities of proofs of deadlock-freedomby using type systems, where we apply interesting more recent techniques, areassembled in the Appendix.

2. TECHNICAL PRELIMINARIES

We introduce various polyadic ?-calculi [Mil93] as source (S) and target (T)languages. Let N be a countable set of names, and let x~ denote a finite tuplex1 , ..., xn of names in

? : :=y![z~ ] | y?[x~ ]

Smix: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

?i .P i

Ssep: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi | :i # I

yi ![z~ i].Pi

Sinp: P : :=P | P | (&y) P | y?*[x~ ].P | :i # I

yi?[x~ i].Pi

T : P : := P | P | (&y) P | y?*[x~ ].P | y?[x~ ].P | y![z~ ],

where x, y, z #N, and I ranges over finite sets of indices i. The source languages S7

with 7 # [mix, sep, inp] are polyadic versions of the calculi ?mixs , ?sep

s , and ? inpa ,

respectively, of Fig. 1. The target language T is defined with just asynchronous out-put, i.e., messages and only single input-prefixes, instead of choice and, thus, is apolyadic version of ?a .The informal semantics of parallel composition and restriction is as usual. In

choices, we use an output guard y![z~ ].P to denote the emission of names z~ alongchannel y before behaving as P, and an input guard y?[x~ ].P to denote the receptionof arbitrary names z~ along channel y and afterwards behaving as P[ z~ !x~ ], whichdenotes the simultaneous substitution of all free occurrences of names x~ by thereceived names z~ , while silently performing :-conversion, wherever necessary.A replicated input guard y?*[x~ ].P denotes a process that allows us to spawn offarbitrary instances of the form P[ z~ !x~ ] in parallel by repeatedly receiving names z~along channel y. We use N1+N2 to abbreviate binary choice (commutative andassociative), and 0 to denote empty choice in S7 and the term (&x) (x![]) in T.Operator precedence is, in decreasing order of binding strength: (1) substitution,

(2) prefixing, restriction, replication, (3) choice, and (4) parallel composition.A term is guarded when it occurs as a subterm of some guard. In y![z~ ] and y?[x~ ],y is called subject, while x~ and z~ are called objects. The sets fn(P) and bn(P) of freeand bound names of a process P, and their union n(P), are defined as usual.Created names are assumed to be fresh, i.e., not occurring in any other term.For the sake of readability in T, we use primitive boolean names t, f #B and con-

ditional operators test y then P else Q for destructively reading and testing

292 UWE NESTMANN

...

...

...

...

Throughout the paper, we emphasize the exposition of encodings, algorithms,and trade-offs, instead of just presenting the formal proofs, which are ratherstraightforward in most cases. Some technicalities of proofs of deadlock-freedomby using type systems, where we apply interesting more recent techniques, areassembled in the Appendix.

2. TECHNICAL PRELIMINARIES

We introduce various polyadic π-calculi [Mil93] as source (S) and target (T) languages. Let N be a countable set of names, and let x~ denote a finite tuple x1, ..., xn of names in

    π ::= y![z~] | y?[x~]

    S_mix: P ::= P | P  |  (νy) P  |  y?*[x~].P  |  Σ_{i∈I} π_i.P_i

    S_sep: P ::= P | P  |  (νy) P  |  y?*[x~].P  |  Σ_{i∈I} y_i?[x~_i].P_i  |  Σ_{i∈I} y_i![z~_i].P_i

    S_inp: P ::= P | P  |  (νy) P  |  y?*[x~].P  |  Σ_{i∈I} y_i?[x~_i].P_i

    T:     P ::= P | P  |  (νy) P  |  y?*[x~].P  |  y?[x~].P  |  y![z~],

where x, y, z ∈ N, and I ranges over finite sets of indices i. The source languages S_σ with σ ∈ {mix, sep, inp} are polyadic versions of the calculi π_s^mix, π_s^sep, and π_a^inp, respectively, of Fig. 1. The target language T is defined with just asynchronous output, i.e., messages, and only single input-prefixes instead of choice; it is thus a polyadic version of π_a.

The informal semantics of parallel composition and restriction is as usual. In choices, we use an output guard y![z~].P to denote the emission of names z~ along channel y before behaving as P, and an input guard y?[x~].P to denote the reception of arbitrary names z~ along channel y, afterwards behaving as P[z~/x~], which denotes the simultaneous substitution of all free occurrences of the names x~ by the received names z~, silently performing α-conversion wherever necessary. A replicated input guard y?*[x~].P denotes a process that allows us to spawn off arbitrarily many instances of the form P[z~/x~] in parallel by repeatedly receiving names z~ along channel y. We use N1+N2 to abbreviate binary choice (commutative and associative), and 0 to denote the empty choice in S_σ and the term (νx)(x![]) in T. Operator precedence is, in decreasing order of binding strength: (1) substitution, (2) prefixing, restriction, replication, (3) choice, and (4) parallel composition.

A term is guarded when it occurs as a subterm of some guard. In y![z~] and y?[x~], y is called the subject, while x~ and z~ are called objects. The sets fn(P) and bn(P) of free and bound names of a process P, and their union n(P), are defined as usual. Created names are assumed to be fresh, i.e., not occurring in any other term.

For the sake of readability in T, we use primitive boolean names t, f ∈ B and conditional operators test y then P else Q for destructively reading and testing
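The grammar of T above can be rendered as a small abstract syntax tree, e.g. in Python. This is a modeling choice of ours, purely illustrative; the class names are not the paper's notation:

```python
from dataclasses import dataclass

# A possible AST for the target language T: asynchronous messages,
# single and replicated input prefixes, parallel composition, restriction.

@dataclass(frozen=True)
class Msg:            # y![z~]: asynchronous output, no continuation
    subject: str
    objects: tuple

@dataclass(frozen=True)
class Input:          # y?[x~].P: single input prefix
    subject: str
    params: tuple
    body: object

@dataclass(frozen=True)
class Repl:           # y?*[x~].P: replicated input
    subject: str
    params: tuple
    body: object

@dataclass(frozen=True)
class Par:            # P | P: parallel composition
    left: object
    right: object

@dataclass(frozen=True)
class New:            # (νy) P: restriction
    name: str
    body: object

# In T, the inert process 0 is represented by the term (νx)(x![]):
ZERO = New("x", Msg("x", ()))
```

Note that T has no choice constructor at all; that absence is exactly what the encodings of Section 3 have to compensate for.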

292 UWE NESTMANN


Reduction Semantics

FIG. 2. Reduction relation and structural congruence.

the current (boolean) value on channel y. The above conditional is an abbreviation of y?[x].if x then P else Q, with the usual meaning of if, which is only defined if the name received for x is a boolean. For T with booleans, we require for the grammar above that y ∈ N, while x, z ∈ V := N ∪ B. Note that booleans can be cleanly encoded into the intended target language T (cf. [Nes96]).

Following Milner [Mil93], we assume that all processes are well-typed according to the correct use of polyadic channels; i.e., matching senders and receivers always have the same expectation about the arity or the boolean type of transmitted values. This also prevents restriction on, communication on, and substitution for booleans, e.g., by using structural types T ::= B | [T~] without recursion, together with straightforward typing rules.

The formal semantics for the languages S_σ and T is presented in Fig. 2 as a reduction relation → (with reflexive-transitive closure ⟹) on structural congruence classes (silently including α-conversion). The only difference among the languages is in the rules for communication, which arise from the different kinds of choices and receptors.
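As a worked instance of the communication rule, in the notation introduced above (the concrete example is ours):

```latex
y![\tilde{z}] \;\mid\; y?[\tilde{x}].P \;\longrightarrow\; P[\tilde{z}/\tilde{x}]
```

For example, x![a,b] | x?[u,v].u![v] → (u![v])[a,b/u,v] = a![b]: the received names a, b are simultaneously substituted for the parameters u, v in the continuation.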

3. IMPLEMENTING SEPARATE CHOICE

Intuitively, branches in a guarded choice may be seen as individual, but concurrently available, processes that have to synchronize each other's progress by mutual exclusion. Reminiscent of distributed implementations, we should use parallel composition to express this concurrent activity of branches.

The encoding scheme in Fig. 3 implements choice-states as boolean messages on private channels l, so-called locks: t means that no branch in the current choice has yet been chosen; f means the contrary (so the initial value must be t). Whenever (an encoding of) a branch wants to proceed, it must test its associated lock; it must also explicitly reset the lock after having tested it, in order to enable competing branches to also test the choice's state. We use the scheme for several encodings. Instead of presenting them all at once and studying their properties afterwards, we
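The lock discipline can be mimicked with a one-slot buffer: the first branch to read the lock sees t and commits; everyone, winner or loser, writes f back so that the others can still test. A minimal sketch in Python (our own model of the scheme, not the paper's code):

```python
import queue
import threading

# A "lock channel": a channel carrying at most one boolean message.
# True = no branch in this choice has committed yet; False = one has.

def make_lock():
    lock = queue.Queue(maxsize=1)
    lock.put(True)            # the initial value must be t
    return lock

def branch(lock, winners, name):
    v = lock.get()            # test the lock (a destructive read)
    lock.put(False)           # always reset it so competitors can test too
    if v:                     # t: this branch is the chosen one
        winners.append(name)
    # f: this branch silently aborts

lock, winners = make_lock(), []
threads = [threading.Thread(target=branch, args=(lock, winners, i))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(winners) == 1      # mutual exclusion: exactly one branch proceeds
```

Exactly one get() ever returns True, regardless of scheduling, which is the "at most one message per lock" invariant stated below.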

293 "GOOD" ENCODINGS OF GUARDED CHOICE


Encodings


Homomorphic Encoding Scheme

FIG. 3. Encoding scheme S_σ → T.

proceed stepwise, which allows us to emphasize their differences. In all cases, uniformity [Pal97] is guaranteed by the compositional encoding of parallel composition and restriction (see Appendix A).

3.1. Input-Guarded Choice

According to Nestmann and Pierce [NP96], input-guarded choice can be encoded as shown in Figs. 3 and 4. The only nontrivial case is that of input-guards: after receiving a value from the environment, the name l is used to test whether the current guard is allowed to proceed (by reading t from l), or whether it has to be aborted (by reading f from l) and is then obliged to resend the received value. The encoding obeys strong invariant properties on the use of locks:

- On each lock, at most one message may ever be available at any time. This guarantee implements locking, which enables mutual exclusion.

- Each reader of a lock eventually writes back to the lock. This obligation enables the correct abortion of nonchosen branches.

It is crucial for correctness that send-requests that do not lead to communication (because the receiver was aborted) are resent, i.e., possibly passed on to another receiver waiting on the same channel. Furthermore, abortion would not be handled correctly were we not guaranteed that, once read, the lock l eventually becomes available again carrying the message f. This encoding preserves a ``reasonable'' semantics, since it is fully abstract with respect to coupled simulation, which implies deadlock-freedom, and it is also divergence-free. In fact, a correctness result stronger than full abstraction holds: terms and their translations are congruent, so they cannot be distinguished by any context.
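The abort-and-resend discipline can be sketched sequentially. Below, two choices C1 and C2 compete for one message on a channel; C1's choice has already committed elsewhere (its lock is f), so its branch must resend the value, which C2's branch then consumes. The helper names and the fixed schedule are ours, for illustration only:

```python
from collections import deque

x = deque(["v"])                     # channel x with one pending message
locks = {"C1": False, "C2": True}    # C1's choice already committed elsewhere
log = []

def encoded_input_branch(choice, chan):
    val = chan.popleft()             # receive a value from the environment
    ok = locks[choice]               # test the choice's lock ...
    locks[choice] = False            # ... and write f back in either case
    if ok:
        log.append((choice, val))    # commit: proceed with the continuation
        return True
    chan.append(val)                 # abort: obliged to resend the value
    return False

assert not encoded_input_branch("C1", x)   # aborted branch resends "v"
assert encoded_input_branch("C2", x)       # competing receiver consumes it
assert log == [("C2", "v")]
```

Dropping the `chan.append(val)` line would lose the message whenever an aborted branch happens to receive it first, which is exactly the correctness hazard described above.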

3.2. Output-Guarded Choice

If output is blocking, i.e., guarding some behavior that is only enabled once the output has succeeded, then synchronization is no longer local to the receiver's choice. The idea (cf. Fig. 5) is that, in the target language, a sender asynchronously transmits its values z~, together with a private acknowledgment channel a, which can be used just once by some matching receiver to signal either success or failure, i.e.,

FIG. 4. Encoding S_inp → T.


The locking game on l does the trick.


Encoding Input-Guarded Choice:



Encoding Separate Choice

FIG. 5. Encoding S_sep → T.

either enabling the sender's continuation to proceed, or aborting it. Since output-guards are also branches in a choice whose state must be tested, the corresponding lock r is, in addition to z~ and a, transmitted to some matching receiver, which then performs the required choice-test.

Input-guards, revisited. The encoding is more elaborate due to the increased information that is transmitted by send-requests. First, there are now two locks that have to be tested in some order. In Fig. 5, we chose to test the local lock l first and, only in the case of a positive outcome, to test the remote lock r. (This particular order is useful in an actual distributed implementation, where remote communication is usually much more expensive than local communication.) Second, we have to use the acknowledgment channel correctly, which means that a positive acknowledgment may only be sent if both locks were tested positively. Third, in the case that the test of the sender's choice-lock was negative, we must not resend the send-request; instead, and only if the test of the receiver's choice-lock was positive, we have to restart the receiver process from the beginning by allowing it to try other send-requests. In Fig. 5, this is implemented by recursively sending a trigger-signal to a replicated input process on b that represents the receiver-loop's entry point. In order to match this protocol of synchronous outputs, the encoding of input-guarded replication has to check the sender's lock and, based on its value, either commit and trigger a copy of its continuation, or abort the sender.

Evaluation. An encoding is deadlock-free (divergence-free) if it does not add deadlocks (loops) to the behavior of terms; a deadlock (loop) that occurs in (some derivative of) an encoded term necessarily results from a deadlock (loop) already occurring in (some derivative of) the original term. Note that divergence-freedom implies livelock-freedom.

To prove deadlock-freedom, we take advantage of type information for the channels that are added in the encoding. We refine channel types according to Kobayashi's classification [Kob97], which distinguishes between reliable and unreliable channels. The following three types of channels are reliable:

- linear channels, which are used just once (like our acknowledgment channels a),


l: local lock, r: remote lock


Encoding Separate Choice

FIG. 5. Ssep!T.

either enabling the sender's continuation to proceed, or to abort it. Since output-guards are also branches in a choice whose state must be tested, the correspondinglock r is, in addition to z~ and a, transmitted to some matching receiver that thenperforms the required choice-test.

Input-guards, revisited. The encoding is more elaborate due to the increasedinformation that is transmitted by send-requests. First, there are now two locks thathave to be tested in some order. In Fig. 5, we chose to test the local lock l first and,only in the case of a positive outcome, to test the remote lock r. (This particularorder is useful in an actual distributed implementation, where remote communica-tion is usually much more expensive than local communication.) Second, we haveto use the acknowledgment channel correctly, which means that a positive acknowl-edgment may only be sent if both locks were tested positively. Third, in the casethat the test of the sender's choice-lock was negative, we must not resend the send-request!!instead, and only if the test of the receiver's choice-lock was positive, wehave to restart the receiver process from the beginning by allowing it to try othersend-requests. In Fig. 5, this is implemented by recursively sending a trigger-signalto a replicated input process on b that represents the receiver-loop's entry point. Inorder to match this protocol of synchronous outputs, the encoding of input-guarded replication has to check the sender's lock and, based on its value, eitherto commit and trigger a copy of its continuation, or to abort the sender.

Evaluation. An encoding is deadlock!divergence-free, if it does not add dead-locks!loops to the behavior of terms; a deadlock!loop that occurs in (somederivative of ) an encoded term necessarily results from a deadlock!loop alreadyoccurring in (some derivative of ) the original term. Note that divergence-freedomimplies livelock-freedom.To prove deadlock-freedom, we take advantage of type information for the

channels that are added in the encoding. We refine channel types according toKobayashi's classification [Kob97], which distinguishes between reliable andunreliable channels. The following three types of channels are reliable:

v linear channels, which are used just once (like our acknowledgementchannels a),

295``GOOD'' ENCODINGS OF GUARDED CHOICE

l : local lockr: remote lock

88Wednesday, October 14, 2009

Page 194: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is

Encoding Separate Choice

FIG. 5. Ssep!T.

either enabling the sender's continuation to proceed, or to abort it. Since output-guards are also branches in a choice whose state must be tested, the correspondinglock r is, in addition to z~ and a, transmitted to some matching receiver that thenperforms the required choice-test.

Input-guards, revisited. The encoding is more elaborate due to the increasedinformation that is transmitted by send-requests. First, there are now two locks thathave to be tested in some order. In Fig. 5, we chose to test the local lock l first and,only in the case of a positive outcome, to test the remote lock r. (This particularorder is useful in an actual distributed implementation, where remote communica-tion is usually much more expensive than local communication.) Second, we haveto use the acknowledgment channel correctly, which means that a positive acknowl-edgment may only be sent if both locks were tested positively. Third, in the casethat the test of the sender's choice-lock was negative, we must not resend the send-request!!instead, and only if the test of the receiver's choice-lock was positive, wehave to restart the receiver process from the beginning by allowing it to try othersend-requests. In Fig. 5, this is implemented by recursively sending a trigger-signalto a replicated input process on b that represents the receiver-loop's entry point. Inorder to match this protocol of synchronous outputs, the encoding of input-guarded replication has to check the sender's lock and, based on its value, eitherto commit and trigger a copy of its continuation, or to abort the sender.

Evaluation. An encoding is deadlock/divergence-free if it does not add deadlocks/loops to the behavior of terms; a deadlock/loop that occurs in (some derivative of) an encoded term necessarily results from a deadlock/loop already occurring in (some derivative of) the original term. Note that divergence-freedom implies livelock-freedom.

To prove deadlock-freedom, we take advantage of type information for the channels that are added in the encoding. We refine channel types according to Kobayashi's classification [Kob97], which distinguishes between reliable and unreliable channels. The following three types of channels are reliable:

• linear channels, which are used just once (like our acknowledgment channels a),

295 "GOOD" ENCODINGS OF GUARDED CHOICE

l: local lock; r: remote lock

88Wednesday, October 14, 2009

Page 195: Encodings into Asynchronous Pi Calculus

Encoding Separate Choice

FIG. 5. ⟦·⟧_sep.


Page 196: Encodings into Asynchronous Pi Calculus

Encoding Separate Choice

• replicated input channels, whose input ends must not occur more than once, but whose output ends may be used arbitrarily often (like our restart channels b), and

• mutex channels, which need to obey the invariants (of our lock channels l) that we mentioned in Section 3.1. Also, a message must be available right after their creation.
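The usage disciplines behind these channel types can be mimicked by a small, informal Python model (our own construction, purely illustrative and unrelated to Kobayashi's formal type system):

```python
class LinearChannel:
    """Used exactly once: one send, one receive (like the ack channels a)."""
    def __init__(self):
        self.msg, self.sent, self.received = None, False, False

    def send(self, v):
        assert not self.sent, "linear channel: second send forbidden"
        self.msg, self.sent = v, True

    def receive(self):
        assert self.sent and not self.received, "linear channel misused"
        self.received = True
        return self.msg

class MutexChannel:
    """Holds at most one message; created full (like the lock channels l)."""
    def __init__(self, initial):
        self.msg = initial             # a message is available right away

    def acquire(self):
        assert self.msg is not None, "mutex channel: empty, would block"
        v, self.msg = self.msg, None
        return v

    def release(self, v):
        assert self.msg is None, "mutex channel: already holds a message"
        self.msg = v

a = LinearChannel()
a.send(True)
assert a.receive() is True             # exactly one send, one receive

l = MutexChannel(True)                 # lock starts out carrying t
assert l.acquire() is True
l.release(False)                       # the choice gets resolved to f
```

A replicated input channel (like b) would analogously allow a single registered receiver but arbitrarily many sends.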

Kobayashi also developed a typing system that provides a behavioral property for well-typed processes: every (immediate) deadlock can only be caused by unreliable channels. A subject reduction theorem extends the proposition to deadlocks that may ever occur in derivatives of well-typed processes.

As indicated above, every channel that is added by our choice encodings is reliable. Since we can further show that every encoded term is well typed with respect to Kobayashi's type system (when regarding every source-level channel as unreliable), we already get the desired proposition.

Proposition 3.2.1. ⟦·⟧_sep is deadlock-free.

Proof. By type-checking. More details can be found in Appendix B. ∎

Proposition 3.2.2. ⟦·⟧_sep is divergence-free.

Proof. The only possibility for an encoding to add an infinite loop would be in the translation of input-guards, since it is only there that we use replication. In order to trigger a copy of this replication, three conditions must be met: (1) the receiver's lock must still contain t, (2) a matching send-request must be consumed from the environment, and (3) this sender's lock must contain f. However, in this situation, by "looping back" the consumed message will not be given back to the system; in other words, the system's state is decreased. This cannot be done infinitely often, unless an infinite number of matching send-requests is produced. This, in turn, is only possible by using replication, e.g., by (νx)(x![] | x?*[].(x![] | y![z̃] ...)), but then, due to the encoding of replicated input, this replication of messages must have been already present in the source language.6 ∎

4. IMPLEMENTING MIXED CHOICE

The naïve attempt is to simply reuse the encoding for separate choices of the previous section, as is, for encoding mixed choices. This seems sensible at first, because in both cases all input- and output-guards are branches in choices, so they should behave similarly. However, we are faced with two inherent sources of potential deadlock in the mixed setting: one is for the symmetric term P | Q := y0![0].P0 + y1?[x].P1 | y0?[x].Q0 + y1![1].Q1 of Eq. (4) in the Introduction, the other is for I := y![z].P + y?[x].Q. The deadlock situations may become clear from a spatial representation:

296 UWE NESTMANN

6 Note that the presence of output-guarded replication in the source language would not invalidate this result, since in the encoding, this construct would be translated by mentioning a lock that always carries t due to the lack of competitors, thus invalidating condition (3) in the proof.

Proof: only sketched, but reasonably straightforward.

89Wednesday, October 14, 2009

Page 197: Encodings into Asynchronous Pi Calculus

Encoding Separate Choice


Proof: using a sophisticated type system by Kobayashi et al.


Page 198: Encodings into Asynchronous Pi Calculus

Encoding Mixed Choice

So, we seem to know how to implement input-guards and how to implement output-guards.

Why not reuse the same encoding also for the case of mixed choice?

90Wednesday, October 14, 2009

Page 199: Encodings into Asynchronous Pi Calculus

Symmetric Cyclic Wait

where σ denotes an injective renaming function. While the first condition merely requires that the candidate encoding be compatible with the renaming of free channels, the second condition represents the requirement that an encoding of mixed-guarded choice should be "truly distributed," in the sense that it is not allowed to have a mediating process M, as in

⟦P1 | P2⟧ = (νx1, ..., xn)(⟦P1⟧ | M | ⟦P2⟧)    (3)

which could monitor parallel activities via the internal names x1, ..., xn.

Reasonable means, according to Palamidessi [Pal97], "we call reasonable a semantics which distinguishes two processes P and Q whenever in some computation of P the actions on certain intended channels are different from those in any computation of Q." This includes sensitivity to divergence: since an action on an intended channel in some computation of P is required to happen in any computation of Q, infinite loops in computations of Q that do not mention the intended action are detected.

Palamidessi's impossibility theorem for encodings of mixed choice is a corollary of her formal separation result between π_s^mix and π_a (and also π_s^sep). Similar to previous work of Bougé [Bou88] within the setting of CSP, it is based on the ability or inability of the calculi to express leader election algorithms in symmetric networks (here, of π-calculus processes). Such algorithms require the ability to break symmetries in communication graphs, like the atomic agreement of two processes about two values (e.g., the process-id of the leader). π_s^mix can break such symmetries, e.g., in the parallel composition of "symmetric" choices,

P | Q =def y0![0].P0 + y1?[x].P1 | y0?[x].Q0 + y1![1].Q1,    (4)

where symmetry means that the program code of P and Q is identical under structural congruence and renaming of process-ids modulo 2. We end up with either of the asymmetric systems P0 | Q0[0/x] or P1[1/x] | Q1. In contrast, the above symmetric system could not be written in π_a, since mixed-guarded choice is not a part of this language. Instead, corresponding systems with concurrently enabled input- and output-actions (see the diagram for a process which mimics the behavior of the above P) would behave under the regime of a confluence property.

[Confluence diagram: from P, the action y0![0] leads to P0 and the action y1?[x] leads to P1; performing the respective other action from P0 or P1 leads to the common state P′.]

Here, since both P and the corresponding Q would behave confluently, the symmetry of P | Q would be preserved under computation; i.e., no leader could be

290 UWE NESTMANN


91Wednesday, October 14, 2009

Page 200: Encodings into Asynchronous Pi Calculus

Symmetric Cyclic Wait


P and Q’s code is symmetric.

In process ⟦P | Q⟧, imagine the situation where both receivers, for y0 and y1, have input the matching request and afterwards successfully tested their own choice-lock. Here, both have to wait for their respective sender's choice-lock to become available again, but neither of them will, so both receivers remain blocked forever. This symmetric cyclic-wait situation is very similar to the classical "dining philosophers" problem [RL94], where several (in our case, two) processes compete for mutually exclusive access to forks (locks).

In process ⟦I⟧, the sender's request on y could be consumed by the competing receiver branch, which results in a deadlock situation, because the receiver would try to test the same lock twice, which is impossible.
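The cyclic wait can be made concrete with a tiny hand-rolled model (our own, purely illustrative): each receiver already holds its own choice-lock and still wants the other side's, so every wanted lock is held by the other waiter and nobody can proceed.

```python
# Which locks each blocked receiver in [[P | Q]] already holds / still wants.
holds = {"receiver_P": "lock_P", "receiver_Q": "lock_Q"}
wants = {"receiver_P": "lock_Q", "receiver_Q": "lock_P"}

def blocked_forever(holds, wants):
    """True if every waiting process wants a lock held by some waiter:
    a (simplified) sufficient condition for the symmetric cyclic wait."""
    held = set(holds.values())
    return all(w in held for w in wants.values())

assert blocked_forever(holds, wants)   # the dining-philosophers deadlock
```

As soon as one receiver wanted a lock that is actually free, the condition would fail and that receiver could proceed.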

Breaking the symmetry. In distributed computing, one method to resolve cyclic dependencies among processes is by using time-outs or probabilistic algorithms for the attempt to acquire some lock. Then, however, we face the problem of infinite loops, such that randomized solutions are not "reasonable" [Pal97], although it is known that solutions exist that guarantee progress with probability 1 [RL94]. If, in such cases, we assume fair execution schedulers, then divergence is not harmful anymore, as long as there is no danger of livelocks.

Another method, known from the distributed implementation of concurrent languages, is exploiting a total order among the threads in the system by, for example, always choosing the lock of the smaller thread first [Ber80, BS83, Kna93] when required to make a choice. Then, the above symmetric cyclic-wait situation is immediately prevented, since both receivers choose the same thread, i.e., lock, as the first to interrogate. Note also that under a total-order assumption, symmetric networks according to [Pal97] do not exist.

In the following subsections, we adapt the methods of randomization (see Section 4.1) and total ordering of threads (see Sections 4.2 and 4.3) to our case of encoding mixed choice into the asynchronous π-calculus, and we evaluate their properties.
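The total-order idea is the same one that makes ordered lock acquisition deadlock-free in ordinary concurrent programming. A sketch with plain Python locks (an analogy only; the thread ids and helper names are ours, not part of the encoding):

```python
import threading

lock_ids = {}                      # lock -> position in the total order

def make_lock(tid):
    lk = threading.Lock()
    lock_ids[id(lk)] = tid
    return lk

def acquire_both(a, b):
    """Acquire two locks, always interrogating the smaller thread's lock
    first [Ber80, BS83, Kna93]; this rules out the symmetric cyclic wait."""
    first, second = sorted((a, b), key=lambda lk: lock_ids[id(lk)])
    first.acquire()
    second.acquire()
    return first, second

la, lb = make_lock(0), make_lock(1)

def worker(x, y, log):
    f, s = acquire_both(x, y)      # both workers end up taking la first
    log.append("done")
    s.release()
    f.release()

log = []
t1 = threading.Thread(target=worker, args=(la, lb, log))
t2 = threading.Thread(target=worker, args=(lb, la, log))
t1.start(); t2.start(); t1.join(); t2.join()
assert log == ["done", "done"]     # both finish; no cyclic wait
```

Without the sorting step, the two workers could each grab their first argument and then wait forever for the other's, reproducing the cyclic wait above.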

4.1. A Randomized Solution

Randomization means removing determinism from an algorithm and adding randomly possible computation paths. In our case, instead of choosing a fixed order for testing the locks as in Fig. 5, we might allow ourselves to test them nondeterministically in either order, and allow first-phase locks to be given back (cf. [RL94]). Of course, in our target language we cannot trivially write down "either receive from the second lock, or resend on the first lock," because in order to do so,

297 "GOOD" ENCODINGS OF GUARDED CHOICE
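In the same spirit, a randomized lock-acquisition loop (our own sketch, not the actual encoding of Section 4.1) tests the two locks in a random order and gives the first lock back when the second is unavailable, so no process holds one lock while waiting indefinitely for the other; such schemes guarantee progress only with probability 1 [RL94]:

```python
import random

def randomized_acquire(cells, rng, max_rounds=100):
    """cells: two one-element lists modelling try-locks (non-empty = free).
    Returns True once both locks are held, False if max_rounds is exhausted."""
    for _ in range(max_rounds):
        first, second = rng.sample([0, 1], 2)   # pick a random test order
        if not cells[first]:
            continue                            # first lock busy: retry
        token = cells[first].pop()              # grab the first lock
        if cells[second]:
            cells[second].pop()
            return True                         # got both locks: commit
        cells[first].append(token)              # back off: release the first
    return False

rng = random.Random(42)
assert randomized_acquire([["t"], ["t"]], rng) is True
```

The back-off step is what breaks the cyclic wait: a process never blocks while holding a lock, it only retries.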


Page 201: Encodings into Asynchronous Pi Calculus

Symmetric Cyclic Wait

where _ denotes an injective renaming function. While the first condition merelyrequires that the candidate encoding be compatible with the renaming of freechannels, the second condition represents the requirement that an encoding ofmixed-guarded choice should be ``truly distributed,'' in the sense that it is notallowed to have a mediating process M, as in

!P1 | P2"=(&x1 , ..., xn)(!P1" |M | !P2" ) (3)

which could monitor parallel activities via the internal names x1 , ..., xn .

reasonable means, according to Palamidessi [Pal97], ``we call reasonable asemantics which distinguishes two processes P and Q whenever in some computa-tion of P the actions on certain intended channels are different from those in anycomputation of Q.'' This includes sensitivity to divergence since an action on anintended channel in some computation of P is required to happen in any computa-tion of Q, so infinite loops in computations of Q that do not mention the intendedaction are detected.

Palamidessi's impossibility theorem for encodings of mixed choice is a corollaryof her formal separation result between ?mix

s and ?a (and also ?seps ). Similar to pre-

vious work of Bouge! [Bou88] within the setting of CSP, it is based on the abilityor inability of the calculi to express leader election algorithms in symmetricnetworks (here, of ?-calculus processes). Such algorithms require the ability tobreak symmetries in communication graphs, like the atomic agreement of two pro-cesses about two values (e.g., the process" id of the leader). ?mix

s can break suchsymmetries, e.g., in the parallel composition of ``symmetric'' choices,

P |Q=def y0![0].P0+ y1?[x].P1 | y0?[x].Q0+ y1 ![1].Q1 , (4)

where symmetry means that the program code of P and Q is identical under struc-tural congruence and renaming of process" id's modulo 2, we end up with either ofthe asymmetric systems P0 |Q0[0!x] or P1[1!x] |Q1 . In contrast, the above sym-metric system could not be written in ?a since mixed-guarded choice is not a partof this language. Instead, corresponding systems with concurrently enabled input-and output-actions (see the diagram for a process which mimics the behavior of theabove P) would behave under the regime of a confluence property.

y0! [0] y1? [x]

y1? [x] y0! [0]

P

P0 P1

P$

Here, since both P and the corresponding Q would behave confluently, the sym-metry of P |Q would be preserved under computation; i.e., no leader could be

290 UWE NESTMANN

where _ denotes an injective renaming function. While the first condition merelyrequires that the candidate encoding be compatible with the renaming of freechannels, the second condition represents the requirement that an encoding ofmixed-guarded choice should be ``truly distributed,'' in the sense that it is notallowed to have a mediating process M, as in

!P1 | P2"=(&x1 , ..., xn)(!P1" |M | !P2" ) (3)

which could monitor parallel activities via the internal names x1 , ..., xn .

reasonable means, according to Palamidessi [Pal97], ``we call reasonable asemantics which distinguishes two processes P and Q whenever in some computa-tion of P the actions on certain intended channels are different from those in anycomputation of Q.'' This includes sensitivity to divergence since an action on anintended channel in some computation of P is required to happen in any computa-tion of Q, so infinite loops in computations of Q that do not mention the intendedaction are detected.

Palamidessi's impossibility theorem for encodings of mixed choice is a corollaryof her formal separation result between ?mix

s and ?a (and also ?seps ). Similar to pre-

vious work of Bouge! [Bou88] within the setting of CSP, it is based on the abilityor inability of the calculi to express leader election algorithms in symmetricnetworks (here, of ?-calculus processes). Such algorithms require the ability tobreak symmetries in communication graphs, like the atomic agreement of two pro-cesses about two values (e.g., the process" id of the leader). ?mix

s can break suchsymmetries, e.g., in the parallel composition of ``symmetric'' choices,

P |Q=def y0![0].P0+ y1?[x].P1 | y0?[x].Q0+ y1 ![1].Q1 , (4)

where symmetry means that the program code of P and Q is identical under struc-tural congruence and renaming of process" id's modulo 2, we end up with either ofthe asymmetric systems P0 |Q0[0!x] or P1[1!x] |Q1 . In contrast, the above sym-metric system could not be written in ?a since mixed-guarded choice is not a partof this language. Instead, corresponding systems with concurrently enabled input-and output-actions (see the diagram for a process which mimics the behavior of theabove P) would behave under the regime of a confluence property.

y0! [0] y1? [x]

y1? [x] y0! [0]

P

P0 P1

P$

Here, since both P and the corresponding Q would behave confluently, the sym-metry of P |Q would be preserved under computation; i.e., no leader could be

290 UWE NESTMANN

P and Q’s code is symmetric.

In process !P |Q", imagine the situation, where both receivers for y0 and y1 haveinput the matching request and afterwards successfully tested their own choice-lock.Here, both have to wait for their respective sender's choice-lock to becomeavailable again, but neither of them will, so both receivers remain blocked forever.This symmetric cyclic-wait situation is very similar to the classical ``diningphilosophers'' problem [RL94], where several (in our case, two) processes competefor mutually exclusive access to forks (locks).In process !I ", the sender's request on y could be consumed by the competing

receiver branch, which results in a deadlock situation, because the receiver wouldtry to test the same lock twice, which is impossible.

Breaking the symmetry. In distributed computing, one method to resolve cyclic dependencies among processes is to use time-outs or probabilistic algorithms for the attempt to acquire some lock. Then, however, we face the problem of infinite loops, so randomized solutions are not "reasonable" [Pal97], although it is known that solutions exist that guarantee progress with probability 1 [RL94]. If, in such cases, we assume fair execution schedulers, then divergence is not harmful anymore, as long as there is no danger of livelocks.

Another method, known from the distributed implementation of concurrent languages, is to exploit a total order among the threads in the system by, for example, always choosing the lock of the smaller thread first [Ber80, BS83, Kna93] when required to make a choice. Then, the above symmetric cyclic-wait situation is immediately prevented, since both receivers choose the same thread, i.e., lock, as the first to interrogate. Note also that under a total-order assumption, symmetric networks according to [Pal97] do not exist.

In the following subsections, we adapt the methods of randomization (see Section 4.1) and total ordering of threads (see Sections 4.2 and 4.3) to our case of encoding mixed choice into the asynchronous π-calculus, and we evaluate their properties.
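The total-order idea can be sketched with threads that always acquire the lower-numbered lock first. Again this is a minimal analogy with invented names (`commit`, `locks`), not the paper's encoding.

```python
import threading

locks = [threading.Lock(), threading.Lock()]
results = []

def commit(me, other):
    # Interrogate locks in the global order on thread ids, smaller first.
    first, second = sorted((me, other))
    with locks[first]:
        with locks[second]:
            results.append(me)   # critical section: commit the choice

threads = [threading.Thread(target=commit, args=(0, 1)),
           threading.Thread(target=commit, args=(1, 0))]
for t in threads: t.start()
for t in threads: t.join()
assert sorted(results) == [0, 1]   # no cyclic wait: both always get through
```

Because both threads contend for `locks[0]` first, a cycle in the wait-for graph is impossible.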

4.1. A Randomized Solution

Randomization means removing determinism from an algorithm and adding randomly possible computation paths. In our case, instead of choosing a fixed order for testing the locks as in Fig. 5, we might allow ourselves to test them nondeterministically in either order and to allow first-phase locks to be given back (cf. [RL94]). Of course, in our target language we cannot trivially write down "either receive from the second lock, or resend on the first lock," because in order to do so, we would again need mixed choice.
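The give-back-and-retry idea can be sketched with try-locks and random pauses, in the spirit of [RL94]: take your own lock, try the other one, and on failure return the first-phase lock and retry after a random pause. All names are ours; termination here is only probabilistic, mirroring the "progress with probability 1" caveat.

```python
import random
import threading
import time

locks = [threading.Lock(), threading.Lock()]
done = []

def attempt(i):
    rng = random.Random(i)                        # per-thread randomness
    while True:
        locks[i].acquire()                        # first-phase lock
        if locks[1 - i].acquire(timeout=0.01):    # second-phase test
            done.append(i)                        # commit: both locks held
            locks[1 - i].release()
            locks[i].release()
            return
        locks[i].release()                        # give the first lock back
        time.sleep(rng.random() * 0.01)           # random pause breaks lockstep

threads = [threading.Thread(target=attempt, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
assert sorted(done) == [0, 1]
```

Unlike the naive protocol, a failed second-phase test releases the first-phase lock, so the cyclic wait cannot persist; the random pauses make repeated symmetric collisions vanishingly unlikely.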

297 "GOOD" ENCODINGS OF GUARDED CHOICE

Mixed guarded choice can break the symmetry! But the encodings of separate choice may deadlock ...

91Wednesday, October 14, 2009

Page 202: Encodings into Asynchronous Pi Calculus

Symmetric Cyclic Wait

where σ denotes an injective renaming function. While the first condition merely requires that the candidate encoding be compatible with the renaming of free channels, the second condition represents the requirement that an encoding of mixed-guarded choice should be "truly distributed," in the sense that it is not allowed to have a mediating process M, as in

⟦P1 | P2⟧ = (νx1, ..., xn)(⟦P1⟧ | M | ⟦P2⟧),    (3)

which could monitor parallel activities via the internal names x1, ..., xn.

Reasonable means, according to Palamidessi [Pal97]: "we call reasonable a semantics which distinguishes two processes P and Q whenever in some computation of P the actions on certain intended channels are different from those in any computation of Q." This includes sensitivity to divergence, since an action on an intended channel in some computation of P is required to happen in any computation of Q, so infinite loops in computations of Q that do not mention the intended action are detected.

Palamidessi's impossibility theorem for encodings of mixed choice is a corollary of her formal separation result between π_s^mix and π_a (and also π_s^sep). Similar to previous work of Bougé [Bou88] within the setting of CSP, it is based on the ability or inability of the calculi to express leader election algorithms in symmetric networks (here, of π-calculus processes). Such algorithms require the ability to break symmetries in communication graphs, like the atomic agreement of two processes about two values (e.g., the process id of the leader). π_s^mix can break such symmetries, e.g., in the parallel composition of "symmetric" choices,

P | Q =def y0![0].P0 + y1?[x].P1 | y0?[x].Q0 + y1![1].Q1,    (4)



FIG. 5. Ssep → T.

either enabling the sender's continuation to proceed, or aborting it. Since output-guards are also branches in a choice whose state must be tested, the corresponding lock r is, in addition to z̃ and a, transmitted to some matching receiver that then performs the required choice-test.

Input-guards, revisited. The encoding is more elaborate due to the increased information that is transmitted by send-requests. First, there are now two locks that have to be tested in some order. In Fig. 5, we chose to test the local lock l first and, only in the case of a positive outcome, to test the remote lock r. (This particular order is useful in an actual distributed implementation, where remote communication is usually much more expensive than local communication.) Second, we have to use the acknowledgment channel correctly, which means that a positive acknowledgment may only be sent if both locks were tested positively. Third, in the case that the test of the sender's choice-lock was negative, we must not resend the send-request; instead, and only if the test of the receiver's choice-lock was positive, we have to restart the receiver process from the beginning by allowing it to try other send-requests. In Fig. 5, this is implemented by recursively sending a trigger-signal to a replicated input process on b that represents the receiver-loop's entry point. In order to match this protocol of synchronous outputs, the encoding of input-guarded replication has to check the sender's lock and, based on its value, either to commit and trigger a copy of its continuation, or to abort the sender.

Evaluation. An encoding is deadlock/divergence-free if it does not add deadlocks/loops to the behavior of terms; a deadlock/loop that occurs in (some derivative of) an encoded term necessarily results from a deadlock/loop already occurring in (some derivative of) the original term. Note that divergence-freedom implies livelock-freedom.

To prove deadlock-freedom, we take advantage of type information for the channels that are added in the encoding. We refine channel types according to Kobayashi's classification [Kob97], which distinguishes between reliable and unreliable channels. The following three types of channels are reliable:

• linear channels, which are used just once (like our acknowledgment channels a),

295 "GOOD" ENCODINGS OF GUARDED CHOICE


Page 203: Encodings into Asynchronous Pi Calculus

Symmetric Cyclic Wait


Page 204: Encodings into Asynchronous Pi Calculus

Incest

• replicated input channels, whose input ends must not occur more than once, but whose output ends may be used arbitrarily often (like our restart channels b), and

• mutex channels, which need to obey the invariants (of our lock channels l) that we mentioned in Section 3.1. Also, a message must be available right after their creation.
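These three usage disciplines can be phrased as runtime checks. The sketch below only counts uses of invented Python classes; it does not implement Kobayashi's type system.

```python
class Linear:
    """Used for exactly one communication (like acknowledgment channels a)."""
    def __init__(self): self.sent = self.received = 0
    def send(self):
        assert self.sent == 0, "linear channel: second send"
        self.sent += 1
    def receive(self):
        assert self.received == 0, "linear channel: second receive"
        self.received += 1

class ReplicatedInput:
    """One replicated input end; arbitrarily many outputs (like channels b)."""
    def __init__(self): self.input_ends = 0
    def install_receiver(self):
        assert self.input_ends == 0, "replicated input: second input end"
        self.input_ends += 1
    def send(self):
        pass   # any number of output uses is fine

class Mutex:
    """At most one message in flight; one available right after creation."""
    def __init__(self): self.messages = 1
    def take(self):
        assert self.messages == 1, "mutex invariant violated"
        self.messages -= 1
    def put_back(self):
        assert self.messages == 0, "mutex invariant violated"
        self.messages += 1

a = Linear(); a.send(); a.receive()
b = ReplicatedInput(); b.install_receiver(); b.send(); b.send()
l = Mutex(); l.take(); l.put_back()
```

Violating any discipline (a second send on `a`, a second receiver on `b`, a double take on `l`) trips the corresponding assertion.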

Kobayashi also developed a typing system that provides a behavioral property for well-typed processes: every (immediate) deadlock can only be caused by unreliable channels. A subject reduction theorem extends the proposition to deadlocks that may ever occur in derivatives of well-typed processes.

As indicated above, every channel that is added by our choice encodings is reliable. Since we can further show that every encoded term is well typed with respect to Kobayashi's type system (when regarding every source-level channel as unreliable), we already get the desired proposition.

Proposition 3.2.1. Ssep → T is deadlock-free.

Proof. By type-checking. More details can be found in Appendix B. ∎

Proposition 3.2.2. Ssep → T is divergence-free.

Proof. The only possibility for an encoding to add an infinite loop would be in the translation of input-guards, since it is only there that we use replication. In order to trigger a copy of this replication, three conditions must be met: (1) the receiver's lock must still contain t, (2) a matching send-request must be consumed from the environment, and (3) this sender's lock must contain f. However, in this situation, by "looping back," the consumed message will not be given back to the system; in other words, the system's state is decreased. This cannot be done infinitely often, unless an infinite number of matching send-requests is produced. This, in turn, is only possible by using replication, e.g., by (νx)(x![] | x?*[].(x![] | y![z̃] ...)), but then, due to the encoding of replicated input, this replication of messages must have been already present in the source language.6 ∎
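The core of the divergence argument is a decreasing measure: each loop-back consumes one pending send-request and re-emits none, so the number of pending requests strictly decreases. A toy Python rendering of that counting argument (the names are ours):

```python
def receiver_loop(pending_requests):
    """Each iteration models one loop-back through the replicated receiver:
    a matching send-request is consumed and never given back."""
    steps = 0
    while pending_requests:          # condition (2): a request is available
        pending_requests.pop()       # consumed; the measure decreases by one
        steps += 1
    return steps                     # termination after finitely many steps

# With finitely many requests, the loop terminates; divergence would need
# an infinite supply of requests, which only source-level replication gives.
assert receiver_loop(["req"] * 5) == 5
```

This is only the counting skeleton of the proof; the real argument additionally relies on conditions (1) and (3) on the lock values.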

4. IMPLEMENTING MIXED CHOICE

The naïve attempt is to simply reuse the encoding for separate choices of the previous section, as is, for encoding mixed choices. This seems sensible at first, because in both cases all input- and output-guards are branches in choices, so they should behave similarly. However, we are faced with two inherent sources of potential deadlock in the mixed setting: one is for the symmetric term P | Q := y0![0].P0 + y1?[x].P1 | y0?[x].Q0 + y1![1].Q1 of Eq. (4) in the Introduction, the other is for I := y![z].P + y?[x].Q. The deadlock situations may become clear from a spatial representation:
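For the term I, the failure mode is that the send-request emitted by the output branch is consumed by the sibling input branch of the same choice, so one and the same choice-lock would have to be tested twice. A toy sketch (our names, not the encoding):

```python
from queue import Queue

def fresh_lock():
    l = Queue(maxsize=1)
    l.put(True)                 # mutex channel: message available at creation
    return l

# I = y![z].P + y?[x].Q has a single choice-lock shared by both branches.
lock = fresh_lock()

# If the input branch consumes its own sender's request, the protocol must
# test the receiver's lock and then the sender's lock -- but both are this
# very lock, and a mutex message disappears when tested.
assert lock.get() is True       # first test succeeds and consumes the message
assert lock.empty()             # the second test would block forever: deadlock
```

This is why the separate-choice encoding cannot simply be reused for mixed choice.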

296 UWE NESTMANN

6 Note that the presence of output-guarded replication in the source language would not invalidate this result, since in the encoding this construct would be translated by mentioning a lock that always carries t due to the lack of competitors, thus invalidating condition (3) in the proof.

92Wednesday, October 14, 2009

Page 205: Encodings into Asynchronous Pi Calculus

Incest


Page 206: Encodings into Asynchronous Pi Calculus

Incest

In process !P |Q", imagine the situation, where both receivers for y0 and y1 haveinput the matching request and afterwards successfully tested their own choice-lock.Here, both have to wait for their respective sender's choice-lock to becomeavailable again, but neither of them will, so both receivers remain blocked forever.This symmetric cyclic-wait situation is very similar to the classical ``diningphilosophers'' problem [RL94], where several (in our case, two) processes competefor mutually exclusive access to forks (locks).In process !I ", the sender's request on y could be consumed by the competing

receiver branch, which results in a deadlock situation, because the receiver wouldtry to test the same lock twice, which is impossible.

Breaking the symmetry. In distributed computing, one method to resolve cyclicdependencies among processes is by using time-outs or probabilistic algorithms forthe attempt to acquire some lock. Then, however, we face the problem of infiniteloops, such that randomized solutions are not ``reasonable'' [Pal97], although it isknown that solutions exist that guarantee progress with probability 1 [RL94]. If,in such cases, we assume fair execution schedulers, then divergence is not harmfulanymore, as long as there is no danger for live-locks.Another method, known from the distributed implementation of concurrent

languages, is exploiting a total order among the threads in the system by, forexample, always choosing the lock of the smaller thread first [Ber80, BS83,Kna93], when required to make a choice. Then, the above symmetric cyclic-waitsituation is immediately prevented since both receivers choose the same thread, i.e.,lock, as the first to interrogate. Note also that under a total order assumption sym-metric networks according to [Pal97] do not exist.In the following subsections, we adapt the methods of randomization (see

Section 4.1) and total ordering of threads (see Sections 4.2 and 4.3) to our case ofencoding mixed choice into the asynchronous ?-calculus, and we evaluate theirproperties.

4.1. A Randomized Solution

Randomization means removing determinism from an algorithm and adding ran-domly possible computation paths. In our case, instead of choosing a fixed orderfor testing the locks as in Fig. 5, we might allow ourselves to test them nondeter-ministically in either order and allow first-phase locks to be given back (cf.[RL94]). Of course, in our target language we cannot trivially write down ``eitherreceive from the second lock, or resend on the first lock,'' because in order to do so,

297``GOOD'' ENCODINGS OF GUARDED CHOICE

v replicated input channels, whose input ends must not occur more than once,but whose output ends may be used arbitrarily often (like our restart channels b),and

v mutex channels, which need to obey the invariants (of our lock channels l )that we mentioned in Section 3.1. Also, a message must be available right after theircreation.

Kobayashi also developed a typing system that provides a behavioral propertyfor well-typed processes; every (immediate) deadlock can only be caused byunreliable channels. A subject reduction theorem extends the proposition todeadlocks that may ever occur in derivatives of well-typed processes.As indicated above, every channel that is added by our choice encodings is

reliable. Since we can further show that every encoded term is well typed withrespect to Kobayashi's type system (when regarding every source-level channel asunreliable), we already get the desired proposition.

Proposition 3.2.1. The encoding of separate choice is deadlock-free.

Proof. By type-checking. More details can be found in Appendix B. ∎

Proposition 3.2.2. The encoding of separate choice is divergence-free.

Proof. The only possibility for an encoding to add an infinite loop would be in the translation of input-guards, since it is only there that we use replication. In order to trigger a copy of this replication, three conditions must be met: (1) the receiver's lock must still contain t, (2) a matching send-request must be consumed from the environment, and (3) this sender's lock must contain f. However, in this situation, by ``looping back,'' the consumed message will not be given back to the system; in other words, the system's state is decreased. This cannot happen infinitely often, unless an infinite number of matching send-requests is produced. This, in turn, is only possible by using replication, e.g., by (νx)( x![] | x?*[].( x![] | y![z̃] ··· ) ), but then, due to the encoding of replicated input, this replication of messages must have been already present in the source language.6 ∎

4. IMPLEMENTING MIXED CHOICE

The naïve attempt is to simply reuse the encoding for separate choices of the previous section, as is, for encoding mixed choices. This seems sensible at first, because in both cases all input- and output-guards are branches in choices, so they should behave similarly. However, we are faced with two inherent sources of potential deadlock in the mixed setting: one is the symmetric term P | Q := ( y0![0].P0 + y1?[x].P1 ) | ( y0?[x].Q0 + y1![1].Q1 ) of Eq. (4) in the Introduction, the other is I := y![z].P + y?[x].Q. The deadlock situations may become clear from a spatial representation:


6 Note that the presence of output-guarded replication in the source language would not invalidate this result, since, in the encoding, this construct would be translated by mentioning a lock that always carries t due to the lack of competitors, thus invalidating condition (3) in the proof.

No intra-communication possible on the source! But the encoding of separate choice may deadlock ...
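The deadlock in the separate-choice encoding of these mixed terms is a cyclic wait: each side holds its own lock and blocks on the other's. A toy wait-for-graph checker makes the picture concrete (illustrative code, not part of the encoding):

```python
def has_wait_cycle(waits):
    # waits maps each process to the process whose lock it is
    # currently blocked on; a cycle in this graph is a deadlock.
    for start in waits:
        seen, cur = set(), start
        while cur in waits:
            if cur in seen:
                return True
            seen.add(cur)
            cur = waits[cur]
    return False

# In P | Q each side commits to its own lock, then waits for the
# other side's lock: the symmetric cyclic wait of Eq. (4).
assert has_wait_cycle({"P": "Q", "Q": "P"})

# Under a total order both sides interrogate the same lock first,
# so at most a chain (no cycle) of waiting can arise.
assert not has_wait_cycle({"P": "Q"})
```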


Page 207: Encodings into Asynchronous Pi Calculus

Nondeterminism/Randomization

FIG. 6. Randomized encoding of separate choice, for use as an encoding of mixed choice.

we would need a mixed choice construct. Note that we cannot use internal choice either, because it would only delay potential deadlocks, which arise when the internal decision favors the branch ``waiting for the second lock.''

In Fig. 6, we model a randomized solution based on the encoding in Fig. 5 by only supplying a new clause for receivers.7 We use a local state, implemented as a mutex channel s that carries a tag8 (and a boolean value) that tells whether none (tag N), the local (tag L), or the remote (tag R) lock is currently held by the receiver. The tag information, initially N (w.l.o.g. with value f), is supplied by two processes, called lock-checkers, waiting at lcl and rmt, which try to get hold of the local lock l and the remote lock r, respectively. After grabbing a lock, these processes need to read the current state; if the complementary lock is already held, then the two lock values are passed on to the analyzer process waiting at bth, and the state s is initialized. Otherwise, the state s is appropriately updated to announce success for getting the current lock and, in addition to this announcement, a randomizer process at rnd is started that competes with the lock-checkers for reading the state. If the randomizer succeeds in reading the state, it resets the state and resends the lock, while restarting the corresponding lock-checker. If both lock-checkers succeed in reading the state without the randomizer interfering, then s is left with its initial value and is finally consumed by the active randomizer to terminate the system by resetting the state without restarting any of the lock-checkers and without restarting the randomizer itself. Note that, after restarting the whole receiver at b in the case of local success (bL = t) and remote failure (bR = f), a new state will be created when a new request on y arrives.
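The randomizer's job, giving a first-phase lock back and retrying in a freshly chosen order, is the discipline of [RL94]. A thread-based sketch of that discipline (our illustrative rendering, not the encoding of Fig. 6):

```python
import random
import threading

def randomized_double_acquire(l, r):
    # Pick the first lock at random, block on it, then only *try*
    # the second. On failure, give the first-phase lock back and
    # restart, so no receiver ever waits while holding a lock.
    while True:
        first, second = (l, r) if random.random() < 0.5 else (r, l)
        first.acquire()
        if second.acquire(blocking=False):
            return
        first.release()  # hand the lock back, then retry

l, r = threading.Lock(), threading.Lock()
finished = []

def receiver():
    for _ in range(200):
        randomized_double_acquire(l, r)
        r.release()
        l.release()
    finished.append(True)

t1 = threading.Thread(target=receiver)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start(); t1.join(); t2.join()
```

The retry loop is exactly where the encoding trades deadlock for possible divergence: each acquisition terminates only with probability 1, not in a bounded number of steps.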

Evaluation. Like the encoding for separate choice, the randomized encoding for mixed choice in Fig. 6 is uniform, since restriction and parallelism are encoded homomorphically.


7 A similar solution is used in the implementation of receivers in the join-calculus [FG96].

8 We use this special syntax for the sake of readability; since we only need three different tags, we can easily simulate them by two boolean tags and use the corresponding if- and test-expressions for matching.


Page 208: Encodings into Asynchronous Pi Calculus

Nondeterminism/Randomization

The encoding is:

• loosely inspired by [Rabin, Lehmann 94]

• (strongly) compositional

• obeys a name discipline

• deadlock-free

• ... but not livelock-free ... it introduces divergence

• fully abstract in a very restricted way ...

• a candidate for an explicit probabilistic treatment


Page 209: Encodings into Asynchronous Pi Calculus

Mixed-Guarded Choice

[[νx P]] = νx [[P]]
[[P1 | P2]] = [[P1]] | [[P2]]
[[X]] = X
[[rec X P]] = rec X [[P]]

[[ ∑i αi.Pi + ∑j τ.Qj + ∑k βk.Rk ]] = νl ( l〈t〉 | νh ( h | ∏i [[αi.Pi]]lh ) | ∏j [[τ.Qj]]l | ∏k [[βk.Rk]]l )

[[xy.P]]rh = νa ( x〈r, a, h, y〉 | a(b). if b then [[P]] else 0 )

[[τ.Q]]l = l(b).( l〈f〉 | if b then [[Q]] else 0 )

[[x(y).R]]l = rec X ( x(r, a, h, y). h. rec Y (
    1/2 τ. l(bL).( (1−ε) r(bR).B + ε τ.( l〈bL〉 | Y ) )
  + 1/2 τ. r(bR).( (1−ε) l(bL).B + ε τ.( r〈bR〉 | Y ) ) ) )

where B = if bL ∧ bR then h | l〈f〉 | r〈f〉 | a〈t〉 | [[R]]
      else if bL then h | l〈t〉 | r〈f〉 | a〈f〉 | X
      else if bR then h | l〈f〉 | r〈t〉 | x〈r, a, h, y〉
      else h | l〈f〉 | r〈f〉 | a〈f〉

Table 6.2. The encoding of π into πpa. In the translation of the mixed choice, the αi's represent output actions, and the βk's represent input actions. ε stands for a real number in [0, 1).


Page 210: Encodings into Asynchronous Pi Calculus

Mixed-Guarded Choice

(Table 6.2 repeated from the previous slide.)

Co-stimulated the development of probabilistic Pi Calculus.

Page 211: Encodings into Asynchronous Pi Calculus

Mixed-Guarded Choice

(Table 6.2 repeated from the previous slide.)

Co-stimulated the development of probabilistic Pi Calculus.

Fully abstract w.r.t. may/must testing.

Page 212: Encodings into Asynchronous Pi Calculus

A “Bakery” Algorithm (I)

FIG. 7. A ``bakery'' solution for encoding mixed choice.

Evaluation. The encoding [[·]] (with top-level) is not uniform, since [[P | Q]] ≠ [[P]] | [[Q]] (see also Appendix A), whereas the mere inner encoding [[·]]c is uniform. The encoding is deadlock-free, since we (1) prevent cyclic waiting on locks by using a variant of the bakery algorithm, and (2) deal with ``incestuous'' communication by checking the equality n = m of the requests' ids, such that an unintended send-request is resent and the receiver's loop is restarted. Knabe's graph-based proof sketch [Kna93] for deadlock-freedom of his implementation could be adapted to the current setting. See Appendix B for a discussion of an extension of Kobayashi's typing system [Kob97] to cope with the encoding.

Unfortunately, the encoding is not quite divergence-free, due to the way we avoid deadlocks in the case of ``incestuous'' self-communication in the n = m clause; a sender's request may be reconsumed again and again. Yet, the encoding is still livelock-free, since for every enabled matching competitor of an incestuous pair of branches it is always, i.e., again and again, possible to stop the self-communication.
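The two ingredients of this deadlock-freedom argument can be distilled into a few lines: a total order on request ids fixes the lock-probing order, and the n = m test catches ``incestuous'' self-requests. (Illustrative sketch; names are ours, not the encoding's.)

```python
def probe_order(n, m):
    # Bakery-style tie-breaking on request ids: the lock belonging
    # to the smaller id is always interrogated first, so a cycle of
    # waiting receivers cannot form.
    if n == m:
        # Self-communication between two branches of one choice:
        # resend the request and restart the receiver's loop.
        return None
    return (n, m) if n < m else (m, n)

assert probe_order(3, 7) == (3, 7)
assert probe_order(7, 3) == (3, 7)
assert probe_order(5, 5) is None  # "incestuous" request detected
```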

4.3. A ``Practical'' Solution

The main theme in this subsection is the approach of changing the source and target languages of the choice encodings to reflect some phenomena that occur in



Pages 213–221: Encodings into Asynchronous Pi Calculus

A “Bakery” Algorithm (I)

(FIG. 7 and the Evaluation text repeated from the previous slide, with incremental annotations:)

The following encoding is not strongly compositional!

In fact, it is centralized ...

The top-level adds another layer to the encoding, so it cannot be composed with other instances.

?

Page 222: Encodings into Asynchronous Pi Calculusbasics.sjtu.edu.cn/summer_school/basics09/Slides/...Relative Expressiveness (II) This can also be formulated without actually saying what is


A “Bakery” Algorithm (II)

FIG. 7. A ``bakery'' solution for S_mix → T.


The total-order property prevents deadlock.


The equality check prevents ``incestuous'' self-communication (but produces divergence).
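The divergence caused by the n = m check can be seen in a small simulation: a receiver that consumes a request carrying its own id puts it back and restarts, possibly reconsuming it again and again, yet commits as soon as a request with a different id turns up. This Python sketch is only an illustration of that behaviour under assumed names (`receive_step`, the id values), not the actual encoding.

```python
import collections

def receive_step(my_id, requests):
    """One iteration of the receiver's loop over pending send-requests."""
    n = requests.popleft()      # consume some pending request
    if n == my_id:              # incestuous: request from our own choice
        requests.append(n)      # resend the request ...
        return None             # ... and restart the receive loop
    return n                    # genuine partner: commit

# Two of the receiver's own requests queued ahead of one genuine partner.
requests = collections.deque([1, 1, 2])
my_id = 1
partner, steps = None, 0
while partner is None:
    partner = receive_step(my_id, requests)
    steps += 1
print(partner, steps)   # commits to id 2 after reconsuming its own requests
```

With an adversarial scheduler the own request could be reconsumed forever (divergence), but whenever a matching competitor is enabled, committing to it remains possible, which is the livelock-freedom claim above.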


Overview

Part 0a: Encodings vs Full Abstraction
  Comparison of Languages/Calculi
  Correctness of Encodings

Part 0b: Asynchronous Pi Calculus

Part 1: Input-Guarded Choice
  Encoding (Distributed Implementation)
  Decoding (Correctness Proof)

Part 2: Output-Guarded Choice
  Encoding Separate Choice
  Encoding Mixed Choice

Conclusions


A bit of cheating: encodings depend on n-ary choice.

There is no theory of encodings yet.

The relation to distributed implementations—and to Distributed Computing in general—is not yet sufficiently understood.

Asynchronous Pi is “sufficiently” expressive ... for programming. (Who needs mixed choice?)

The role of name-passing.

The role of coupled simulation.
