Course notes for COMP3152/9152: Comparative Concurrency Semantics
This file will keep track of the material covered so far, and list the
handouts distributed to date.
Thursday 1-3-2012
Today I introduced the topics to be treated in this course:
formalising specifications as well as implementations of concurrent
systems, studying the criteria for deciding whether an implementation
meets a specification, and techniques for proving this.
Both specifications and implementations can be represented by means
of models of concurrency such as
labelled transition systems (LTSs). I presented the definition
of LTSs; it also appears in the first section
of this handout.
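Since the formal definition is in the handout, here is only a minimal sketch (my own encoding, not the handout's notation) of an LTS as a Python structure: a set of states, a set of actions, a labelled transition relation, and an initial state.

```python
# Minimal sketch of a labelled transition system (LTS): states, actions,
# a set of labelled transitions (source, action, target), and an initial state.
from dataclasses import dataclass

@dataclass(frozen=True)
class LTS:
    states: frozenset
    actions: frozenset
    transitions: frozenset  # (source, action, target) triples
    initial: str

# A two-step example process: s0 --a--> s1 --b--> s2
example = LTS(
    states=frozenset({"s0", "s1", "s2"}),
    actions=frozenset({"a", "b"}),
    transitions=frozenset({("s0", "a", "s1"), ("s1", "b", "s2")}),
    initial="s0",
)

def successors(lts, state, action):
    """States reachable from `state` by a single `action`-transition."""
    return {t for (s, a, t) in lts.transitions if s == state and a == action}
```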
Another way to represent concurrent systems is as process algebraic
recursive equations. And specifications can be given by means of
temporal logic, a formalisation of English-language
specifications telling what should happen when.
I illustrated how an implementation could be given as parallel
composition of components, enforcing synchronisation on certain
actions. In general, parallel composition of many components gives
rise to the state explosion problem. One way to combat it is by
means of compositionality: proving things for each component
separately and lifting the result to the entire parallel composition by
application of some general theorem.
For deciding whether an implementation meets a specification, we
use equivalence relations and preorders.
Tuesday 6-3-2012
Today I repeated the introduction to labelled transition systems.
By Thursday you should all do the first homework to get familiar with
this notion.
Specifications and implementations can not only be represented by LTSs
or other models of concurrency, but also by process algebraic
expressions, where complex processes are built up from constants for
atomic actions using operators for sequential, alternative and
parallel composition. The most popular process algebraic languages
from the literature are CCS (the Calculus of Communicating Systems),
CSP (Communicating Sequential Processes) and ACP (the Algebra of
Communicating Processes). Today I started explaining the syntax of
ACP, focusing on the partially synchronous parallel
composition operator. I will continue with that on Thursday.
The corresponding textbook is
J.C.M. Baeten & W.P. Weijland (1990): Process Algebra, Cambridge
University Press
or
W.J. Fokkink (2000):
Introduction to Process Algebra, Texts in
Theoretical Computer Science, An EATCS Series, Springer.
The syntax of ACP, the Algebra of Communicating Processes,
features the operations
 ε (successful termination)
 δ (deadlock)
 a (action constant) for each action a.
 P.Q (sequential composition)
 P+Q (summation, choice
or alternative composition)
 P‖Q (parallel composition)
 ∂_{H}(P) (restriction, or
encapsulation) for each set of visible actions H.
 τ_{I}(P) (abstraction) for each set of
visible actions I.
The atomic actions of ACP consist of
all a, b, c, etc. from a given set A
of visible actions, and one special action τ, that is meant to be
internal and invisible to the outside world. For each application, a
partial communication function γ: A×A → A is
chosen that tells for any two actions a and b whether
they synchronise (namely if γ(a,b) is defined), and if so, what the
result of their synchronisation is: γ(a,b).
The communication function is required to be commutative and associative.
The invisible action cannot take part in synchronisations.
Here are the definitions of these operators, in terms of process graphs with a single
predicate ✔ that signals successful termination.

ε is the graph with one state and no transitions.
This one state is the initial state. It is marked with ✔.

δ is the graph with one state and no transitions.
This one state is the initial state. It is not marked as terminating.
 G.H is the process that first performs G, and upon
successful termination of G proceeds with H.
It is homework to write down a formal definition.
 G+H is obtained by taking the union of copies of G
and H with disjoint sets of states, and adding a fresh
state root, which will be the initial state of G+H.
For each transition I_{G} a→ s in G, where
I_{G} denotes the initial state of G,
there will be an extra transition root a→ s, and likewise,
for each transition I_{H} a→ s in H, where
I_{H} denotes the initial state of H,
there will be an extra transition root a→ s.
root is labelled with ✔ if
either I_{G} or I_{H} is.
 G‖H is obtained by taking the Cartesian product of the states of
G and H; that is, the states of G‖H are pairs (s,t) with
s a state from G and t a state from H. The initial state
of G‖H is the pair of initial states of G
and H. The transitions are
 (s,t) a→ (s',t) whenever s a→ s' is a transition in G,
 (s,t) a→ (s,t') whenever t a→ t' is a transition in H, and
 (s,t) c→ (s',t') whenever s a→ s' is a transition in G
and t b→ t' is a transition in H, and γ(a,b)=c.
Intuitively, G‖H allows all possible interleavings of actions
from G and actions from H. In addition, it enables
actions to synchronise with their communication partners.
 ∂_{H}(G) is just G, but with all
transitions labelled by actions in H omitted.
It is used to remove the remnants of unsuccessful communication, so
that the synchronisation enabled by parallel composition is enforced.
 τ_{I}(G) is just G, but with all
actions in I renamed into τ.
Then I defined completed trace equivalence, and
mentioned bisimulation equivalence.
I illustrated their differences with the example a(b+c)
versus ab+ac. These two processes are completed trace equivalent,
but they will turn out not to be bisimulation equivalent.
Thursday 8-3-2012
Today I explained the notion of a denotational semantics of a system
description language like ACP, and treated the denotational semantics
of ACP given above. In the tutorial we translated back and forth
between process graphs and recursive specifications in ACP.
Tuesday 13-3-2012
I made the definition of completed traces more precise.
For process graphs without a termination predicate, a finite completed trace
goes from the initial state to any state from which no further moves
are possible. We write CT^{fin}(p) for the set of
finite completed traces of process p, and CT^{∞}(p)
for the set of all finite completed traces of p together with
its infinite traces. When writing just CT(p) it
depends on context which of the two is meant (i.e. this differs from
one paper to the next).
Partial traces are defined likewise, but can end anywhere.
We write PT^{fin}(p) for the set of
finite partial traces of process p,
and PT^{∞}(p)
for the set of all finite partial traces of p together with
its infinite traces.
On processes that are equipped with a termination predicate ✔ on states
(like the accepting states from automata theory), partial traces and
completed traces are defined in such a way that we could just as well
have thought about the termination predicate that holds in a
state s as a special transition marked ✔ that goes
from s to a fresh state that is not connected in any other way.
Thus, for example, CT^{∞}(a.(ε+b.δ))
= {a, ab}.
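This example can be checked mechanically. The sketch below (my own encoding, for finite acyclic graphs only) computes the finite completed traces, treating the termination predicate as described:

```python
# Sketch: the finite completed traces of a finite acyclic process graph with a
# termination predicate ✓, treating ✓ in a state as licence to stop there
# successfully (my own encoding).

def completed_traces(initial, transitions, terminating):
    """transitions: (source, action, target) triples; terminating: ✓-states."""
    result = set()

    def explore(state, trace):
        moves = [(a, t) for (s, a, t) in transitions if s == state]
        if state in terminating:
            result.add(trace)        # stop successfully at a ✓-state
        elif not moves:
            result.add(trace)        # deadlock: no moves and no ✓
        for (a, t) in moves:
            explore(t, trace + (a,))

    explore(initial, ())
    return result

# a.(ε + b.δ): after a we may terminate, or do b into a deadlocked state.
trans = {("s0", "a", "s1"), ("s1", "b", "s2")}
ct = completed_traces("s0", trans, terminating={"s1"})
# ct == {("a",), ("a", "b")}, matching CT(a.(ε+b.δ)) = {a, ab}
```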
Besides the denotational semantics of ACP presented above, I also gave
an operational semantics of that language. The foundations of
structural operational semantics are presented in
this handout. I also introduced the language CCS
and its operational semantics.
The syntax of CCS, the Calculus of Communicating Systems,
features the operations
 0 (inaction)
 a.P (action prefix) for each action a.
 P+Q (summation, choice
or alternative composition)
 P|Q (parallel composition)
 P\a (restriction) for each action a.
The atomic actions of CCS consist of
all a, b, c, etc. from a given set A,
together with the complementary actions
ā for each a
in A, and one special action τ, that is meant to be
internal and invisible to the outside world. By convention, if we put
an overbar over a complementary action
ā, we get a again.
Here are the definitions, in terms of process graphs without
predicates and without a notion of final states, of the operators
mentioned above:

0 is the graph with one state and no transitions.
This one state is the initial state.
 a.G is obtained from G by adding a new state, which will be
the initial one, and a new a-labelled transition from this new
state to the initial state of G (which is no longer initial in a.G).
 G+H is obtained by taking the union of copies of G
and H with disjoint sets of states, and adding a fresh
state root, which will be the initial state of G+H.
For each transition I_{G} a→ s in G, where
I_{G} denotes the initial state of G,
there will be an extra transition root a→ s, and likewise,
for each transition I_{H} a→ s in H, where
I_{H} denotes the initial state of H,
there will be an extra transition root a→ s.
We can think of this operator as unwinding the initial states
(roots) of G and H, and then gluing together
these roots to form the new initial state of G+H.
 G|H is obtained by taking the Cartesian product of the states of
G and H; that is, the states of G|H are pairs (s,t) with
s a state from G and t a state from H. The initial state
of G|H is the pair of initial states of G
and H. The transitions are
 (s,t) a→ (s',t) whenever s a→ s' is a transition in G,
 (s,t) a→ (s,t') whenever t a→ t' is a transition in H, and
 (s,t) τ→ (s',t') whenever s a→ s' is a transition in G
and t ā→ t' is a transition in H.
Intuitively, G|H allows all possible interleavings of actions
from G and actions from H. In addition, it enables
actions a to synchronise with their complementary actions
ā; the result of such a
synchronisation is the internal action τ.
 G\a is just G, but with all transitions labelled a or
ā omitted.
It is used to remove the remnants of unsuccessful communication, so
that the synchronisation enabled by parallel composition is enforced.
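The synchronisation clause can also be made concrete. In this sketch (my own encoding) a visible CCS action is a pair (name, barred?), and parallel composition adds a τ-transition for every matching pair of complementary steps:

```python
# Sketch of the CCS parallel composition G|H on process graphs (my encoding).
# A visible action is a pair (name, barred): ("a", False) is a, ("a", True) is a-bar.

TAU = "tau"

def bar(action):
    name, barred = action
    return (name, not barred)

def states_of(initial, transitions):
    return {initial} | {x for (s, _, t) in transitions for x in (s, t)}

def ccs_par(g, h):
    (ig, tg), (ih, th) = g, h
    sg, sh = states_of(ig, tg), states_of(ih, th)
    trans = set()
    for (s, a, s2) in tg:                 # left component moves alone
        for t in sh:
            trans.add(((s, t), a, (s2, t)))
    for (t, a, t2) in th:                 # right component moves alone
        for s in sg:
            trans.add(((s, t), a, (s, t2)))
    for (s, a, s2) in tg:                 # a synchronises with a-bar, yielding tau
        for (t, b, t2) in th:
            if a != TAU and b != TAU and b == bar(a):
                trans.add(((s, t), TAU, (s2, t2)))
    return ((ig, ih), trans)

# a.0 | abar.0: two interleavings plus one tau-step.
g = ("p0", {("p0", ("a", False), "p1")})
h = ("q0", {("q0", ("a", True), "q1")})
init, trans = ccs_par(g, h)
```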
Thursday 15-3-2012
We have treated the first paragraph of the introduction as well as
Definition 1 from the handout on structural operational semantics.
In addition we practiced the formal derivation of transitions out of
the transition rules of the structural operational semantics of a
language. While doing so I illustrated the role of the encapsulation
operators ∂_{H}(P) of ACP. Formally there is one
such operator for every subset H of Act.
(Lectures given by Peter Höfner)
Tuesday 20-3-2012
Today I showed how to design a process algebra for routing protocols for wireless mesh networks (WMNs).
I explained some background on WMNs, showed their importance and set up
the requirements (e.g., the need to model a highly dynamic topology).
Essential features for the process algebra are
 guaranteed broadcast
 conditional unicast
 data structures and data handling
Today I set up the syntax for the process algebra and developed parts of the SOS inference rules.
In particular, we discussed sequential processes, which offer a syntax similar to an imperative program.
Major primitives are broadcast, groupcast, unicast, nondeterministic choice, sequential composition and assignment.
The focus was on how to model data structures in process algebra. On Thursday we will discuss the remaining rules.
The lecture followed the first part of Section 4 of a technical report.
Thursday 22-3-2012
In today's lecture I continued to present and derive a process algebra for wireless mesh networks.
In particular I discussed and derived structural operational semantics for parallel processes and for network expressions ip:P:R.
The content of the lecture can again be found in Section 4 of a technical report.
Unfortunately I could not finish the presentation of all inference rules. They will be presented in the next lecture.
Organisation
Please note that the next lecture is April 5th, 12:00pm.
Thursday 5-4-2012
In today's lecture I continued to present and derive a process algebra for wireless mesh networks.
In particular I discussed and derived structural operational semantics for network expressions.
The content of the lecture can again be found in Section 4 of a technical report.
Lessons to be learned about the process algebra for wireless mesh networks
 data can be included in process algebraic approaches
 process algebras like CSP and ACP can be extended to tackle "real-life problems"
 the SOS rules of the presented algebra are closely related to those of CSP (e.g. synchronisation)
 how to write specifications using our process algebra
Tuesday 10-4-2012
This week we discussed Büchi automata (a.k.a. ω-automata).
Formally, a Büchi automaton is a tuple A = (Q,Σ,Δ,q_{0},F) that consists of the following components:
 Q is a finite set. The elements of Q are called the states of A.
 Σ is a finite set called the alphabet of A.
 Δ: Q × Σ → 2^{Q} is a transition function.
 q_{0} is an element of Q, called the initial state.
 F ⊆ Q is the acceptance condition. A accepts exactly those runs in which at least one of the infinitely often occurring states is in F. A language accepted by some Büchi automaton is called ω-regular.
In a deterministic Büchi automaton, the transition function Δ is replaced by a function δ: Q × Σ → Q that returns a single state.
Note that deterministic and nondeterministic Büchi automata are not equally expressive.
We have discussed other acceptance criteria, such as those of Muller and Rabin. Moreover we have seen that Büchi automata are closed under union ∪ and complementation. The proof of the latter is very complex and was not shown.
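For deterministic Büchi automata, acceptance of an ultimately periodic word u·v^ω can be checked directly, since the run eventually cycles. A small sketch (my own helper, not from the lecture):

```python
# Sketch: acceptance of an ultimately periodic word u·v^omega by a
# *deterministic* Büchi automaton. The run is eventually cyclic, so we
# iterate v until the state repeats and check whether the cyclic part of
# the run passes through an accepting state.

def run_word(delta, state, word):
    for letter in word:
        state = delta[(state, letter)]
    return state

def accepts_lasso(delta, q0, F, u, v):
    state = run_word(delta, q0, u)      # consume the finite prefix u
    seen = []
    while state not in seen:            # iterate v until a state repeats
        seen.append(state)
        state = run_word(delta, state, v)
    cycle = seen[seen.index(state):]    # the cyclic part of the run
    for s in cycle:                     # does any pass over v touch F?
        for letter in v:
            if s in F:
                return True
            s = delta[(s, letter)]
    return False

# A DBA over {a, b} whose state records the last letter read; F = {"qa"}
# accepts exactly the words with infinitely many a's.
delta = {("qa", "a"): "qa", ("qa", "b"): "qb",
         ("qb", "a"): "qa", ("qb", "b"): "qb"}
```

For example, (ab)^ω is accepted (the cycle visits qa), while a·b^ω is not.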
Thursday 12-4-2012
Today we proved that Büchi automata are closed under intersection ∩.
(Lectures resumed by Rob van Glabbeek)
Tuesday 17-4-2012
An equivalence relation =_{X} is finer or
more discriminating than an equivalence relation =_{Y},
if the equivalence classes of =_{X} are included in those
of =_{Y}. Then =_{Y} is coarser or
less discriminating; in formulas:
p =_{X} q ⇒ p =_{Y} q.
So completed trace equivalence is finer than partial trace
equivalence, and bisimulation equivalence is even finer.
We drew a spectrum that orders equivalence relations on labelled
transition systems by how fine they are, drawing the finer ones at the
top. Bisimulation equivalence is at the top of that spectrum and
partial trace equivalence near the bottom. Completed trace equivalence
and simulation equivalence are somewhere in between.
Bisimulation equivalence is called a branching time equivalence,
because it takes into account at which point two different traces
branch off. Partial and completed trace equivalence on the other hand
are linear time equivalences, as they only focus on the linear
executions of a process, without taking branching into account.
The definition of bisimulation equivalence is in the
first handout. We also defined when one
process simulates another. This is like bisimulation, but in
the transfer property the initiative is always taken by the simulated
process, and the matching move should be taken by the simulating
process. Two processes are simulation equivalent if one
simulates the other and vice versa. This is not the same as
bisimulation equivalence, for there we insist on a single
relation that is at the same time a simulation in both directions.
The processes ab+a(b+c) and a(b+c) simulate each other,
but are not bisimulation equivalent.
I also presented a definition of bisimulation equivalence in terms of
consistent colourings.
A colouring of a labelled transition system is just a function that
associates a colour with every state. (Thus a colouring induces an
equivalence relation on states, and any equivalence relation induces a
colouring.) Now given a colouring, the potentials of a
state s of a certain colour, are all pairs (a,C)
with a an action and C a colour, such that s can
do an a-transition to a state with colour C.
The colouring is consistent if all states of the same colour
also have the same potentials. And two states are bisimulation
equivalent iff there exists a consistent colouring that gives them the
same colour.
Equivalently, we can define a colouring to be consistent by
saying that two states with the same colour should have the
same coloured traces. Here a coloured trace is a trace
interspersed with the colours of the states one travels through. It
turns out to be the case that in order to make sure that two states
have the same coloured traces, it suffices to check all traces with
only one action in them.
I also presented the Hennessy-Milner Logic as an instrument to
tell when two processes are not bisimilar. When two processes are
bisimulation equivalent this can be shown by displaying a bisimulation
between them (or a consistent colouring that colours their initial
states in the same way), but when they are not equivalent, showing that no
bisimulation could possibly exist is usually too much work. Instead it
suffices to present a formula in the Hennessy-Milner Logic that is
satisfied by one of the processes but not by the other. This is
covered in the first handout.
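As a sketch (my own encoding), an HML model checker is a few lines of recursion; the formula ⟨a⟩(⟨b⟩⊤ ∧ ⟨c⟩⊤) distinguishes ab+ac from a(b+c):

```python
# Sketch of a Hennessy-Milner Logic checker (my encoding).
# Formulas: ("true",), ("and", f, g), ("not", f), ("dia", a, f) for <a>f.

def sat(state, formula, transitions):
    kind = formula[0]
    if kind == "true":
        return True
    if kind == "and":
        return sat(state, formula[1], transitions) and sat(state, formula[2], transitions)
    if kind == "not":
        return not sat(state, formula[1], transitions)
    if kind == "dia":                    # <a>f: some a-successor satisfies f
        _, a, f = formula
        return any(sat(t, f, transitions)
                   for (s, b, t) in transitions if s == state and b == a)
    raise ValueError(kind)

# ab + ac (initial state p) versus a(b+c) (initial state q):
t1 = {("p", "a", "p1"), ("p1", "b", "p2"), ("p", "a", "p3"), ("p3", "c", "p4")}
t2 = {("q", "a", "q1"), ("q1", "b", "q2"), ("q1", "c", "q3")}
# <a>(<b>true and <c>true) holds of a(b+c) but not of ab+ac.
phi = ("dia", "a", ("and", ("dia", "b", ("true",)), ("dia", "c", ("true",))))
```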
Recall that on processes equipped with a termination predicate the
predicate holding in a state s can be regarded as a special
transition marked ✔ from s to a fresh, otherwise unconnected state.
In addition, two processes are called language equivalent if
they have the same traces leading to terminating states. In automata
theory such traces are called words accepted by s.
The set of all words accepted by s is the language
accepted by s. It turns out that language equivalence is coarser
than partial trace equivalence, whereas completed trace equivalence is
finer than partial trace equivalence.
My paper
The Linear Time – Branching Time Spectrum I is a small
encyclopedia of equivalence relations; language equivalence appears in
Section 19.
Thursday 19-4-2012
Today we explained the Hennessy-Milner Logic in depth.
Tuesday 24-4-2012
First I explained the partition refinement algorithm for
deciding whether or not two states in an LTS are strong bisimulation
equivalent. I handed out Section 3.1 of Chapter 6 of the Handbook of
Process Algebra; it describes this algorithm.
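The handout's algorithm is splitter-based; the following simpler signature-refinement sketch (my own variant, not the handout's) conveys the idea: repeatedly split blocks until all states in a block have the same (action, target-block) possibilities.

```python
# Sketch of partition refinement for strong bisimulation (a simple
# signature-based variant, not the optimised algorithm of the handout).

def bisim_blocks(states, transitions):
    """states: a list (fixed order); returns a dict state -> block id."""
    block = {s: 0 for s in states}       # start with one big block
    while True:
        # signature of s: the set of (action, block of target) pairs
        sig = {s: frozenset((a, block[t])
                            for (u, a, t) in transitions if u == s)
               for s in states}
        keys, new_block = {}, {}
        for s in states:                 # split blocks by signature
            new_block[s] = keys.setdefault((block[s], sig[s]), len(keys))
        if new_block == block:
            return block                 # stable: blocks = bisimulation classes
        block = new_block

# ab + ac (p) and a(b+c) (q) in one LTS: p and q end up in different blocks.
trans = {("p", "a", "p1"), ("p1", "b", "p2"), ("p", "a", "p3"), ("p3", "c", "p4"),
         ("q", "a", "q1"), ("q1", "b", "q2"), ("q1", "c", "q3")}
blocks = bisim_blocks(["p", "p1", "p2", "p3", "p4", "q", "q1", "q2", "q3"], trans)
```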
Then I applied CCS as well as ACP to the specification and
implementation of a relay-race example, and concluded we need "weak"
notions of equivalence that abstract from τ-actions.
There are two runners R_{1} and R_{2}
participating in the relay race, and while running they carry a stick.
Runner 1 starts running, and at the halfway point passes the stick to
runner 2, who is waiting there. Then runner 2 runs the rest of the
race and delivers the stick at the finish.
We can describe the behaviour of the two runners by the CCS expressions:
R_{1} = start.passon.0
R_{2} = ‾passon.finish.0
Here start is the action of runner 1 starting the race,
passon is the finish of runner 1 at the halfway point,
where R_{1} passes the stick on to R_{2},
‾passon is the complementary
action of runner 2 to accept the stick from runner 1 and start running,
and finish is the action of runner 2 of arriving at the finish.
A CCS expression describing the whole interaction is
(R_{1} | R_{2}) \ passon
We drew the corresponding process graph in class, and it turned out to
be bisimilar to
start.τ.finish.0
However, from a very high level of abstraction, where we abstract from
the relay in the relay race, and rather focus on the stick, which we
see departing by the action start, and subsequently arriving at
the finish by the action finish, we may want to specify the
overall behaviour of this race as
start.finish.0
To formally prove that the implementation matches this specification,
we need a semantic equivalence that abstracts from the internal action
τ, and makes start.τ.finish.0 equivalent with
start.finish.0. Such equivalences are called weak
or abstract. In contrast, an equivalence that treats τ
like any other action is strong or concrete.
The definitions of language equivalence, partial and completed trace
equivalence and bisimulation equivalence that we encountered last week
can all be regarded as definitions of strong partial trace
equivalence, strong bisimulation, etc., as long as we
treat τ like any other action. To arrive at a definition of weak
partial trace equivalence, we define a weak partial trace of a
process p as a strong partial trace of p from which all the
occurrences of τ have been omitted. Now two processes are
called weak partial trace equivalent if they have the same weak
partial traces.
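In code, erasing τ is a one-liner over the strong traces; a bounded-depth sketch (my own encoding) confirms that start.τ.finish.0 and start.finish.0 are weak partial trace equivalent:

```python
# Sketch: weak partial traces as strong partial traces with tau erased
# (my encoding; traces are explored up to a given length bound).

TAU = "tau"

def partial_traces(initial, transitions, limit):
    traces = {()}
    frontier = {(initial, ())}
    for _ in range(limit):
        frontier = {(t, tr + (a,))
                    for (s, tr) in frontier
                    for (u, a, t) in transitions if u == s}
        traces |= {tr for (_, tr) in frontier}
    return traces

def weak_partial_traces(initial, transitions, limit):
    return {tuple(a for a in tr if a != TAU)
            for tr in partial_traces(initial, transitions, limit)}

# start.tau.finish.0 versus start.finish.0:
t1 = {("r0", "start", "r1"), ("r1", TAU, "r2"), ("r2", "finish", "r3")}
t2 = {("s0", "start", "s1"), ("s1", "finish", "s2")}
# Both have weak partial traces {(), (start,), (start, finish)}.
```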
Weak language equivalence and weak completed trace equivalence can be
defined in the same way. However, a definition of weak bisimulation is
relatively tricky. As we saw by example, just contracting all
τ-moves in a labelled transition system doesn't yield what we want
to get. There are several, subtly different, approaches in the
literature.
On Thursday we will define one of these approaches,
called branching bisimulation equivalence in terms of
consistent colourings. A research paper in which this notion is
defined can be found here. All we will
cover is the introduction to Section 3 (on page 7), half of
Definition 4, and the statement of Lemma 3 (half a page in total).
The intuition is that for two states to be equivalent, it should be
possible to provide a colouring of all states such that the two states
in question get the same colour. This colouring should
be consistent, which is a requirement that guarantees that all
states with the same colour have the same possible abstract
behaviours, and hence can be considered equivalent. This requirement is
formalised by saying that two states with the same colour should have
the same abstract coloured traces. Here a concrete coloured
trace is a trace interspersed with the colours of the states one
travels through, and an abstract coloured trace is made from it by
removing all τ-steps between states of the same colour.
It turns out to be the case that in order to make sure that two states
have the same abstract coloured traces, it suffices to check all
traces with at most one non-τ-action in them, or at most one change
of colour.
As a consequence, τ-transitions between states with the same
colour can be contracted without changing the behaviour of the system
in any essential way. Such transitions are called inert.
However, τ-transitions between states that necessarily have
different colours may not be contracted, because those states have
different potentials for future behaviours.
Thursday 26-4-2012
As stated above, today we defined branching bisimulation equivalence
in terms of consistent colourings. You should also understand the
definition in terms of bisimulation relations between states; this is
explained in the first handout.
A colouring of a labelled transition system is just a function that
associates a colour with every state. (Thus a colouring induces an
equivalence relation on states, and any equivalence relation induces a
colouring.) Now given a colouring, the potentials of a
state s of a certain colour, are all pairs (a,C)
with a an action and C a colour, such that s can
do some inert transitions followed by a non-inert a-transition
to a state with colour C.
Here a transition is inert if it is labelled τ and moreover goes
between two states of the same colour. Thus if (a,C) is
a potential of a state s, then either a≠τ or C is
not the colour of s. The colouring is consistent if all
states of the same colour also have the same potentials. And two
states are branching bisimulation equivalent iff there exists a
consistent colouring that gives them the same colour.
In fact, the definition of a consistent colouring in terms of
"potentials" is equivalent to the one in terms of "abstract coloured traces".
Tuesday 1-5-2012
First I extended the partition refinement algorithm to deal with
branching bisimulation. The key idea is that τ-moves between
states within the same block are skipped over, whereas τ-moves
between different blocks are treated as if τ were any normal action.
I also introduced weak bisimulation equivalence and discussed
the difference with branching bisimulation equivalence.
See the first handout.
Next I explained what it means for an equivalence relation to be a
congruence for an operator, or, in other words, for the operator
to be compositional for the equivalence.
This is what makes formal verification of big systems given as parallel compositions feasible.
It is not hard to show that (strong) partial trace equivalence is a
congruence for encapsulation. This follows because the partial traces of
∂_{H}(P) are completely determined by the partial
traces of the process P itself (namely by discarding all partial
traces containing an action from H).
The same argument can be used to see that (strong or weak) partial
trace equivalence is a congruence for all operators of CCS and ACP.
Partial trace equivalence is inadequate to deal with deadlock.
Adding deadlock information to partial trace semantics
yields completed trace semantics. However, completed trace equivalence
is not a congruence for all operators. It fails for the restriction
operator of CCS, which is the same as the encapsulation operator of ACP.
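This failure can be seen concretely: a.b + a.c and a.(b+c) have the same completed traces, but restricting b separates them. A sketch of my own (where restriction simply removes all b-labelled transitions):

```python
# Sketch: completed trace equivalence is not preserved by restriction.
# Graphs are sets of (source, action, target); restriction simply removes
# all transitions with the given label (a simplification of CCS \a).

def completed_traces(initial, transitions):
    result = set()
    def explore(state, trace):
        moves = [(a, t) for (s, a, t) in transitions if s == state]
        if not moves:
            result.add(trace)          # completed: no further moves possible
        for (a, t) in moves:
            explore(t, trace + (a,))
    explore(initial, ())
    return result

def restrict(transitions, action):
    return {(s, a, t) for (s, a, t) in transitions if a != action}

# ab + ac (initial state p) and a(b+c) (initial state q):
t1 = {("p", "a", "p1"), ("p1", "b", "p2"), ("p", "a", "p3"), ("p3", "c", "p4")}
t2 = {("q", "a", "q1"), ("q1", "b", "q2"), ("q1", "c", "q3")}
# Same completed traces before restriction; after \b, ab+ac can deadlock
# after a, while a(b+c) cannot.
```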
Strong bisimulation equivalence on the other hand is a congruence for all
operators of ACP, CCS and CSP. This is a reason to use bisimulation rather
than partial or completed trace equivalence.
Yet, to be truly effective in the verification, we need a weak
equivalence, one that abstracts from internal transitions.
It turns out that neither weak nor branching bisimulation are congruences
for the +-operator of CCS and ACP. This could be a reason for the language
CSP not to use the +-operator, but instead an internal and an external
choice, to be explained next time.
All operators of CSP turn out to be compositional for weak and branching
bisimulation equivalence. The people behind CCS and ACP chose a different
solution, namely to keep the +, but to change the bisimulations a bit to make
them congruences for the +, and all the other operators.
Therefore ACP uses rooted branching bisimulation equivalence,
also called branching bisimulation congruence; notation P =_{RBB} Q.
Two processes P and Q are RBB equivalent if every transition
P a→ P' can be matched by a transition Q a→ Q' with P' and Q'
branching bisimulation equivalent, and vice versa.
So a rooted branching bisimulation acts like a strong
bisimulation in the first step.
RBB equivalence is finer, i.e. more discriminating, than BB equivalence,
and it is a congruence for all operators of ACP, CCS and CSP.
Thursday 3-5-2012
Today I explained the infinitary version of the Hennessy-Milner Logic,
HML^{∞}. It uses abbreviations like
⟨a⟩^{n}φ
for ⟨a⟩⟨a⟩ ...
⟨a⟩φ, and
∧_{n=1}^{∞} for an infinite conjunction.
Next, I presented the system description language CSP.
The syntax of CSP, the theory of Communicating Sequential Processes,
features the operations
 0 (inaction, originally written STOP)
 a.P (action prefix, originally written a→P) for each visible action a.
 P☐Q (external choice)
 P⊓Q (internal choice)
 P‖_{S}Q (parallel composition, enforcing
synchronisation over the set S of visible actions)
 P/a (concealment, originally written P\a) for each action a.
The atomic actions of CSP consist of
all a, b, c, etc. from a given set A of
visible actions. The syntax of CSP does not provide for invisible
actions, although τ does occur in the process graph semantics of
CSP; it is introduced by internal choice and by concealment.
Here are the definitions, in terms of process graphs without
predicates and without a notion of final states, of the operators
mentioned above:

0 is the graph with one state and no transitions.
This one state is the initial state.
 a.G is obtained from G by adding a new state, which will be
the initial one, and a new a-labelled transition from this new
state to the initial state of G (which is no longer initial in a.G).
 G☐H is an operator that acts like a parallel composition
without synchronisation until a visible action from one of the two components
occurs, and from that moment onwards (internal or external) actions from
that component only are possible.
 G⊓H is defined to be τ.G + τ.H.
 G‖_{S}H is obtained by taking the Cartesian product of the states of
G and H; that is, the states of G‖_{S}H are pairs (s,t) with
s a state from G and t a state from H. The initial state
of G‖_{S}H is the pair of initial states of G and H. The transitions are
 (s,t) a→ (s',t) whenever a is not in S
and s a→ s' is a transition in G,
 (s,t) a→ (s,t') whenever a is not in S
and t a→ t' is a transition in H,
 and (s,t) a→ (s',t') whenever a is in S
and s a→ s' is a transition in G
and t a→ t' is a transition in H.
Unlike the parallel compositions of CCS and ACP, the parallel
composition of CSP not only allows, but also enforces synchronisation.
For this reason CSP does not need a separate restriction or
encapsulation operator.
 G/a is just G, but with all actions a renamed into τ.
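The ‖_{S} construction, in the same style of sketch as before (my own encoding): actions in the synchronisation set S happen only jointly, all others interleave.

```python
# Sketch of the CSP parallel composition G ||_S H on process graphs (my
# encoding): actions in S must synchronise, all other actions interleave.

def states_of(initial, transitions):
    return {initial} | {x for (s, _, t) in transitions for x in (s, t)}

def csp_par(g, h, S):
    (ig, tg), (ih, th) = g, h
    sg, sh = states_of(ig, tg), states_of(ih, th)
    trans = set()
    for (s, a, s2) in tg:                 # G alone, for actions outside S
        if a not in S:
            for t in sh:
                trans.add(((s, t), a, (s2, t)))
    for (t, a, t2) in th:                 # H alone, for actions outside S
        if a not in S:
            for s in sg:
                trans.add(((s, t), a, (s, t2)))
    for (s, a, s2) in tg:                 # joint steps, for actions in S
        if a in S:
            for (t, b, t2) in th:
                if b == a:
                    trans.add(((s, t), a, (s2, t2)))
    return ((ig, ih), trans)

# a.c.0 ||_{c} c.0 : a interleaves, c is forced to synchronise.
g = ("p0", {("p0", "a", "p1"), ("p1", "c", "p2")})
h = ("q0", {("q0", "c", "q1")})
init, trans = csp_par(g, h, {"c"})
```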
I didn't cover the parallel composition above yet (this will happen on
Tuesday), but I also presented a structural operational semantics of
the internal and external choice operators. By means of the example
that shows that the CCS +-operator is not compositional for branching
bisimulation equivalence I showed how the CSP external choice behaves
better in that regard.
Tuesday 8-5-2012
The example of the relay race, presented earlier, is useful to illustrate
a key difference between CCS, CSP and ACP. To express the interaction
between the two runners, we need to do four things:
 1. Parallel composition (modelled by a Cartesian product construction)
 2. Allowing synchronisation (by adding diagonals to the product)
 3. Enforcing synchronisation (by removing remnants of unsuccessful communication)
 4. Abstraction (hiding of internal actions, often the result of synchronisation, by renaming them into τ)
In CSP the parallel composition ‖_{S} takes care of 1, 2 and 3,
and there is a separate concealment operator to handle 4.
In CCS the parallel composition | takes care of 1, 2 and 4,
and there is a separate restriction operator to handle 3.
In ACP the parallel composition ‖ takes care of 1 and 2,
and there are separate encapsulation and abstraction operators to handle 3 and 4, respectively.
Besides finishing the presentation of CSP, today I treated the
infinitary part of the infinitary Hennessy-Milner Logic again.
I also explained a theorem in structural operational semantics, saying
that strong bisimulation is a congruence as soon as all operators
concerned have operational rules in the GSOS format.
See the handout on structural operational semantics.
Thursday 10-5-2012
Today we discussed deadlock, livelock and (possibly
fair) divergence, versus normal progress under a progress
assumption. In weak partial trace semantics, deadlock, livelock,
divergence and normal progress are all identified. In weak completed
trace semantics as treated in this course (there also are different
variants), deadlock, livelock and divergence are identified, and
distinguished from normal progress.
In CSP, as well as in David Walker's version of
bisimulation semantics, livelock and divergence are identified, and
distinguished from deadlock and normal progress, thereby implicitly assuming
choices to be unfair (a safe worst case scenario). In weak and
branching bisimulation semantics as used in CCS and ACP, choices are assumed
to be fair, and fair divergence is identified with normal progress,
and distinguished from livelock. Moreover, there is no distinction
between deadlock and livelock. This helps in the verification of
communication protocols, where you try to send a message as long as
needed until it succeeds and assume it will eventually arrive correctly.
I promised to present a variant of branching bisimulation next week,
the branching bisimulation with explicit divergence, which is
the finest useful weak equivalence found in the literature. It
distinguishes all four modes of progress.
Tuesday 15-5-2012
First I introduced a variant of branching bisimulation semantics that
distinguishes all four phenomena mentioned above: deadlock, livelock,
possibly fair divergence and normal progress.
This branching bisimulation equivalence with explicit divergence
can best be defined in terms of consistent colourings.
Recall from Thursday 26-4-2012 the definitions of colourings, of the
potentials of a state (the pairs (a,C) reachable by inert transitions
followed by a non-inert a-transition to a state of colour C), of inert
τ-transitions, of consistent colourings, and of branching bisimulation
equivalence as the existence of a consistent colouring giving two
states the same colour.
State s is divergent w.r.t. a colouring if there is an
infinite path of τ-transitions starting from s, only
passing through states with the same colour as s.
The colouring preserves divergence if for any colour either all
states of that colour are divergent w.r.t. the colouring, or none are.
Now two states are branching bisimulation equivalent with explicit
divergence if they receive the same colour by some consistent
colouring that preserves divergence.
Then I introduced Petri nets, and the way they model concurrent systems.
I indicated how to define the main operators from CCS/CSP and ACP on
Petri nets.
Process algebras such as ACP come with complete axiomatisations to
reason about processes without studying their semantics, i.e. the
associated process graphs. The axioms are equations with variables
that are true modulo bisimulation (or some other equivalence relation)
for any processes filled in for the variables. A set of axioms
is complete if every statement about two processes being
equivalent can be derived from the equations. It is impossible to give
a complete axiomatisation with only finitely many equations for
languages like ACP, unless you add some auxiliary operators.
Such operators are the leftmerge and the communication merge.
I showed how a parallel composition between two recursive processes
could be eliminated in favour of choice and prefixing.
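As a worked instance of this elimination (standard ACP textbook material; ⌊⌊ denotes the leftmerge and | here denotes the communication merge), for single actions a and b:

```latex
a \parallel b
  \;=\; (a \mathbin{\lfloor\!\lfloor} b) \;+\; (b \mathbin{\lfloor\!\lfloor} a) \;+\; (a \mid b)
  \;=\; a{\cdot}b \;+\; b{\cdot}a \;+\; \gamma(a,b)
```

Here a⌊⌊b must start with a step of the left argument, and a|b is the synchronisation step γ(a,b); when γ(a,b) is undefined that summand equals δ and can be dropped, so for non-communicating a and b the parallel composition reduces to a·b + b·a.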
More information on Wikipedia.
Tuesday 22-5-2012
Seminar presentations by Cheng, on Probabilistic ACP, and
by Sandeep, on LTL and Büchi automata.
Thursday 24-5-2012
Seminar presentation on modelling concurrency in LOTOS, by Marco.
The final exam is on Friday June 15, 9:00–12:00, in ASB 107 (Building E12).
You are allowed to take 4 pages (or 2 sheets double-sided) of notes
(either handwritten or typed) with you.