Course notes for COMP6752: Comparative Concurrency Semantics
This file will keep track of the material covered so far, and list the
handouts distributed to date.
Monday 26.2.2018
Today I will introduce the topics to be treated in this course:
formalising specifications as well as implementations of concurrent
systems, studying the criteria for deciding whether an implementation
meets a specification, and techniques for proving this.
Both specifications and implementations can be represented by means
of models of concurrency such as
labelled transition systems (LTSs) or process graphs.
I present the definition below.
Another way to represent concurrent systems is by means of process
algebraic expressions.
These concepts will be illustrated by means of an example
(see below). It shows
how an implementation could be given as parallel composition of
components, enforcing synchronisation on certain actions.
When both specifications and implementations are given as LTSs, we
need a criterion to determine if they match. I will discuss a
criterion based on bisimulation, which matches the states of the
related LTSs. I show why it is useful here to abstract from
internal actions of the implementation.
When the specification and implementation are given as process
expressions, to decide whether they match, one considers the meaning,
or semantics of these expressions. Often, this meaning is
given by mapping them to LTSs.
A process graph is a triple (S,I,→), with S a set
(of states), I∈S an initial state,
and → a set of triples (s,a,t) with s,t
∈S and a an action, drawn from a set Act.
An LTS is just a process graph without initial state.
(But sometimes LTS is used as synonym for process graph, i.e. with
initial state.)
We'll also use process graphs with a fourth component ✔⊆S
indicating the final states of the process: those in which the
system can terminate successfully.
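Concretely, such a four-component process graph can be sketched as a small Python record; the field names below (states, init, trans, finals) are my own choice for illustration, not standard notation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessGraph:
    states: frozenset   # S: the set of states
    init: object        # I ∈ S: the initial state
    trans: frozenset    # →: triples (s, a, t) with s, t ∈ S and a ∈ Act
    finals: frozenset   # ✔ ⊆ S: states where the process may terminate

# A tiny example: a single 'start' step into a final state.
g = ProcessGraph(states=frozenset({0, 1}),
                 init=0,
                 trans=frozenset({(0, "start", 1)}),
                 finals=frozenset({1}))
```

The frozen dataclass merely packages the triple (plus ✔) so the operator definitions below can be tried out on concrete graphs.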
Specifications and implementations can not only be represented by LTSs
or other models of concurrency, but also by process algebraic
expressions, where complex processes are built up from constants for
atomic actions using operators for sequential, alternative and
parallel composition. The most popular process algebraic languages
from the literature are CCS (the Calculus of Communicating Systems),
CSP (Communicating Sequential Processes) and ACP (the Algebra of
Communicating Processes). I present the syntax of
ACP, focusing on the partially synchronous parallel
composition operator.
The corresponding textbook is
J.C.M. Baeten & W.P. Weijland (1990): Process Algebra, Cambridge
University Press
or
W.J. Fokkink (2000):
Introduction to Process Algebra, Texts in
Theoretical Computer Science, An EATCS Series, Springer.
The syntax of ACP, the Algebra of Communicating Processes,
features the operations
 ε (successful termination)
(only present in the optional
extension ACP_{ε})
 δ (deadlock)
 a (action constant) for each action a.
 P.Q (sequential composition)
 P+Q (summation, choice
or alternative composition)
 P‖Q (parallel composition)
 ∂_{H}(P) (restriction, or
encapsulation) for each set of visible actions H.
 τ_{I}(P) (abstraction) for each set of
visible actions I
(only present in the optional
extension ACP_{τ}).
The atomic actions of ACP consist of
all a, b, c, etc. from a given set A
of visible actions, and one special action τ, which is meant to be
internal and invisible to the outside world. For each application, a
partial communication function γ: A×A → A is
chosen that tells for each two visible actions a and b whether
they synchronise (namely iff γ(a,b) is defined), and if so, what
the result of their synchronisation is: γ(a,b).
The communication function is required to be commutative and associative.
The invisible action cannot take part in synchronisations.
Here are the definitions of these operators, in terms of process
graphs extended with a predicate ✔ that signals successful termination.

ε is the graph with one state and no transitions.
This one state is the initial state. It is marked with ✔.

δ is the graph with one state and no transitions.
This one state is the initial state. It is not marked as terminating.

a is a graph with two states (an initial and a final one) and
one transition between them, labelled a. The final state is
marked with ✔.
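These three atomic graphs can be written down directly; the dictionary representation below is my own shorthand for a process graph with a ✔-component, used only for illustration.

```python
# Each graph: states, init (initial state), trans (set of (s, a, t)
# triples) and finals (the ✔-marked states).

def eps():
    # ε: one state, no transitions; the single state is marked ✔.
    return {"states": {0}, "init": 0, "trans": set(), "finals": {0}}

def delta():
    # δ: one state, no transitions, and no ✔ mark (deadlock).
    return {"states": {0}, "init": 0, "trans": set(), "finals": set()}

def atom(a):
    # a: an initial state, a final state marked ✔, and one
    # a-labelled transition between them.
    return {"states": {0, 1}, "init": 0, "trans": {(0, a, 1)},
            "finals": {1}}
```

Note that ε and δ differ only in the ✔ mark: both can do nothing, but only ε does so successfully.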
 G.H is the process that first performs G, and upon
successful termination of G proceeds with H.
It is homework to write down a formal definition.
 G+H is obtained by taking the union of copies of G
and H with disjoint sets of states, and adding a fresh
state root, which will be the initial state of G+H.
For each transition I_{G} -a→ s in G, where
I_{G} denotes the initial state of G,
there will be an extra transition root -a→ s, and likewise,
for each transition I_{H} -a→ s in H, where
I_{H} denotes the initial state of H,
there will be an extra transition root -a→ s.
root is labelled with ✔ if
either I_{G} or I_{H} is.
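The G+H construction above can be sketched in Python over the same dictionary representation of process graphs (my own shorthand); states of the copies are tagged "G" and "H" to keep them disjoint.

```python
def choice(g, h):
    """G+H: disjoint union of G and H, plus a fresh root that copies
    the outgoing transitions of both initial states."""
    root = "root"
    states = ({("G", s) for s in g["states"]} |
              {("H", s) for s in h["states"]} | {root})
    trans = ({(("G", s), a, ("G", t)) for (s, a, t) in g["trans"]} |
             {(("H", s), a, ("H", t)) for (s, a, t) in h["trans"]})
    # Extra transitions root -a→ s for each initial transition.
    trans |= {(root, a, ("G", t))
              for (s, a, t) in g["trans"] if s == g["init"]}
    trans |= {(root, a, ("H", t))
              for (s, a, t) in h["trans"] if s == h["init"]}
    finals = ({("G", s) for s in g["finals"]} |
              {("H", s) for s in h["finals"]})
    # root is marked ✔ iff either initial state is.
    if g["init"] in g["finals"] or h["init"] in h["finals"]:
        finals |= {root}
    return {"states": states, "init": root,
            "trans": trans, "finals": finals}

# Example: a + b offers both an a-step and a b-step from the root.
a = {"states": {0, 1}, "init": 0, "trans": {(0, "a", 1)}, "finals": {1}}
b = {"states": {0, 1}, "init": 0, "trans": {(0, "b", 1)}, "finals": {1}}
ab = choice(a, b)
```

Taking a fresh root (rather than gluing the two initial states together) avoids accidentally merging behaviour that G and H only exhibit after their first step.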
 G‖H is obtained by taking the Cartesian product of the states of
G and H; that is, the states of G‖H are pairs (s,t) with
s a state from G and t a state from H. The initial state
of G‖H is the pair of initial states of G and H.
A state (s,t) is labelled ✔ if and only if both s
and t are labelled ✔. The transitions are
 (s,t) -a→ (s',t) whenever s -a→ s' is a transition in G,
 (s,t) -a→ (s,t') whenever t -a→ t' is a transition in H, and
 (s,t) -c→ (s',t') whenever s -a→ s' is a transition in G
and t -b→ t' is a transition in H, and γ(a,b)=c.
Intuitively, G‖H allows all possible interleavings of actions
from G and actions from H. In addition, it enables
actions to synchronise with their communication partners.
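The three transition rules for G‖H translate almost literally into code; below is a sketch over the same dictionary representation, where the partial function γ is modelled as a Python dict from action pairs to result actions (my own encoding).

```python
def merge(g, h, gamma):
    """G‖H: Cartesian product of states, interleaved transitions,
    plus synchronisations prescribed by the partial function gamma."""
    states = {(s, t) for s in g["states"] for t in h["states"]}
    trans = set()
    for (s, a, s2) in g["trans"]:          # G moves alone
        for t in h["states"]:
            trans.add(((s, t), a, (s2, t)))
    for (t, b, t2) in h["trans"]:          # H moves alone
        for s in g["states"]:
            trans.add(((s, t), b, (s, t2)))
    for (s, a, s2) in g["trans"]:          # synchronisation via gamma
        for (t, b, t2) in h["trans"]:
            if (a, b) in gamma:
                trans.add(((s, t), gamma[(a, b)], (s2, t2)))
    finals = {(s, t) for s in g["finals"] for t in h["finals"]}
    return {"states": states, "init": (g["init"], h["init"]),
            "trans": trans, "finals": finals}

# Example: give ‖ receive, with γ(give,receive) = handover.
give = {"states": {0, 1}, "init": 0,
        "trans": {(0, "give", 1)}, "finals": {1}}
recv = {"states": {0, 1}, "init": 0,
        "trans": {(0, "receive", 1)}, "finals": {1}}
gamma = {("give", "receive"): "handover",
         ("receive", "give"): "handover"}
m = merge(give, recv, gamma)
```

Note that merge by itself still contains the unsynchronised give and receive steps; they are only removed by the encapsulation operator ∂_{H}.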
 ∂_{H}(G) is just G, but with all
transitions labelled by actions in H removed.
It is used to remove the remnants of unsuccessful communication, so
that the synchronisation that is enabled by parallel composition, is enforced.
 τ_{I}(G) is just G, but with all
actions in I renamed into τ.
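Both operators are simple transformations of the transition set; here is a sketch in the same dictionary representation (my own encoding, with the string "tau" standing in for τ).

```python
def encaps(g, H):
    """∂_H(G): drop every transition whose label lies in H."""
    return {**g, "trans": {(s, a, t) for (s, a, t) in g["trans"]
                           if a not in H}}

def abstract(g, I):
    """τ_I(G): rename every label in I into the internal action τ."""
    return {**g, "trans": {(s, "tau" if a in I else a, t)
                           for (s, a, t) in g["trans"]}}

# Example: a graph doing a then b.
g = {"states": {0, 1, 2}, "init": 0,
     "trans": {(0, "a", 1), (1, "b", 2)}, "finals": {2}}
```

Encapsulation may cut a graph into reachable and unreachable parts, whereas abstraction only relabels: it never changes the shape of the graph.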
Here is the example of the relay race presented in class.
There are two runners R_{1} and R_{2}
participating in the relay race, and while running they carry a baton.
Runner 1 starts running, and at the halfway point passes the baton to
runner 2, who is waiting there. Then runner 2 runs the rest of the
race and delivers the baton at the finish.
We can describe the behaviour of the two runners by the ACP expressions:
R_{1} = start.give
R_{2} = receive.finish
Here start is the action of runner 1 starting the race,
give is the finish of runner 1 at the halfway point,
where R_{1} gives the baton to R_{2},
receive is the action of runner 2 to accept the baton from runner 1 and start running,
and finish is the action of runner 2 of arriving at the finish.
An ACP expression describing the whole interaction is
τ_{I}(∂_{H}(R_{1} ‖ R_{2}))
with H={give,receive} and I={handover}.
Here the communication function γ is given
by γ(give,receive) = γ(receive,give) = handover.
We drew the corresponding process graph in class, and after deleting
its unreachable parts it looked just like the process graph of the expression
start.τ.finish
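As a cross-check, the whole relay-race construction can be run end to end; the helpers below re-implement the operators defined above over my dictionary encoding of process graphs (a sketch, with "tau" standing for τ).

```python
R1 = {"states": {0, 1, 2}, "init": 0,
      "trans": {(0, "start", 1), (1, "give", 2)}, "finals": {2}}
R2 = {"states": {0, 1, 2}, "init": 0,
      "trans": {(0, "receive", 1), (1, "finish", 2)}, "finals": {2}}
gamma = {("give", "receive"): "handover",
         ("receive", "give"): "handover"}

def merge(g, h, gamma):
    # G‖H: interleaving plus gamma-synchronisations.
    trans = set()
    for (s, a, s2) in g["trans"]:
        for t in h["states"]:
            trans.add(((s, t), a, (s2, t)))
    for (t, b, t2) in h["trans"]:
        for s in g["states"]:
            trans.add(((s, t), b, (s, t2)))
    for (s, a, s2) in g["trans"]:
        for (t, b, t2) in h["trans"]:
            if (a, b) in gamma:
                trans.add(((s, t), gamma[(a, b)], (s2, t2)))
    return {"init": (g["init"], h["init"]), "trans": trans,
            "finals": {(s, t) for s in g["finals"] for t in h["finals"]}}

def encaps(g, H):      # ∂_H: drop transitions labelled in H
    return {**g, "trans": {(s, a, t) for (s, a, t) in g["trans"]
                           if a not in H}}

def abstract(g, I):    # τ_I: rename labels in I into tau
    return {**g, "trans": {(s, "tau" if a in I else a, t)
                           for (s, a, t) in g["trans"]}}

def reachable(g):      # delete the parts unreachable from the initial state
    seen, todo = {g["init"]}, [g["init"]]
    while todo:
        s = todo.pop()
        for (p, a, q) in g["trans"]:
            if p == s and q not in seen:
                seen.add(q)
                todo.append(q)
    return {**g,
            "trans": {(s, a, t) for (s, a, t) in g["trans"] if s in seen},
            "finals": {s for s in g["finals"] if s in seen}}

race = reachable(abstract(encaps(merge(R1, R2, gamma),
                                 {"give", "receive"}),
                          {"handover"}))
```

After removing the unreachable parts, exactly three transitions remain, forming the single maximal trace start.τ.finish.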
However, from a very high level of abstraction, where we abstract from
the relay in the relay race, and rather focus on the baton, which we
see departing by the action start, and subsequently arriving at
the finish by the action finish, we may want to specify the
overall behaviour of this race as
start.finish
To formally prove that the implementation matches this specification,
we need a relation between these processes that abstracts from the
internal action τ.