Both specifications and implementations can be represented by means of models of concurrency such as labelled transition systems (LTSs). I presented the definition of LTSs; it also appears in the first section of this handout.
Another way to represent concurrent systems is by process algebraic recursive equations. Specifications can also be given by means of temporal logic, a formalisation of English-language specifications stating what should happen when.
I illustrated how an implementation could be given as parallel composition of components, enforcing synchronisation on certain actions. In general, parallel composition of many components gives rise to the state explosion problem. One way to combat it is by means of compositionality: proving things for each component separately and lifting the result to the entire parallel composition by application of some general theorem.
To determine whether an implementation meets a specification, we use equivalence relations and preorders.
Specifications and implementations can not only be represented by LTSs or other models of concurrency, but also by process algebraic expressions, where complex processes are built up from constants for atomic actions using operators for sequential, alternative and parallel composition. The most popular process algebraic languages from the literature are CCS (the Calculus of Communicating Systems), CSP (Communicating Sequential Processes) and ACP (the Algebra of Communicating Processes). Today I started explaining the syntax of ACP, focusing on the partially synchronous parallel composition operator. I will continue with that on Thursday. The corresponding textbook is
J.C.M. Baeten & W.P. Weijland (1990): Process Algebra, Cambridge University Press
or
W.J. Fokkink (2000): Introduction to Process Algebra, Texts in Theoretical Computer Science, An EATCS Series, Springer.
The syntax of ACP, the Algebra of Communicating Processes,
features the operations
Here are the definitions of these operators, in terms of process graphs with a single
predicate ✔ that signals successful termination.
Then I defined completed trace equivalence, and
mentioned bisimulation equivalence.
I illustrated their differences with the example a(b+c)
versus ab+ac. These two processes are completed trace equivalent,
but they will turn out not to be bisimulation equivalent.
Partial traces are defined likewise, but can end anywhere.
We write PTfin(p) for the set of
finite partial traces of process p,
and PT∞(p)
for the set of all finite partial traces of p together with
its infinite traces.
On processes that are equipped with a termination predicate ✔ on states
(like the accepting states from automata theory), partial traces and
completed traces are defined in such a way that we could just as well
have thought about the termination predicate that holds in a
state s as a special transition marked ✔ that goes
from s to a fresh state that is not connected in any other way.
Thus, for example, CT∞(a.(ε+b.δ))
= {a, ab}.
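The example above can be checked with a small sketch. The dict-based encoding of process graphs below is my own, not from the course: states map to lists of (action, successor) pairs, and the predicate ✔ is given as a set of states, treated exactly like the extra ✔-transition described above.

```python
def completed_traces(trans, term, state, trace=()):
    """Finite completed traces of `state`.
    trans: dict mapping a state to a list of (action, successor) pairs.
    term:  set of states where the predicate ✔ holds."""
    traces = set()
    moves = trans.get(state, [])
    if state in term:          # ✔ acts like an extra outgoing transition,
        traces.add(trace)      # so a completed trace may end here
    if not moves and state not in term:
        traces.add(trace)      # deadlock: no moves and no ✔
    for a, s in moves:
        traces |= completed_traces(trans, term, s, trace + (a,))
    return traces

# The process a.(ε+b.δ): state 0 --a--> 1; 1 --b--> 2 (deadlock); ✔ holds in 1.
trans = {0: [('a', 1)], 1: [('b', 2)], 2: []}
print(sorted(completed_traces(trans, {1}, 0)))   # [('a',), ('a', 'b')]
```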
Besides the denotational semantics of ACP presented above, I also gave
an operational semantics of that language. The foundations of
structural operational semantics are presented in
this handout. I also introduced the language CCS
and its operational semantics.
The syntax of CCS, the Calculus of Communicating Systems,
features the operations
Here are the definitions, in terms of process graphs without
predicates and without a notion of final states, of the operators
mentioned above:
The atomic actions of ACP consist of
all a, b, c, etc. from a given set A
of visible actions, and one special action τ, which is meant to be
internal and invisible to the outside world. For each application, a
partial communication function γ: A×A → A is
chosen that tells for each two actions a and b whether
they synchronise (namely, if γ(a,b) is defined), and if so, what the
result of their synchronisation is: γ(a,b).
The communication function is required to be commutative and associative.
The invisible action cannot take part in synchronisations.
It is homework to write down a formal definition.
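The rule of thumb can be made concrete with a sketch. This is not the official ACP semantics, and the dict encoding of transitions is my assumption: the one-step transitions of G||H arise from interleaving plus the synchronisations determined by γ, where τ never synchronises.

```python
def par_steps(g, h, trans, gamma):
    """One-step transitions of the parallel composition of states g and h.
    trans: dict state -> list of (action, successor).
    gamma: function taking two visible actions, returning their
           communication result, or None if they do not synchronise."""
    steps = []
    g_moves = trans.get(g, [])
    h_moves = trans.get(h, [])
    for a, g2 in g_moves:
        steps.append((a, (g2, h)))        # interleaving: G moves alone
    for b, h2 in h_moves:
        steps.append((b, (g, h2)))        # interleaving: H moves alone
    for a, g2 in g_moves:                 # synchronisation via gamma
        for b, h2 in h_moves:
            if 'tau' in (a, b):
                continue                  # the internal action never synchronises
            c = gamma(a, b)
            if c is not None:
                steps.append((c, (g2, h2)))
    return steps

# Hypothetical example: send and read synchronise into comm (handshaking).
gamma = lambda a, b: 'comm' if {a, b} == {'send', 'read'} else None
trans = {'P': [('send', 'P1')], 'Q': [('read', 'Q1')]}
print(par_steps('P', 'Q', trans, gamma))
# [('send', ('P1', 'Q')), ('read', ('P', 'Q1')), ('comm', ('P1', 'Q1'))]
```

Applying the encapsulation operator ∂H with H = {send, read} would then remove the first two transitions, so that only the synchronisation comm remains.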
The root is labelled with ✔ if
either the root of G or that of H is.
Intuitively, G||H allows all possible interleavings of actions
from G and actions from H. In addition, it enables
actions to synchronise with their communication partners.
It is used to remove the remnants of unsuccessful communication, so
that the synchronisation that is enabled by parallel composition is enforced.
Thursday 8-3-2012
Today I explained the notion of a denotational semantics of a system
description language like ACP, and treated the denotational semantics
of ACP given above. In the tutorial we translated back and forth
between process graphs and recursive specifications in ACP.
Tuesday 13-3-2012
I made the definition of completed traces more precise.
For process graphs without a termination predicate, a finite completed trace
goes from the initial state to any state from which no further moves
are possible. We write CTfin(p) for the set of
finite completed traces of process p, and CT∞(p)
for the set of all finite completed traces of p together with
its infinite traces. When writing just CT(p) it
depends on context which of the two is meant (i.e. this differs from
one paper to the next).
The atomic actions of CCS consist of
all a, b, c, etc. from a given set A,
together with the complementary actions
ā for each a
in A, and one special action τ, which is meant to be
internal and invisible to the outside world. By convention, putting
an overbar over a complementary action
ā yields a again.
We can think of this operator as unwinding the initial states
(roots) of G and H, and then gluing these roots together
to form the new initial state of G+H.
Intuitively, G|H allows all possible interleavings of actions
from G and actions from H. In addition, it enables
actions a to synchronise with their complementary actions
ā; the result of such a
synchronisation is the internal action τ.
It is used to remove the remnants of unsuccessful communication, so
that the synchronisation that is enabled by parallel composition is enforced.
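The same idea can be sketched for CCS. The encoding is mine, not official CCS syntax: complementary actions carry a leading '~', synchronisation of complements yields τ, and restriction then enforces it.

```python
def compl(a):
    """The complementary action: compl('a') == '~a', compl('~a') == 'a'."""
    return a[1:] if a.startswith('~') else '~' + a

def ccs_par(g, h, trans):
    """One-step transitions of g | h; trans: state -> [(action, successor)]."""
    steps = []
    for a, g2 in trans.get(g, []):
        steps.append((a, (g2, h)))                 # G moves alone
    for b, h2 in trans.get(h, []):
        steps.append((b, (g, h2)))                 # H moves alone
    for a, g2 in trans.get(g, []):
        for b, h2 in trans.get(h, []):
            if a != 'tau' and b == compl(a):       # complementary actions
                steps.append(('tau', (g2, h2)))    # synchronise into tau
    return steps

def restrict(steps, L):
    """Restriction \\L: drop steps labelled by actions in L or their complements."""
    return [(a, s) for a, s in steps
            if a == 'tau' or (a not in L and compl(a) not in L)]

trans = {'R1': [('pass_on', 'S1')], 'R2': [('~pass_on', 'S2')]}
print(restrict(ccs_par('R1', 'R2', trans), {'pass_on'}))
# [('tau', ('S1', 'S2'))]
```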
Thursday 15-3-2012
We have treated the first paragraph of the introduction as well as
Definition 1 from the handout on structural operational semantics.
In addition we practiced the formal derivation of transitions out of
the transition rules of the structural operational semantics of a
language. While doing so I illustrated the role of the encapsulation
operators ∂H(P) of ACP. Formally there is one
such operator for every subset H of Act.
Essential features for the process algebra are
On Thursday we will discuss the remaining rules. The focus was on how to model data structures in process algebra. The lecture followed the first part of Section 4 of a technical report.
The content of the lecture can again be found in Section 4 of a technical report. Unfortunately I could not finish the presentation of all inference rules. They will be presented in the next lecture.
The content of the lecture can again be found in Section 4 of a technical report.
Lessons to be learned about the process algebra for Wireless Mesh Networks
This week we discussed Büchi automata (a.k.a. ω-automata).
Formally, a Büchi automaton is a tuple A = (Q,Σ,Δ,q0,F) that consists of the following components:
In a deterministic Büchi automaton, the transition relation Δ is replaced by a transition function δ that returns a single state. Note that deterministic and non-deterministic Büchi automata are not equally expressive. We have discussed other acceptance criteria, such as those of Muller and Rabin. Moreover, we have seen that Büchi automata are closed under union ∪ and complementation. The proof of the latter is very complex and was not shown.
Today we proved that Büchi automata are closed under intersection ∩.
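One common formulation of the product construction behind this closure result can be sketched as follows. The encoding of automata as tuples of Python sets is mine, and the flag bookkeeping follows the standard "wait for F1, then for F2" idea, though textbook details vary.

```python
def buchi_intersection(A1, A2):
    """Product of Büchi automata A = (Q, Sigma, Delta, q0, F), with Delta
    given as a set of (state, letter, state) triples.  States of the product
    are triples (q1, q2, i); the flag i in {1, 2} records whose accepting
    set we are currently waiting to visit."""
    Q1, S1, D1, q01, F1 = A1
    Q2, S2, D2, q02, F2 = A2
    Sigma = S1 & S2
    Q = {(q1, q2, i) for q1 in Q1 for q2 in Q2 for i in (1, 2)}
    Delta = set()
    for (p1, a, r1) in D1:
        for (p2, b, r2) in D2:
            if a != b or a not in Sigma:
                continue                 # both automata must read the letter
            for i in (1, 2):
                # flip the flag when the awaited accepting set is visited
                if i == 1 and p1 in F1:
                    j = 2
                elif i == 2 and p2 in F2:
                    j = 1
                else:
                    j = i
                Delta.add(((p1, p2, i), a, (r1, r2, j)))
    # accepting: flag 2 with the second component accepting; a run hits this
    # set infinitely often iff it visits both F1 and F2 infinitely often
    F = {(q1, q2, 2) for q1 in Q1 for q2 in Q2 if q2 in F2}
    return (Q, Sigma, Delta, (q01, q02, 1), F)

# Tiny example: one self-looping accepting state in each automaton.
A = ({'s'}, {'a'}, {('s', 'a', 's')}, 's', {'s'})
_, _, Delta, q0, F = buchi_intersection(A, A)
print(q0, sorted(F))
```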
The definition of bisimulation equivalence is in the first handout. We also defined when one process simulates another. This is like bisimulation, but in the transfer property the initiative is always taken by the simulated process, and the matching move should be taken by the simulating process. Two processes are simulation equivalent if one simulates the other and vice versa. This is not the same as bisimulation equivalence, for there we insist on a single relation that is at the same time a simulation in both directions. The processes ab+a(b+c) and a(b+c) simulate each other, but are not bisimulation equivalent.
I also presented a definition of bisimulation equivalence in terms of
consistent colourings.
A colouring of a labelled transition system is just a function that
associates a colour with every state. (Thus a colouring induces an
equivalence relation on states, and any equivalence relation induces a
colouring.) Now given a colouring, the potentials of a
state s of a certain colour, are all pairs (a,C)
with a an action and C a colour, such that s can
do an a-transition to a state with colour C.
The colouring is consistent if all states of the same colour
also have the same potentials. And two states are bisimulation
equivalent iff there exists a consistent colouring that gives them the
same colour.
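This characterisation suggests a direct (if naive) algorithm, sketched below under my own dict encoding of LTSs: start with one colour and repeatedly split colours by their potentials until the colouring is consistent. Applied to a(b+c) versus ab+ac it confirms the claim above.

```python
def potentials(s, trans, colour):
    """All pairs (a, C) such that s has an a-transition to a state coloured C."""
    return frozenset((a, colour[t]) for a, t in trans.get(s, []))

def coarsest_consistent_colouring(states, trans):
    colour = {s: 0 for s in states}          # start with a single colour
    while True:
        classes = {}
        for s in states:                     # split by (old colour, potentials)
            key = (colour[s], potentials(s, trans, colour))
            classes.setdefault(key, []).append(s)
        new = {}
        for i, cls in enumerate(sorted(classes.values(), key=min)):
            for s in cls:
                new[s] = i
        if new == colour:                    # consistent: no colour was split
            return colour
        colour = new

# a(b+c) rooted at 0, ab+ac rooted at 4
trans = {0: [('a', 1)], 1: [('b', 2), ('c', 3)],
         4: [('a', 5), ('a', 6)], 5: [('b', 7)], 6: [('c', 8)]}
col = coarsest_consistent_colouring(list(range(9)), trans)
print(col[0] == col[4])   # False: the two roots are not bisimilar
```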
Equivalently, we can define a colouring to be consistent by
saying that two states with the same colour should have the
same coloured traces. Here a coloured trace is a trace
interspersed with the colours of the states one travels through. It
turns out that in order to make sure that two states
have the same coloured traces, it suffices to check all traces with
only one action in them.
I also presented the Hennessy-Milner Logic as an instrument to
tell when two processes are not bisimilar. When two processes are
bisimulation equivalent this can be shown by displaying a bisimulation
between them (or a consistent colouring that colours their initial
states in the same way), but when they are not equivalent, showing that no
bisimulation could possibly exist is usually too much work. Instead it
suffices to present a formula in the Hennessy-Milner Logic that is
satisfied by one of the processes but not by the other. This is
covered in the first handout.
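Such a distinguishing formula can be checked mechanically; here is a sketch, where the tuple encoding of formulas is mine, not from the handout.

```python
def sat(s, f, trans):
    """Does state s satisfy HML formula f?  Formulas are tuples:
    ('T',) truth, ('not', f), ('and', f, g), ('dia', a, f) for <a>f."""
    if f[0] == 'T':
        return True
    if f[0] == 'not':
        return not sat(s, f[1], trans)
    if f[0] == 'and':
        return sat(s, f[1], trans) and sat(s, f[2], trans)
    if f[0] == 'dia':    # some a-successor of s satisfies the subformula
        return any(sat(t, f[2], trans) for a, t in trans.get(s, []) if a == f[1])
    raise ValueError(f)

# <a>(<b>T and <c>T) holds for a(b+c) (root 0) but not for ab+ac (root 4)
trans = {0: [('a', 1)], 1: [('b', 2), ('c', 3)],
         4: [('a', 5), ('a', 6)], 5: [('b', 7)], 6: [('c', 8)]}
phi = ('dia', 'a', ('and', ('dia', 'b', ('T',)), ('dia', 'c', ('T',))))
print(sat(0, phi, trans), sat(4, phi, trans))   # True False
```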
On processes that are equipped with a termination predicate on states
(like the accepting states from automata theory), partial traces and
completed traces are defined in such a way that we could just as well
have thought about the termination predicate that holds in a
state s as a special transition marked ✔ that goes
from s to a fresh state that is not connected in any other way.
In addition, two processes are called language equivalent if
they have the same traces leading to terminating states. In automata
theory such traces are called the words accepted by s.
The set of all words accepted by s is the language
accepted by s. It turns out that language equivalence is coarser
than partial trace equivalence, whereas completed trace equivalence is
finer than partial trace equivalence.
My paper
The Linear Time - Branching Time Spectrum I is a small
encyclopedia of equivalence relations; language equivalence appears in
Section 19.
Then I applied CCS as well as ACP to the specification and
implementation of a relay-race example, and concluded we need "weak"
notions of equivalence that abstract from τ-actions.
There are two runners R1 and R2
participating in the relay race, and while running they carry a stick.
Runner 1 starts running, and at the halfway-point passes the stick to
runner 2, who is waiting there. Then runner 2 runs the rest of the
race and delivers the stick at the finish.
The definitions of language equivalence, partial and completed trace
equivalence and bisimulation equivalence that we encountered last week
can all be regarded as definitions of strong partial trace
equivalence, strong bisimulation, etc., as long as we
treat τ like any other action. To arrive at a definition of weak
partial trace equivalence, we define a weak partial trace of a
process p as a strong partial trace of p from which all the
occurrences of τ have been omitted. Now two processes are
called weak partial trace equivalent if they have the same weak
partial traces.
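The definition amounts to a one-liner; a sketch, encoding traces as tuples (my convention):

```python
def weak(trace):
    """The weak trace: the strong trace with all occurrences of tau removed."""
    return tuple(a for a in trace if a != 'tau')

def weak_partial_trace_equivalent(traces_p, traces_q):
    """Compare two sets of strong partial traces modulo tau-removal."""
    return {weak(t) for t in traces_p} == {weak(t) for t in traces_q}

print(weak(('a', 'tau', 'b')))                                    # ('a', 'b')
print(weak_partial_trace_equivalent({('a', 'tau', 'b')}, {('a', 'b')}))  # True
```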
On Thursday we will define one of these approaches,
called branching bisimulation equivalence in terms of
consistent colourings. A research paper in which this notion is
defined can be found here. All we will
cover is the introduction to Section 3 (on page 7), half of
Definition 4, and the statement of Lemma 3 (half a page in total).
The intuition is that for two states to be equivalent, it should be
possible to provide a colouring of all states such that the two states
in question get the same colour. This colouring should
be consistent, which is a requirement that guarantees that all
states with the same colour have the same possible abstract
behaviours, and hence can be considered equivalent. This requirement is
formalised by saying that two states with the same colour should have
the same abstract coloured traces. Here a concrete coloured
trace is a trace interspersed with the colours of the states one
travels through, and an abstract coloured trace is made from it by
removing all τ-steps between states of the same colour.
It turns out to be the case that in order to make sure that two states
have the same abstract coloured traces, it suffices to check all
traces with at most one non-τ-action in it, or at most one change
of colour.
As a consequence, τ-transitions between states with the same
colour can be contracted without changing the behaviour of the system
in any essential way. Such transitions are called inert.
However, τ-transitions between states that necessarily have
different colours may not be contracted, because those states have
different potentials for future behaviours.
A colouring of a labelled transition system is just a function that
associates a colour with every state. (Thus a colouring induces an
equivalence relation on states, and any equivalence relation induces a
colouring.) Now given a colouring, the potentials of a
state s of a certain colour, are all pairs (a,C)
with a an action and C a colour, such that s can
do some inert transitions followed by a non-inert a-transition
to a state with colour C.
Here a transition is inert if it is labelled τ and moreover goes
between two states of the same colour. Thus if (a,C) is
a potential of a state s, then either a≠τ or C is
not the colour of s. The colouring is consistent if all
states of the same colour also have the same potentials. And two
states are branching bisimulation equivalent iff there exists a
consistent colouring that gives them the same colour.
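The potentials just described can be computed directly, giving a naive refinement procedure for branching bisimulation. Same caveats as before: the dict encoding and the simple fixed-point loop are my own sketch, not the efficient algorithm from the literature.

```python
def br_potentials(s, trans, colour):
    """Pairs (a, C) reachable from s via inert tau-steps (tau-steps staying
    in s's colour) followed by one non-inert step to a state coloured C."""
    seen, stack, pots = {s}, [s], set()
    while stack:
        p = stack.pop()
        for a, t in trans.get(p, []):
            if a == 'tau' and colour[t] == colour[s]:
                if t not in seen:            # inert: keep following silently
                    seen.add(t)
                    stack.append(t)
            else:
                pots.add((a, colour[t]))     # non-inert step observed
    return frozenset(pots)

def branching_colouring(states, trans):
    colour = {s: 0 for s in states}
    while True:
        classes = {}
        for s in states:
            key = (colour[s], br_potentials(s, trans, colour))
            classes.setdefault(key, []).append(s)
        new = {}
        for i, cls in enumerate(sorted(classes.values(), key=min)):
            for s in cls:
                new[s] = i
        if new == colour:
            return colour
        colour = new

# a.b (root 0) versus a.tau.b (root 3): the tau is inert
trans = {0: [('a', 1)], 1: [('b', 2)],
         3: [('a', 4)], 4: [('tau', 5)], 5: [('b', 6)]}
col = branching_colouring(list(range(7)), trans)
print(col[0] == col[3])   # True: the two roots are branching bisimilar
```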
In fact, the definition of a consistent colouring in terms of
"potentials" is equivalent to the one in terms of "abstract coloured traces".
I also introduced weak bisimulation equivalence and discussed
the difference with branching bisimulation equivalence.
See the first handout.
Next I explained what it means for an equivalence relation to be a
congruence for an operator, or, in other words, for the operator
to be compositional for the equivalence.
This is what makes formal verification of big systems given as parallel compositions feasible.
It is not hard to show that (strong) partial trace equivalence is a
congruence for encapsulation. This follows because the partial traces of
∂H(P) are completely determined by the partial
traces of the process P itself (namely, they are those partial traces
of P that contain no actions from H).
The same argument can be used to see that (strong or weak) partial
trace equivalence is a congruence for all operators of CCS and ACP.
Partial trace equivalence is inadequate to deal with deadlock.
Adding deadlock information to partial trace semantics
yields completed trace semantics. However, completed trace equivalence
is not a congruence for all operators. It fails for the restriction
operator of CCS, which is the same as the encapsulation operator of ACP.
Strong bisimulation equivalence on the other hand is a congruence for all
operators of ACP, CCS and CSP. This is a reason to use bisimulation rather
than partial or completed trace equivalence.
Yet, to be truly effective in the verification, we need a weak
equivalence, one that abstracts from internal transitions.
It turns out that neither weak nor branching bisimulation are congruences
for the +-operator of CCS and ACP. This could be a reason for the language
CSP not to use the +-operator, but instead an internal and external
choice, to be explained next time.
All operators of CSP turn out to be compositional for weak and branching
bisimulation equivalence. The people behind CCS and ACP chose a different
solution, namely to keep the +, but to change the bisimulations a bit to make
them congruences for the +, and all the other operators.
Therefore ACP uses rooted branching bisimulation equivalence,
also called branching bisimulation congruence; notation P =RBB Q.
Two processes P and Q are RBB equivalent if every initial transition P --a--> P′ of P is matched by a transition Q --a--> Q′ of Q with P′ and Q′ branching bisimulation equivalent, and vice versa; in particular an initial τ-step must be matched by a τ-step.
RBB equivalence is finer, i.e. more discriminating, than BB equivalence,
and it is a congruence for all operators of ACP, CCS and CSP.
Next, I presented the system description language CSP.
The syntax of CSP, the theory of Communicating Sequential Processes,
features the operations
Here are the definitions, in terms of process graphs without
predicates and without a notion of final states, of the operators
mentioned above:
Besides finishing the presentation of CSP, today I treated the
infinitary part of the infinitary Hennessy-Milner Logic again.
I also explained a theorem in Structural Operational Semantics, saying
that strong bisimulation is a congruence as soon as all operators
concerned have operational rules in the GSOS format.
See the handout on structural operational semantics.
A colouring of a labelled transition system is just a function that
associates a colour with every state. (Thus a colouring induces an
equivalence relation on states, and any equivalence relation induces a
colouring.) Now given a colouring, the potentials of a
state s of a certain colour, are all pairs (a,C)
with a an action and C a colour, such that s can
do some inert transitions followed by a non-inert a-transition
to a state with colour C.
Here a transition is inert if it is labelled τ and moreover goes
between two states of the same colour. Thus if (a,C) is
a potential of a state s, then either a≠τ or C is
not the colour of s. The colouring is consistent if all
states of the same colour also have the same potentials. And two
states are branching bisimulation equivalent iff there exists a
consistent colouring that gives them the same colour.
State s is divergent w.r.t. a colouring if there is an
infinite path of τ-transitions starting from s, passing
only through states with the same colour as s.
The colouring preserves divergence if for any colour either all
states of that colour are divergent w.r.t. the colouring, or none are.
Now two states are branching bisimulation equivalent with explicit
divergence if they receive the same colour by some consistent
colouring that preserves divergence.
Then I introduced Petri nets, and the way they model concurrent systems.
I indicated how to define the main operators from CCS/CSP and ACP on
Petri nets.
Thursday 19-4-2012
Today we explained the Hennessy-Milner Logic in depth.
Tuesday 25-4-2012
First I explained the partition refinement algorithm for
deciding whether or not two states in an LTS are strong bisimulation
equivalent. I handed out Section 3.1 of Chapter 6 of the Handbook of
Process Algebra; it describes this algorithm.
We can describe the behaviour of the two runners by the CCS expressions:
pass-on is the finish of runner 1 at the halfway-point,
where R1 passes the stick on to R2;
its complementary action (pass-on with an overbar) is the
action of runner 2 of accepting the stick from runner 1 and starting to run;
and finish is the action of runner 2 of arriving at the finish.
A CCS expression describing the whole interaction is
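One plausible way to write it (the action name start for runner 1's start is my assumption; the rest follows the description of the runners above) is:

```latex
R_1 = \mathit{start}.\,\mathit{pass\text{-}on}.\,\mathbf{0}
\qquad
R_2 = \overline{\mathit{pass\text{-}on}}.\,\mathit{finish}.\,\mathbf{0}
\qquad
(R_1 \mid R_2) \setminus \{\mathit{pass\text{-}on}\}
```

The restriction forces the two runners to synchronise on the hand-over, which then appears to the outside world as a τ; this is what motivates the weak equivalences discussed above.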
Weak language equivalence and weak completed trace equivalence can be
defined in the same way. However, a definition of weak bisimulation is
relatively tricky. As we saw by example, just contracting all
τ-moves in a labelled transition system doesn't yield what we want
to get. There are several, subtly different, approaches in the
literature.
Thursday 26-4-2012
As stated above, today we defined branching bisimulation equivalence
in terms of consistent colourings. You should also understand the
definition in terms of bisimulation relations between states; this is
explained in the first handout.
Tuesday 1-5-2012
First I extended the partition refinement algorithm to deal with
branching bisimulation. The key idea is that τ-moves between
states within the same block are skipped over, whereas τ-moves
between different blocks are treated as if τ is any normal action.
So a rooted branching bisimulation acts like a strong
bisimulation in the first step.
Thursday 3-5-2012
Today I explained the infinitary version of the Hennessy-Milner Logic,
HML∞. It uses abbreviations like
⟨a⟩^n φ
for the n-fold ⟨a⟩⟨a⟩…⟨a⟩φ, and
⋀_{n=1}^{∞} for an infinite conjunction.
The atomic actions of CSP consist of
all a, b, c, etc. from a given set A of
visible actions. The syntax of CSP does not provide for invisible
actions, although τ does occur in the process graph semantics of
CSP; it is introduced by internal choice and by concealment.
I didn't cover the parallel composition above yet (this will happen on
Tuesday), but I also presented a structural operational semantics of
the internal and external choice operators. By means of the example
that shows that the CCS +-operator is not compositional for branching
bisimulation equivalence I showed how the CSP external choice behaves
better in that regard.
Unlike the parallel compositions of CCS and ACP, the parallel
composition of CSP not only allows, but also enforces synchronisation.
For this reason CSP does not need a separate restriction or
encapsulation operator.
Tuesday 8-5-2012
The example of the relay race, presented earlier, is useful to illustrate
a key difference between CCS, CSP and ACP. To express the interaction
between the two runners, we need to do four things:
In CSP the parallel composition ||S takes care of 1, 2 and 3,
and there is a separate concealment operator to handle 4.
In CCS the parallel composition | takes care of 1, 2 and 4,
and there is a separate restriction operator to handle 3.
In ACP the parallel composition || takes care of 1 and 2,
and there are separate encapsulation and abstraction operators to handle 3 and 4, respectively.
Thursday 10-5-2012
Today we discussed deadlock, livelock and (possibly
fair) divergence, versus normal progress under a progress
assumption. In weak partial trace semantics, deadlock, livelock,
divergence and normal progress are all identified. In weak completed
trace semantics as treated in this course (there also are different
variants), deadlock, livelock and divergence are identified, and
distinguished from normal progress.
In CSP, as well as in David Walker's version of
bisimulation semantics, livelock and divergence are identified, and
distinguished from deadlock and normal progress, thereby implicitly assuming
choices to be unfair (a safe worst case scenario). In weak and
branching bisimulation semantics as used in CCS and ACP, choices are assumed
to be fair, and fair divergence is identified with normal progress,
and distinguished from livelock. Moreover, there is no distinction
between deadlock and livelock. This helps in the verification of
communication protocols, where you resend a message as often as
needed, assuming it will eventually arrive correctly.
I promised to present a variant of branching bisimulation next week,
the branching bisimulation with explicit divergence, which is
the finest useful weak equivalence found in the literature. It
distinguishes all four modes of progress.
Tuesday 15-5-2012
First I introduced a variant of branching bisimulation semantics that
distinguishes all four phenomena mentioned above: deadlock, livelock,
possibly fair divergence and normal progress.
This branching bisimulation equivalence with explicit divergence
can best be defined in terms of consistent colourings.
Thursday 17-5-2012
Process algebras such as ACP come with complete axiomatisations to
reason about processes without studying their semantics, i.e. the
associated process graphs. The axioms are equations with variables
that are true modulo bisimulation (or some other equivalence relation)
for any processes filled in for the variables. A set of axioms
is complete if every statement about two processes being
equivalent can be derived from the equations. It is impossible to give
a complete axiomatisation with only finitely many equations for
languages like ACP, unless you add some auxiliary operators.
Such operators are the left-merge and the communication merge.
I showed how a parallel composition between two recursive processes
could be eliminated in favour of choice and prefixing.
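A selection of the axioms involved in this elimination (hedged: notation and the exact axiom sets vary between the ACP textbooks cited above; I write the left merge as ⌊):

```latex
x \parallel y = (x \mathbin{\lfloor} y) + (y \mathbin{\lfloor} x) + (x \mid y) \\
(a.x) \mathbin{\lfloor} y = a.(x \parallel y) \\
(a.x) \mid (b.y) = \gamma(a,b).(x \parallel y)
  \quad \text{if } \gamma(a,b) \text{ is defined}
```

The first axiom splits a parallel composition into the cases "left moves first", "right moves first" and "both synchronise"; repeated application, together with the axioms for choice, eliminates the merge from recursion-free terms.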
More information can be found on Wikipedia.
Tuesday 22-5-2012
Seminar presentations by Cheng, on Probabilistic ACP, and
by Sandeep on LTL and Büchi automata.
Thursday 24-5-2012
Seminar presentation on modelling concurrency in LOTOS, by Marco.
The final exam is on Friday June 15, 9:00-12:00, in ASB 107 (Building E12).
You are allowed to take 4 pages (or 2 double-sided sheets) of notes
(either handwritten or typed) with you.
Rob van Glabbeek | homework | rvg@cse.unsw.edu.au |