Course notes for COMP6752: Comparative Concurrency Semantics
This file will keep track of the material covered so far, and list the
handouts distributed to date.
Monday 26-2-2018
Today I will introduce the topics to be treated in this course:
formalising specifications as well as implementations of concurrent
systems, studying the criteria for deciding whether an implementation
meets a specification, and techniques for proving this.
Look here for the broader context.
Both specifications and implementations can be represented by means
of models of concurrency such as
labelled transition systems (LTSs) or process graphs.
I present the definition below.
Another way to represent concurrent systems is by means of process
algebraic expressions.
These concepts will be illustrated by means of an example
(see below). It shows
how an implementation could be given as parallel composition of
components, enforcing synchronisation on certain actions.
When both specifications and implementations are given as LTSs, we
need a criterion to determine if they match. I will discuss a
criterion based on bisimulation, which matches the states of the
related LTSs. I show why it is useful here to abstract from
internal actions of the implementation.
When the specification and implementation are given as process
expressions, to decide whether they match, one considers the meaning,
or semantics of these expressions. Often, this meaning is
given by mapping them to LTSs.
A process graph is a triple (S,I,→), with S a set
(of states), I∈S an initial state,
and → a set of triples (s,a,t) with s,t
∈S and a an action, drawn from a set Act.
An LTS is just a process graph without initial state.
(But sometimes LTS is used as synonym for process graph, i.e. with
initial state.)
We'll also use process graphs with a fourth component ✔⊆S
indicating the final states of the process: those in which the
system can terminate successfully.
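These definitions are easy to animate in code. The following minimal Python sketch (class and method names are my own invention, not from any library) encodes a process graph (S,I,→) with the optional fourth component of final states:

```python
# Minimal encoding of a process graph (S, I, ->, final); all names here
# are illustrative, not from any standard library.

class ProcessGraph:
    def __init__(self, states, init, trans, final=frozenset()):
        self.states = set(states)   # S: the set of states
        self.init = init            # I: the initial state, an element of S
        self.trans = set(trans)     # ->: triples (s, a, t) with a an action
        self.final = set(final)     # states where successful termination is possible

    def steps(self, s):
        """All pairs (a, t) such that there is a transition (s, a, t)."""
        return {(a, t) for (u, a, t) in self.trans if u == s}

# the graph of a single action a: an initial and a final state
g = ProcessGraph({0, 1}, 0, {(0, 'a', 1)}, final={1})
```

The LTS underlying such a graph is obtained by simply ignoring the `init` component.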
Specifications and implementations can not only be represented by LTSs
or other models of concurrency, but also by process algebraic
expressions, where complex processes are built up from constants for
atomic actions using operators for sequential, alternative and
parallel composition. The most popular process algebraic languages
from the literature are CCS (the Calculus of Communicating Systems),
CSP (Communicating Sequential Processes) and ACP (the Algebra of
Communicating Processes). I present the syntax of
ACP, focusing on the partially synchronous parallel
composition operator.
The corresponding textbook is
J.C.M. Baeten & W.P. Weijland (1990): Process Algebra, Cambridge
University Press
or
W.J. Fokkink (2000):
Introduction to Process Algebra, Texts in
Theoretical Computer Science, An EATCS Series, Springer.
The syntax of ACP, the Algebra of Communicating Processes,
features the operations
 ε (successful termination)
(only present in the optional
extension ACP_{ε})
 δ (deadlock)
 a (action constant) for each action a.
 P.Q (sequential composition)
 P+Q (summation, choice
or alternative composition)
 P‖Q (parallel composition)
 ∂_{H}(P) (restriction, or
encapsulation) for each set of visible actions H.
 τ_{I}(P) (abstraction) for each set of
visible actions I
(only present in the optional
extension ACP_{τ}).
The atomic actions of ACP consist of
all a, b, c, etc. from a given set A
of visible actions, and one special action τ, which is meant to be
internal and invisible to the outside world. For each application, a
partial communication function γ: A×A → A is
chosen that tells for each two visible actions a and b whether
they synchronise (namely iff γ(a,b) is defined), and if so, what the
result of their synchronisation is: γ(a,b).
The communication function is required to be commutative and associative.
The invisible action cannot take part in synchronisations.
Here are the definitions of these operators, in terms of process
graphs extended with a predicate ✔ that signals successful termination.

ε is the graph with one state and no transitions.
This one state is the initial state. It is marked with ✔.

δ is the graph with one state and no transitions.
This one state is the initial state. It is not marked as terminating.

a is a graph with two states (an initial and a final one) and
one transition between them, labelled a. The final state is
marked with ✔.
 G.H is the process that first performs G, and upon
successful termination of G proceeds with H.
It is homework to write down a formal definition.
 G+H is obtained by taking the union of copies of G
and H with disjoint sets of states, and adding a fresh
state root, which will be the initial state of G+H.
For each transition I_{G} -a-> s in G, where
I_{G} denotes the initial state of G,
there will be an extra transition root -a-> s, and likewise,
for each transition I_{H} -a-> s in H, where
I_{H} denotes the initial state of H,
there will be an extra transition root -a-> s.
root is labelled with ✔ if
either I_{G} or I_{H} is.
 G‖H is obtained by taking the Cartesian product of the states of
G and H; that is, the states of G‖H are pairs (s,t) with
s a state from G and t a state from H. The initial state
of G‖H is the pair of initial states of G and H.
A state (s,t) is labelled ✔ if and only if both s
and t are labelled ✔. The transitions are
 (s,t) -a-> (s',t) whenever s -a-> s' is a transition in G,
 (s,t) -a-> (s,t') whenever t -a-> t' is a transition in H, and
 (s,t) -c-> (s',t') whenever s -a-> s' is a transition in G
and t -b-> t' is a transition in H, and γ(a,b)=c.
Intuitively, G‖H allows all possible interleavings of actions
from G and actions from H. In addition, it enables
actions to synchronise with their communication partners.
 ∂_{H}(G) is just G, but with all
actions in H omitted.
It is used to remove the remnants of unsuccessful communication, so
that the synchronisation that is enabled by parallel composition, is enforced.
 τ_{I}(G) is just G, but with all
actions in I renamed into τ.
Here is the example of the relay race presented in class.
There are two runners R_{1} and R_{2}
participating in the relay race, and while running they carry a baton.
Runner 1 starts running, and at the halfway point passes the baton to
runner 2, who is waiting there. Then runner 2 runs the rest of the
race and delivers the baton at the finish.
We can describe the behaviour of the two runners by the ACP expressions:
R_{1} = start.give
R_{2} = receive.finish
Here start is the action of runner 1 starting the race,
give is the finish of runner 1 at the halfway point,
where R_{1} gives the baton to R_{2},
receive is the action of runner 2 to accept the baton from runner 1 and start running,
and finish is the action of runner 2 of arriving at the finish.
An ACP expression describing the whole interaction is
τ_{I}(∂_{H}(R_{1} ‖ R_{2}))
with H={give,receive} and I={handover}.
Here the communication function γ is given
by γ(give,receive) = γ(receive,give) = handover.
The corresponding process graph was drawn in class; after deleting
unreachable parts it looks just like the process graph of the expression
start.τ.finish
However, from a very high level of abstraction, where we abstract from
the relay in the relay race, and rather focus on the baton, which we
see departing by the action start, and subsequently arriving at
the finish by the action finish, we may want to specify the
overall behaviour of this race as
start.finish
To formally prove that the implementation matches this specification,
we need a relation between these processes that abstracts from the
internal action τ.
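The relay race construction can also be carried out mechanically. Below is a sketch of the three ACP operators involved, with all encodings my own invention: a process graph is a tuple (states, init, trans, final), transitions are triples (s, a, t), and τ is spelled 'tau':

```python
# Hedged sketch of ACP parallel composition, encapsulation and abstraction
# on process graphs encoded as (states, init, trans, final).

def par(G, H, gamma):
    """Parallel composition G || H with communication function gamma."""
    (Sg, Ig, Tg, Fg), (Sh, Ih, Th, Fh) = G, H
    trans = set()
    for (s, a, s2) in Tg:                       # G moves on its own
        trans |= {((s, t), a, (s2, t)) for t in Sh}
    for (t, a, t2) in Th:                       # H moves on its own
        trans |= {((s, t), a, (s, t2)) for s in Sg}
    for (s, a, s2) in Tg:                       # synchronisation via gamma
        for (t, b, t2) in Th:
            if (a, b) in gamma:
                trans.add(((s, t), gamma[(a, b)], (s2, t2)))
    return ({(s, t) for s in Sg for t in Sh}, (Ig, Ih), trans,
            {(s, t) for s in Fg for t in Fh})

def encapsulate(G, H_acts):
    """partial_H(G): omit all transitions labelled by actions in H."""
    S, I, T, F = G
    return S, I, {(s, a, t) for (s, a, t) in T if a not in H_acts}, F

def abstract(G, I_acts):
    """tau_I(G): rename all actions in I into tau."""
    S, I, T, F = G
    return S, I, {(s, 'tau' if a in I_acts else a, t) for (s, a, t) in T}, F

# The relay race: R1 = start.give, R2 = receive.finish
R1 = ({0, 1, 2}, 0, {(0, 'start', 1), (1, 'give', 2)}, {2})
R2 = ({3, 4, 5}, 3, {(3, 'receive', 4), (4, 'finish', 5)}, {5})
gamma = {('give', 'receive'): 'handover', ('receive', 'give'): 'handover'}
race = abstract(encapsulate(par(R1, R2, gamma),
                            {'give', 'receive'}), {'handover'})
```

The reachable part of `race` is exactly the graph of start.τ.finish, as drawn in class.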
Tuesday 27-2-2018
Today I presented the syntax and semantics of the process algebra ACP.
See above for details.
The semantics presented was of the denotational kind,
and in terms of process graphs equipped with a state label ✔,
indicating successful termination. Here "denotational" entails that
each constant denotes a process graph (up to isomorphism)
and each ACP operator denotes an operation on process graphs
(creating a new graph out of one or two argument graphs).
I also treated the optional extension ACP_{ε},
whose semantics is given by process graphs (or LTSs) in which not only
end states but also intermediate states may be marked ✔.
Monday 5-3-2018
I discussed the concept of an equivalence relation, and recalled
that any equivalence relation on a set D gives rise to a partitioning
of D into equivalence classes (see Homework 1). Now one equivalence
relation ≡ is called finer (or more discriminating)
than another equivalence relation ~ if each ≡-equivalence class
is included in a ~-equivalence class, i.e. when p≡q
implies p~q for all p,q in D. Conversely, ~ is
called coarser than ≡.
In this course I will present a lattice of semantic equivalence
relations on process graphs. These represent different points of view
on whether two processes are roughly the same. In such a lattice,
equivalences are classified with respect to their discriminating power.
As part of this lattice of semantic equivalences I presented 8 = 2×2×2
versions of trace semantics, all defined on process graphs
G = (S,I,→) that are not equipped with a termination
predicate (= final states).
Two processes p and q are partial trace
equivalent, notation p =_{PT} q, if they have the
same partial traces. Here a partial trace of a process p
is the sequence of transition labels encountered on a path in the
process graph of p, starting from the initial state. Here a path can
be defined as an alternating sequence of states and transitions, where
each transition goes from the state before it to the state after it.
Processes p and q are completed trace equivalent,
notation p =_{CT} q, if moreover they have the same
completed traces. Here a completed trace is the sequence of transition
labels encountered on a maximal path; one that cannot be prolonged at the end.
For process graphs without a termination predicate, a path of a
process p is an alternating sequence of states and transitions,
starting from the state p and either infinite or ending in a
state, such that each transition goes from the state before it to the
state after it. It is complete if it is either infinite or ends
in a state from which no further transitions are possible.
A completed trace of p is the sequence of labels of the
transitions in a complete path. We write CT^{fin}(p) for the set of
finite completed traces of process p, and CT^{∞}(p)
for the set of all finite completed traces of p together with
its infinite traces. When writing just CT(p) it depends on
context which of the two is meant (i.e. this differs from one paper to
the next). In this class I defined CT(p) to mean CT^{fin}(p).
Partial traces are defined likewise, but can end anywhere.
We write PT^{fin}(p) for the set of
finite partial traces of process p,
and PT^{∞}(p)
for the set of all finite partial traces of p together with
its infinite traces. Normally, PT(p) means PT^{fin}(p).
Now processes p and q are infinitary completed trace equivalent,
notation p =_{CT}^{∞} q,
if CT^{∞}(p) = CT^{∞}(q).
It is a theorem that this implies that also PT^{∞}(p) = PT^{∞}(q).
Processes p and q are finitary completed trace equivalent,
notation p =_{CT}^{fin} q or just p =_{CT} q,
if CT^{fin}(p) = CT^{fin}(q) and PT^{fin}(p) = PT^{fin}(q).
Here the last requirement cannot be skipped.
Likewise, processes p and q are infinitary partial trace equivalent,
notation p =_{PT}^{∞} q,
if PT^{∞}(p) = PT^{∞}(q).
They are finitary partial trace equivalent,
notation p =_{PT} q,
if PT(p) = PT(q).
Together, these four equivalence notions form a square when ordered by
discriminating power.
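For finite behaviours these definitions are easy to animate. The sketch below (function name my own invention) enumerates partial and completed traces up to a length bound, since a graph with loops has infinitely many finite traces:

```python
# Enumerate the finite partial and completed traces of a process graph,
# up to a depth bound (sketch; transitions are triples (s, a, t)).

def traces(init, trans, depth):
    partial, completed = set(), set()

    def go(s, prefix):
        partial.add(prefix)
        succs = [(a, t) for (u, a, t) in trans if u == s]
        if not succs:                 # a maximal path ends here
            completed.add(prefix)
        if len(prefix) < depth:
            for (a, t) in succs:
                go(t, prefix + (a,))

    go(init, ())
    return partial, completed

# ab + ac and a(b+c): same partial and completed traces
g1 = {(0, 'a', 1), (1, 'b', 2), (0, 'a', 3), (3, 'c', 4)}
g2 = {(5, 'a', 6), (6, 'b', 7), (6, 'c', 8)}
```

Both example graphs yield the partial traces {ε, a, ab, ac} and the completed traces {ab, ac}, even though, as we will see, they are not bisimilar.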
One of the things we learned from the example of the relay race
is that we need an equivalence relation on processes that abstracts
from the internal action τ.
To formally prove that the implementation matches this specification,
we need a semantic equivalence that makes start.τ.finish equivalent with
start.finish. Such equivalences are called weak
or abstract. In contrast, an equivalence that treats τ
like any other action is strong or concrete.
The definitions of partial and completed trace
equivalence we encountered above
can all be regarded as definitions of strong partial trace
equivalence and strong completed trace equivalence, as long as we
treat τ like any other action. To arrive at a definition of weak
partial trace equivalence (=_{WPT}), we define a weak partial trace of a
process p as a strong partial trace of p from which all the
occurrences of τ have been omitted. Now two processes are
called weak partial trace equivalent if they have the same weak
partial traces. Weak completed trace equivalence, and the
infinitary versions, can be defined in the same vein.
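The passage from strong to weak traces is just erasure of τ; as a small sketch (names invented, τ spelled 'tau', traces as tuples of action names):

```python
# Weak traces: delete all occurrences of tau from the strong traces.

def weak(trace_set, tau='tau'):
    return {tuple(a for a in tr if a != tau) for tr in trace_set}

# start.tau.finish and start.finish are weak partial trace equivalent:
strong1 = {(), ('start',), ('start', 'tau'), ('start', 'tau', 'finish')}
strong2 = {(), ('start',), ('start', 'finish')}
```

Applying `weak` to both sets yields the same set of weak partial traces, formalising the relay race comparison above.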
On processes that are equipped with a termination predicate on states
(like the accepting states from automata theory), partial traces and
completed traces are defined in such a way that we could just as well
have thought about the termination predicate that holds in a
state s as a special transition marked ✔ that goes
from s to a fresh state that is not connected in any other way.
Thus, for example, CT^{∞}(a.(ε+b.δ))
= {a✔, ab}.
In addition, two processes are called language equivalent if
they have the same traces leading to terminating states. In automata
theory such traces are called words, accepted by s.
The set of all words accepted by s is the language
accepted by s. It turns out that language equivalence is coarser
than partial trace equivalence, whereas completed trace equivalence is
finer than partial trace equivalence.
My paper
The Linear Time - Branching Time Spectrum I is a small
encyclopedia of (strong, or concrete) equivalence relations;
language equivalence appears in Section 19.
Tuesday 6-3-2018
I discussed why one would want to distinguish the processes
b + c and τ.b+τ.c. The reason is that in a context
of an environment that admits synchronisation on b,
but not on c, the first process always performs the (weak)
trace b,
whereas the second has the possibility to reach a state
of deadlock after doing the internal step τ.
Deadlock is a (bad) state of a system, in which no further actions are
possible, and the system does not terminate successfully.
For the same reason, one would want to distinguish the systems
Q := τ.(b + c) and P := τ.b + τ.c.
However, these two systems are equivalent under each of the 8 variants
of trace equivalence discussed above.
For this reason we need equivalence relations on process graphs or labelled
transition systems that are finer than partial or completed trace
equivalence, in the sense that they make more distinctions.
The definition of bisimulation equivalence, also called bisimilarity,
can be found in this handout,
which you should read entirely, except for the sections
"Modal logic", "Non-wellfounded sets" and "Concurrency".
In that handout, LTSs and process graphs have an extra component ⊨
that is used to tell whether certain predicates, taken from a set P,
hold in certain states of our processes. In this course, we typically
consider only one predicate, namely ✔, telling that successful
termination is possible in a state. For this reason, a quadruple
(S,I,→,⊨) is presented as (S,I,→,✔).
More commonly, one ignores the component ⊨ or ✔ completely.
On the set of all process graphs we can study multiple equivalence
relations and order them w.r.t. their discriminating power.
I drew a spectrum of equivalence relations, with finer ones
located above coarser ones.
In this spectrum I located partial trace equivalence,
completed trace equivalence and
bisimulation equivalence, and treated examples showing their differences.
Bisimulation equivalence is at the top of the spectrum and
partial trace equivalence near the bottom. Completed trace equivalence
is somewhere in between.
Bisimulation equivalence is called a branching time equivalence,
because it takes into account at which point two different traces
branch off. Partial and completed trace equivalence on the other hand
are linear time equivalences, as they only focus on the linear
executions of a process, without taking branching into account.
The spectrum of weak trace equivalences is the same as in the strong case:
weak completed trace equivalence is finer (more
discriminating) than weak partial trace equivalence.
A weak version of bisimilarity is even finer.
Defining a weak form of bisimilarity is relatively tricky. As
we will see later by example, just contracting all τ-moves in a labelled
transition system doesn't yield what we want to get. There are
several, subtly different, approaches in the literature. The most
prominent ones are weak bisimilarity and branching bisimilarity,
both defined in the abovelinked handout. I presented the definition of
weak bisimilarity in class.
Monday 12-3-2018
I discussed deadlock, livelock and (possibly
fair) divergence, versus normal progress under a progress
assumption. Some formal definitions are
in this handout, which you should read.
A deadlock occurs in a state where no further actions are possible.
A livelock occurs in a state where no further visible (non-τ) actions are possible,
but always an internal action (τ) is possible.
A divergence is an infinite sequence of internal actions.
A livelock is a special kind of divergence. A divergence that is not a
livelock could be regarded "possibly fair" if one makes an appropriate
fairness assumption, because one might escape from it.
A progress assumption roughly states that if we are in a state with an
outgoing transition, we will not stay in that state forever.
A fairness assumption roughly says, of a choice between a certain
action g and an alternative, that if we encounter this choice
infinitely often, sooner or later g will be chosen.
A global fairness assumption assumes this for each and every
choice in the system.
|   |                 |                 | WPT^{fin} |
| P | normal progress | a.b.0           | $\{ab,a,\epsilon\}$ |
| Q | fair divergence | a.Δb.0          | $\{ab,a,\epsilon\}$ |
| R | livelock        | a.(b.0 + τ.Δ0)  | $\{ab,a,\epsilon\}$ |
| S | deadlock        | a.(b.0 + τ.0)   | $\{ab,a,\epsilon\}$ |
In the above table, the unary operator $\Delta$ appends a $\tau$-loop
at the beginning of its argument. Formally, $\Delta G$ is the process
graph created out of $G$ by adding a fresh initial state $I$, together
with a fresh $\tau$-loop I -$\tau$-> I, and a
transition I -$\alpha$-> s for each transition
I_{G} -$\alpha$-> s from $G$.
The process graphs of the four processes in the above table were drawn in class.
If we assume fairness, "normal progress" may be identified with "fair divergence",
but in the absence of a fairness assumption only "normal progress"
satisfies the liveness property that b will happen eventually.
In weak partial trace semantics, deadlock, livelock,
divergence and normal progress are all identified.
The weak partial traces of all four processes are the same.
This week it is homework to invent definitions of completed trace
equivalence that make different choices in this matter.
I also treated weak and branching bisimulation equivalence, from this
handout;
read the section "Abstraction".
I also characterised branching bisimilarity in terms of
consistent colourings:
A colouring of a labelled transition system is just a function that
associates a colour with every state. (Thus a colouring induces an
equivalence relation on states, and any equivalence relation induces a
colouring.) Now given a colouring, the potentials of a
state s of a certain colour are all pairs (a,C)
with a an action and C a colour, such that s can
do some inert transitions followed by a non-inert a-transition
to a state with colour C.
Here a transition is inert if it is labelled τ and moreover goes
between two states of the same colour. Thus if (a,C) is
a potential of a state s, then either a≠τ or C is
not the colour of s. The colouring is consistent if all
states of the same colour also have the same potentials. Now one can
show that two states are branching bisimulation equivalent iff there
exists a consistent colouring that gives them the same colour.
Strong bisimilarity can be characterised in the same way; the only
difference is that no actions count as inert. Thus a potential (a,C)
simply means one can do an a-transition to a state with colour C.
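This characterisation suggests a simple algorithm: start with a single colour and repeatedly split colours whose states disagree on their potentials, until the colouring is consistent. A sketch for the strong case (names invented; transitions are triples (s, a, t)):

```python
# Strong bisimilarity via colour refinement, following the
# consistent-colouring characterisation: refine until equally
# coloured states have equal potentials.

def strong_bisim_colours(states, trans):
    colour = {s: 0 for s in states}             # one colour to start
    while True:
        def potentials(s):
            # strong case: no action is inert, so a potential (a, C)
            # just records an a-step to a state coloured C
            return frozenset((a, colour[t]) for (u, a, t) in trans if u == s)
        sig = {s: (colour[s], potentials(s)) for s in states}
        fresh, new = {}, {}
        for s in sorted(states):                # stable renumbering
            if sig[s] not in fresh:
                fresh[sig[s]] = len(fresh)
            new[s] = fresh[sig[s]]
        if new == colour:                       # colouring is consistent
            return colour
        colour = new

# ab + ac versus a(b+c): combined into one transition relation
trans = {('p0', 'a', 'p1'), ('p0', 'a', 'p2'), ('p1', 'b', 'p3'),
         ('p2', 'c', 'p4'),
         ('q0', 'a', 'q1'), ('q1', 'b', 'q2'), ('q1', 'c', 'q3')}
states = {'p0', 'p1', 'p2', 'p3', 'p4', 'q0', 'q1', 'q2', 'q3'}
col = strong_bisim_colours(states, trans)
```

Two states are strongly bisimilar iff the returned colouring assigns them the same colour; here p0 and q0 get different colours, matching the example treated in class.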
Tuesday 13-3-2018
In order to describe infinite processes in a language like ACP, we add
a bunch of constants X,Y,Z to the language, known as process names.
Each process name X comes with a defining equation X = P_{X}
where P_{X} is any expression in the language, possibly
including X, or any other process names. The set of all such
defining equations (that are created separately for each application
of the language) is called a recursive specification. If every occurrence of a
process name in P_{X} lies within the scope of a
subexpression a.P'_{X} of P_{X}, the
recursive specification is guarded. Process names from a
guarded recursive specification are a good way to define (possibly
infinite) processes. An example is an a-loop, defined by the
recursive specification X = a.X.
An alternative approach is to introduce variables X,Y,Z to the language,
as well as recursion constructs <X|S>. Here S is
a set of recursive equations X = P_{X},
and <X|S> specifies "the process X that satisfies the
equations S". We normally restrict attention to the guarded
case, in which there is only one process X satisfying S.
The syntax of CCS, the Calculus of Communicating Systems,
features the operations
 0 (inaction)
 a.P (action prefix) for each action a.
 P+Q (summation, choice
or alternative composition)
 P|Q (parallel composition)
 P\a (restriction) for each action a.
 P[f] (relabelling) for each function f:A > A.
The atomic actions of CCS consist of
all a, b, c, etc. from a given set A,
together with the complementary actions
ā for each a
in A, and one special action τ, which is meant to be
internal and invisible to the outside world. By convention, if we put
an overbar over a complementary action
ā, we get a again.
Here are the definitions, in terms of process graphs without
predicates and without a notion of final states, of the operators
mentioned above:

0 is the graph with one state and no transitions.
This one state is the initial state.
 a.G is obtained from G by adding a new state, which will be
the initial one, and a new a-labelled transition from this new
state to the initial state of G (which is no longer initial in a.G).
 G+H is obtained by taking the union of copies of G
and H with disjoint sets of states, and adding a fresh
state root, which will be the initial state of G+H.
For each transition I_{G} -a-> s in G, where
I_{G} denotes the initial state of G,
there will be an extra transition root -a-> s, and likewise,
for each transition I_{H} -a-> s in H, where
I_{H} denotes the initial state of H,
there will be an extra transition root -a-> s.
We can think of this operator as unwinding the initial states
(roots) of G and H, and then gluing together
these roots to form the new initial state of G+H.
 G|H is obtained by taking the Cartesian product of the states of
G and H; that is, the states of G|H are pairs (s,t) with
s a state from G and t a state from H. The initial state
of G|H is the pair of initial states of G
and H. The transitions are
 (s,t) -a-> (s',t) whenever s -a-> s' is a transition in G,
 (s,t) -a-> (s,t') whenever t -a-> t' is a transition in H, and
 (s,t) -τ-> (s',t') whenever s -a-> s' is a transition in G
and t -ā-> t' is a transition in H.
Intuitively, G|H allows all possible interleavings of actions
from G and actions from H. In addition, it enables
actions a to synchronise with their complementary actions
ā; the result of such a synchronisation is the internal action τ.
 G\a is just G, but with all actions a and
ā omitted.
It is used to remove the remnants of unsuccessful communication, so
that the synchronisation that is enabled by parallel composition, is enforced.
 G[f] is the graph G in which each label a
is replaced by f(a), and likewise each label ā
by the complement of f(a).
Processes with loops are specified by means of recursive specifications,
just as for ACP (see above).
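The CCS parallel composition can be sketched in the same style as the denotational definitions above. Here complements are encoded as ('bar', a), an invented convention, and τ is spelled 'tau':

```python
# Sketch of CCS parallel composition on graphs (states, init, trans):
# interleaving plus handshaking, where a synchronises with its
# complement and the result is tau.

def bar(a):
    """The complement: bar(a) = ('bar', a) and bar(('bar', a)) = a."""
    return a[1] if isinstance(a, tuple) and a[0] == 'bar' else ('bar', a)

def ccs_par(G, H):
    (Sg, Ig, Tg), (Sh, Ih, Th) = G, H
    trans = set()
    for (s, a, s2) in Tg:                       # G moves on its own
        trans |= {((s, t), a, (s2, t)) for t in Sh}
    for (t, a, t2) in Th:                       # H moves on its own
        trans |= {((s, t), a, (s, t2)) for s in Sg}
    for (s, a, s2) in Tg:                       # handshaking: result is tau
        for (t, b, t2) in Th:
            if b == bar(a):
                trans.add(((s, t), 'tau', (s2, t2)))
    return ({(s, t) for s in Sg for t in Sh}, (Ig, Ih), trans)

# a in parallel with its complement: one interleaving plus one handshake
G = ({0, 1}, 0, {(0, 'a', 1)})
H = ({2, 3}, 2, {(2, ('bar', 'a'), 3)})
```

Unlike the ACP version, no communication function is needed: the handshaking discipline is fixed by the language.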
The main differences between ACP and CCS are:
 ACP makes a distinction between deadlock and
successful termination. As a consequence, action prefixing can be
replaced by action constants and a general sequential composition.
This would not work for CCS, as the behaviour of the sequential
composition depends on the successful termination of the first
component, which is not modelled in CCS.
 ACP adds two auxiliary operators, the left merge and the
communication merge, to enable a
finite equational axiomatisation of the parallel composition.
These operators do not occur in CCS.
 Whereas CCS combines communication and abstraction from internal
actions in one operator, in ACP these activities are separated. In CCS
the result of any communication is the unobservable action τ. In
ACP it is an observable action, from which (in the extended language
ACP_{τ}) one can abstract by applying an abstraction
operator τ_{I}, renaming designated actions into τ.
 CCS adheres to a specific communication format, admitting
only handshaking communication, whereas ACP allows a variety of
communication paradigms, including ternary communication, through the
choice of the communication function γ.
Monday 19-3-2018
Today we had an excellent presentation on denotational semantics, by Ryan Barry.
Here are the slides.
In preparation for tomorrow's student presentation I presented the
structural operational semantics of CCS. A more thorough introduction
to structural operational semantics (SOS) will be given by Kai next week.
In the denotational semantics we saw before,
the denotation of an expression is a process graph (an LTS
with initial state) and the meaning of a composed expression
like PQ is a process graph obtained in a particular way from
the process graphs of the arguments P and Q.
In the operational framework we drop the distinction between processes
and states. There will be one humongous LTS (without initial state)
whose states are all possible closed CCS expressions, and the transitions
between states are given by certain operational rules.
Here is the SOS of CCS. (Only read the first
section.) What is called $\textbf{fix}_X S$ in the handout was called
<X|S> in class. The rule for recursion involves
substitution; we will clarify it further later on.
To get an idea of the difference between process graphs and LTSs, read
the definitions in this handout,
and especially note how Definition 3 of bisimulation avoids having the
clause on initial states.
Tuesday 20-3-2018
Today Minjie Shen gave his presentation,
starting with a smallstep operational semantics of a Mini Robot
Language with commands init, move and turn,
and then giving an operational semantics of quantum computing
following the book Foundations of Quantum Programming by Mingsheng Ying (2016).
Here is an outline of the talk and
slides (not presented,
since Minjie gave a whiteboard talk).
Tutorial
Answer to HW3, Q1:
I only consider process graphs G without a termination predicate.
A rooted path of G is an alternating
sequence of states and transitions, starting from the initial state of
G and either infinite or ending in a state, such that each
transition goes from the state before it to the state after it.
For π such a path, let l(π) be the (finite or infinite)
sequence of labels of the transitions of π, in the order in which
these transitions occur in π. Let wl(π) be the same
sequence after omitting all τs.
A state s is a deadlock state if it has no outgoing transitions.
A finite path π is a deadlock path if its last state is a deadlock state.
A state s is a livelock state if from s we cannot reach
a deadlock state, nor a state from which a visible action can be performed.
A finite path π is a livelock path if its last state is a livelock state.
A state s is a divergence state if it is the beginning of an infinite path of τtransitions.
A finite path π is a divergence path if its last state is a divergence state.
Let WCT^{δ}(G) := {wl(π) | π is a deadlock path of G}.
Let WCT^{λ}(G) := {wl(π) | π is a livelock path of G}.
Let WCT^{Δ}(G) := {wl(π) | π is a divergence path of G}.
Note that WCT^{λ}(G) is always a subset of WCT^{Δ}(G).
These sets are illustrated on the examples given earlier:
|                 |                 | WPT^{fin} | WCT^{δ} | WCT^{λ} | WCT^{Δ} |
| normal progress | a.b.0           | $\{ab,a,\epsilon\}$ | $\{ab\}$ | $\emptyset$ | $\emptyset$ |
| fair divergence | a.Δb.0          | $\{ab,a,\epsilon\}$ | $\{ab\}$ | $\emptyset$ | $\{a\}$ |
| livelock        | a.(b.0 + τ.Δ0)  | $\{ab,a,\epsilon\}$ | $\{ab\}$ | $\{a\}$ | $\{a\}$ |
| deadlock        | a.(b.0 + τ.0)   | $\{ab,a,\epsilon\}$ | $\{ab,a\}$ | $\emptyset$ | $\emptyset$ |
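For finite graphs the three kinds of states are computable; a sketch (names invented, τ spelled 'tau'), using the fact that in a finite graph a divergence state is one from which a τ-cycle is reachable by τ-steps alone:

```python
# Classify the states of a finite graph as deadlock, livelock or
# divergence states, following the definitions above.

def reach(s, trans, labels=None):
    """States reachable from s, optionally via transitions with the given labels only."""
    seen, todo = {s}, [s]
    while todo:
        u = todo.pop()
        for (v, a, t) in trans:
            if v == u and (labels is None or a in labels) and t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

def classify(states, trans):
    out = {s: [(a, t) for (u, a, t) in trans if u == s] for s in states}
    deadlock = {s for s in states if not out[s]}
    livelock = {s for s in states
                if not reach(s, trans) & deadlock
                and all(a == 'tau'
                        for r in reach(s, trans) for (a, t) in out[r])}
    # states lying on a tau-cycle, then states that can tau-reach one
    on_cycle = {t for (t, a, u) in trans
                if a == 'tau' and t in reach(u, trans, {'tau'})}
    divergent = {s for s in states if reach(s, trans, {'tau'}) & on_cycle}
    return deadlock, livelock, divergent

# the livelock example a.(b.0 + tau.Delta 0) from the table
trans = {('r0', 'a', 'r1'), ('r1', 'b', 'r2'),
         ('r1', 'tau', 'r3'), ('r3', 'tau', 'r3')}
d, l, v = classify({'r0', 'r1', 'r2', 'r3'}, trans)
```

On this example the classification matches the table: r2 is a deadlock state, r3 a livelock state, and r1 and r3 are divergence states.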
The question was: Modify the definition of weak completed trace semantics in six
different ways, namely such that
 deadlock = livelock and fair divergence = normal progress,
 fair divergence = normal progress, and livelock differs from
deadlock and from normal progress,
 deadlock = livelock and fair divergence differs from
livelock and from normal progress,
 livelock = fair divergence, different from deadlock and progress,
 deadlock = livelock = fair divergence, different from progress,
 all four phenomena are distinguished.
The answers:
 Let $G =_{\rm WCT}^a H$ if WPT^{fin}(G) = WPT^{fin}(H) and WCT^{δ}(G)
$\cup$ WCT^{λ}(G) = WCT^{δ}(H) $\cup$ WCT^{λ}(H).
 Let $G =_{\rm WCT}^b H$ if WPT^{fin}(G) = WPT^{fin}(H) and WCT^{δ}(G) = WCT^{δ}(H) and WCT^{λ}(G) = WCT^{λ}(H).
 Let $G =_{\rm WCT}^c H$ if WPT^{fin}(G) = WPT^{fin}(H) and WCT^{δ}(G)
$\cup$ WCT^{λ}(G) = WCT^{δ}(H) $\cup$ WCT^{λ}(H) and WCT^{δ}(G)
$\cup$ WCT^{Δ}(G) = WCT^{δ}(H) $\cup$ WCT^{Δ}(H).
 Let $G =_{\rm WCT}^d H$ if WPT^{fin}(G) = WPT^{fin}(H) and WCT^{δ}(G) = WCT^{δ}(H) and WCT^{Δ}(G) = WCT^{Δ}(H).
 Let $G =_{\rm WCT}^e H$ if WPT^{fin}(G) = WPT^{fin}(H) and WCT^{δ}(G)
$\cup$ WCT^{Δ}(G) = WCT^{δ}(H) $\cup$ WCT^{Δ}(H).
 Let $G =_{\rm WCT}^f H$ if WPT^{fin}(G) = WPT^{fin}(H) and WCT^{δ}(G) = WCT^{δ}(H) and WCT^{λ}(G) = WCT^{λ}(H) and WCT^{Δ}(G) = WCT^{Δ}(H).
Monday 26-3-2018 to Tuesday 10-4-2018 by Kai Engelhardt
See this page,
also for homework.
Monday 16-4-2018 by Peter Höfner: Temporal logic
Today I introduced the logics CTL and CTL*. The syntax and the semantics can be found in any textbook on temporal logic, e.g. [HR04].
For CTL I explained how fixed points can be used to check whether a given Kripke structure satisfies E_U_ and A_U_.
I then continued the lecture by discussing how two different logics can be compared w.r.t. expressiveness.
As a trivial example, we showed LTL ⊆ CTL* and CTL ⊂ CTL* .
A superset of today's contents can be found in the slides of COMP3153/9153.
In particular, Lecture 3 (Syntax and Semantics of LTL/CTL and CTL*) and
Lecture 5 (calculating fixed points of E_U_ and A_U_).
[HR04] Michael Huth and Mark Ryan. Logic in Computer Science (2nd edition). Cambridge University Press, 2004.
Tuesday 17-4-2018 by Peter Höfner
Having taught you yesterday how to compare logics, today we showed
that LTL and CTL are incomparable.
We did this by proving that there cannot exist an LTL-formula
that is equivalent to the CTL-formula
AG EF a, and by showing that there cannot exist
a CTL-formula equivalent
to FG a (see e.g. [BK08]).
At the end of the lecture we discussed how Kripke structures can be
translated into LTSs, and vice versa.
[BK08] Christel Baier and Joost-Pieter Katoen. Principles of model
checking. MIT Press, 2008. ISBN 978-0-262-02649-9.
Monday 23-4-2018: Congruence
I defined what it means for an equivalence relation to be a
congruence for an operator, or, in other words, for the operator
to be compositional for the equivalence.
This is what makes formal verification of big systems given as parallel compositions feasible.
An equivalence ~ is a congruence for a language L if
P ~ Q implies that C[P] ~ C[Q]
for every context C[ ]. Here a context C[ ] is an
L-expression with a hole in it, and C[P] is the result of
plugging in P for the hole.
I also explained a theorem in structural operational semantics, saying
that strong bisimilarity is a congruence as soon as all operators
concerned have operational rules in the GSOS format.
See the handout on structural operational semantics
for details.
As a consequence of this result, strong bisimilarity is a congruence
for all operators of ACP, CCS and CSP.
It is not hard to show that (strong) partial trace equivalence is a
congruence for encapsulation. This follows because the partial traces of
∂_{H}(P) are completely determined by the partial
traces of the process P itself (namely by removing all partial
traces with actions from H).
The same argument can be used to see that partial
trace equivalence is a congruence for all operators of CCS and ACP.
Partial trace equivalence is inadequate to deal with deadlock.
Adding deadlock information to partial trace semantics
yields completed trace semantics. However, completed trace equivalence
is not a congruence for all operators. It fails for the restriction
operator of CCS, which is the same as the encapsulation operator of ACP.
It also fails for the parallel composition of CSP.
A counterexample: the processes $ab+ac$ and $a(b+c)$ are completed
trace equivalent, but after placing them in a context $\partial_{\{b\}}(\_)$
or $\_\backslash b$ or $\_\|_{\{b\}} 0$ they are not.
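This counterexample can be checked mechanically. The sketch below (not course material; state names are invented, and the `blocked` parameter plays the role of the set of actions removed by $\partial_H$ or restriction) computes completed traces:

```python
# Completed traces of a finite acyclic LTS, with an optional set of
# blocked actions modelling encapsulation/restriction. A sketch.
def completed_traces(s, trans, blocked=frozenset()):
    steps = [(a, t) for a, t in trans[s] if a not in blocked]
    if not steps:
        return {""}  # no moves left: the run is complete here
    return {a + rest for a, t in steps
            for rest in completed_traces(t, trans, blocked)}

# ab + ac
g1 = {"p": {("a", "p1"), ("a", "p2")}, "p1": {("b", "p3")},
      "p2": {("c", "p4")}, "p3": set(), "p4": set()}
# a(b + c)
g2 = {"q": {("a", "q1")}, "q1": {("b", "q2"), ("c", "q3")},
      "q2": set(), "q3": set()}

print(completed_traces("p", g1) == completed_traces("q", g2))  # True
# After blocking b, ab+ac can deadlock after a, but a(b+c) cannot:
print(sorted(completed_traces("p", g1, {"b"})))  # ['a', 'ac']
print(sorted(completed_traces("q", g2, {"b"})))  # ['ac']
```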
Strong bisimulation equivalence on the other hand is a congruence for all
operators of ACP, CCS and CSP. This is a reason to use bisimulation rather
than partial or completed trace equivalence.
The congruence closure ~^{c} of ~ (w.r.t. a language L)
is defined by
P ~^{c} Q if and only if C[P] ~ C[Q]
for any L-context C[ ].
It can be shown that ~^{c} is the coarsest congruence finer than ~.
(Note that there are three statements in that last sentence.)
The congruence closure of strong completed trace equivalence can be
characterised as strong failures equivalence (not defined yet).
It is finer than strong completed trace equivalence, but coarser than
strong bisimilarity. A similar thing applies to weak equivalences.
Weak and branching bisimilarity fail to be congruences for the choice
operator (+) of ACP or CCS. A counterexample was presented in class.
There are two ways to remedy such a situation: to change the equivalence
(by taking instead its congruence closure) or to change the language
(by dropping the offending operator).
Weak bisimulation congruence, the congruence closure of weak bisimilarity,
is the default semantic equivalence used in the language CCS.
It can be characterised as rooted weak
bisimilarity:
two processes are weak bisimulation congruent iff, after applying
an operation (root-unwinding) that preserves strong bisimilarity and
removes any loops going through the initial states, they are
related by a rooted weak bisimulation, one that relates initial
states with initial states only.
Likewise, branching bisimulation congruence, the congruence closure of branching bisimilarity,
is the default semantic equivalence used in the language ACP.
It can be characterised as rooted branching bisimilarity.
The language CSP follows the other approach: it drops the offending
operator +. Its operational semantics is given here.
The syntax of CSP, the theory of Communicating Sequential Processes,
features the operations
- 0 (inaction, originally written STOP)
- a.P (action prefix, originally written a→P) for each visible action a
- P☐Q (external choice)
- P⊓Q (internal choice)
- P∥_{S}Q (parallel composition, enforcing
synchronisation over the set S of visible actions)
- P/a (concealment, originally written P\a) for each action a
The atomic actions of CSP consist of
all a, b, c, etc. from a given set A of
visible actions. The syntax of CSP does not provide for invisible
actions, although τ does occur in the process graph semantics of
CSP; it is introduced by internal choice and by concealment.
Here are the definitions, in terms of process graphs without
predicates and without a notion of final states, of the operators
mentioned above:

- 0 is the graph with one state and no transitions.
This one state is the initial state.
- a.G is obtained from G by adding a new state, which will be
the initial one, and a new a-labelled transition from this new
state to the initial state of G (which is no longer initial in a.G).
- G☐H is an operator that acts like a parallel composition
without synchronisation until a visible action from one of the two components
occurs, and from that moment onwards (internal or external) actions from
that component only are possible.
- G⊓H is defined to be τ.G + τ.H.
- G∥_{S}H is obtained by taking the Cartesian product of the states of
G and H; that is, the states of G∥_{S}H are pairs (s,t) with
s a state from G and t a state from H. The initial state
of G∥_{S}H is the pair of initial states of G and H. The transitions are
  - (s,t) -a-> (s',t) whenever a is not in S
and s -a-> s' is a transition in G,
  - (s,t) -a-> (s,t') whenever a is not in S
and t -a-> t' is a transition in H,
  - and (s,t) -a-> (s',t') whenever a is in S
and s -a-> s' is a transition in G
and t -a-> t' is a transition in H.
Unlike the parallel compositions of CCS and ACP, the parallel
composition of CSP not only allows, but also enforces synchronisation.
For this reason CSP does not need a separate restriction or
encapsulation operator.
- G/a is just G, but with all actions a renamed into τ.
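The product construction for ∥_S above can be sketched as follows (a sketch working on transition relations only, ignoring initial states; all state and action names are invented):

```python
# Product construction for the CSP parallel composition G ||_S H.
# Actions in `sync` must be performed by both components at once;
# all other actions interleave.
def csp_parallel(trans_g, trans_h, sync):
    states = [(s, t) for s in trans_g for t in trans_h]
    result = {st: set() for st in states}
    for (s, t) in states:
        for a, s2 in trans_g[s]:
            if a not in sync:
                result[(s, t)].add((a, (s2, t)))       # G moves alone
        for a, t2 in trans_h[t]:
            if a not in sync:
                result[(s, t)].add((a, (s, t2)))       # H moves alone
        for a, s2 in trans_g[s]:
            for b, t2 in trans_h[t]:
                if a == b and a in sync:
                    result[(s, t)].add((a, (s2, t2)))  # synchronised step
    return result

# a.b.0 ||_{b} b.0: a interleaves, b must synchronise.
g = {"g0": {("a", "g1")}, "g1": {("b", "g2")}, "g2": set()}
h = {"h0": {("b", "h1")}, "h1": set()}
p = csp_parallel(g, h, sync={"b"})
print(p[("g0", "h0")])  # {('a', ('g1', 'h0'))}
print(p[("g1", "h0")])  # {('b', ('g2', 'h1'))}
```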
Tuesday 24-4-2018
LTL and CTL are defined on Kripke structures,
transition systems of which the states rather than the transitions are labelled.
Traditionally it is required that each state has at least one outgoing transition.
The state labels are atomic propositions; a state could be labelled with zero, one or more of them.
Write $s \models \phi$ if the temporal logic formula $\phi$ (in CTL or LTL) holds for the state $s$.
To apply CTL or LTL to labelled transition systems we need a translation $\eta$ that maps
(states in) labelled transition systems to (states in) Kripke structures. We then define
validity of an LTL or CTL formula $\phi$ on a state $p$ in an LTS by $$p \models_\eta \phi \Leftrightarrow \eta(p) \models \phi$$
A suitable translation $\eta$ introduces a new state halfway along each
non-τ transition, and transfers the transition label to that state.
Two processes (states in a deadlock-free LTS) satisfy the same LTL
formulas if they have the same infinite traces. For finitely branching
processes, we even have "iff".
Two processes (states in a deadlock-free LTS) satisfy the same LTL_{-X}
formulas if they have the same weak infinite traces (obtained by
leaving out τ's).
Two processes (states in a deadlock-free LTS) satisfy the same CTL
formulas if they are bisimilar. For finitely branching processes, we even have "iff".
For general processes, we also have "iff", provided that we use CTL
with infinite conjunctions.
Two processes (states in a deadlock-free LTS) satisfy the same CTL_{-X}
formulas if they are branching bisimilar with explicit divergence.
We even have "iff", provided that we use CTL_{-X}
with infinite conjunctions.
I presented another characterisation of weak bisimulation congruence
=_{WBC} (the congruence closure of weak bisimilarity):
Two processes P and Q are RWB equivalent iff
- whenever P -a-> P' then Q ==> -a-> ==> Q' and P' =_{WB} Q'
- and whenever Q -a-> Q' then P ==> -a-> ==> P' and P' =_{WB} Q'.
Crucial here is that the mimicked a-step is not optional.
RWB equivalence equals weak bisimulation congruence; it is finer,
i.e. more discriminating, than BB equivalence,
and it is a congruence for all operators of ACP, CCS and CSP.
The congruence closure of branching bisimilarity can be obtained in a
similar way: two processes are branching bisimulation congruent
iff, after applying
an operation (root-unwinding) that preserves strong bisimilarity and
removes any loops going through the initial states, they are
related by a rooted branching bisimulation, one that relates initial
states with initial states only.
It turned out that two processes P and Q are RBB equivalent iff
- whenever P -a-> P' then Q -a-> Q' and P' =_{BB} Q'
- and whenever Q -a-> Q' then P -a-> P' and P' =_{BB} Q'.
Then I presented the Hennessy-Milner logic (HML), following
this handout.
(In the section "modal logic" take polymodal logic with an empty set
of propositions.)
The syntax of HML is given by:
\[\varphi,\psi ::= {\color{green} \top} \mid {\color{red} \bot} \mid
\color{green} \varphi \color{green}\wedge \color{green}\psi \mid \color{red} \varphi \color{red}\vee \color{red}\psi \mid
\neg\varphi \mid
\color{green} \langle \color{green}\alpha \color{green}\rangle \color{green}\varphi \mid
\color{red}[\color{red}\alpha\color{red}] \color{red}\varphi \]
Infinitary HML (HML$^\infty$) moreover has an infinitary conjunction $\bigwedge_{i\in I}\phi_i$.
By Theorems 1 and 2 in the handout, two processes are strong bisimulation equivalent iff they
satisfy the same infinitary HML-formulas. Thus, to show that two processes are not strongly
bisimilar, it suffices to find an infinitary HML-formula that is satisfied by one, but not by the other.
Moreover, if we know that at least one of the processes is image-finite, we do not need infinite conjunction.
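To make the use of HML concrete, here is a small satisfaction checker (a sketch, not course material, with an ad-hoc tuple encoding of formulas; the two example graphs encode a.(b+c) and a.b + a.c, which HML distinguishes):

```python
# HML satisfaction checker. trans maps a state to a set of
# (action, target) pairs. Formulas are tuples:
# ("true",), ("not", f), ("and", f, g),
# ("dia", a, f) for <a>f and ("box", a, f) for [a]f.
def sat(s, f, trans):
    tag = f[0]
    if tag == "true":
        return True
    if tag == "not":
        return not sat(s, f[1], trans)
    if tag == "and":
        return sat(s, f[1], trans) and sat(s, f[2], trans)
    if tag == "dia":  # <a>f: some a-successor satisfies f
        return any(a == f[1] and sat(t, f[2], trans) for a, t in trans[s])
    if tag == "box":  # [a]f: every a-successor satisfies f
        return all(a != f[1] or sat(t, f[2], trans) for a, t in trans[s])
    raise ValueError(tag)

# a.(b+c) versus a.b + a.c: <a>(<b>true and <c>true) separates them.
g1 = {"p": {("a", "p1")}, "p1": {("b", "p2"), ("c", "p3")},
      "p2": set(), "p3": set()}
g2 = {"q": {("a", "q1"), ("a", "q2")}, "q1": {("b", "q3")},
      "q2": {("c", "q4")}, "q3": set(), "q4": set()}
f = ("dia", "a", ("and", ("dia", "b", ("true",)),
                  ("dia", "c", ("true",))))
print(sat("p", f, g1), sat("q", f, g2))  # True False
```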
Above, the red connectives and modalities are redundant, because they are expressible in the green
connectives and negation. Likewise, the green connectives are expressible in terms of the red ones
and negation. Finally, if we keep both the red and the green connectives and modalities, then
negation is redundant. It is the latter form we need to extend HML into the modal $\mu$-calculus.
The modal $\mu$-calculus additionally has fixed-point operators $\mu X.\phi(X)$ and $\nu X.\phi(X)$. Here $\phi(X)$ is a modal
$\mu$-calculus formula that may contain the variable $X$.
$\mu X.\phi(X)$ denotes the least fixed point of the equation $X=\phi(X)$, and $\nu X.\phi(X)$ the greatest.
For these fixed points to exist, the formulas $\phi(X)$ need to be monotone in $X$, and to achieve
that we needed to eliminate negation.
A formal denotational semantics of the modal $\mu$-calculus can be found on
Wikipedia.
There, negation is not eliminated, but instead a variable needs to be under an even number of
negation operators. This treatment is equivalent.
Tuesday 1-5-2018
Last week we saw how CTL and LTL could be
applied to analyse properties of systems given as process graphs, or
states in a labelled transition system. This involved a translation
η from LTSs to Kripke structures. Since Kripke structures are
defined to be total, meaning that each state must have at least
one outgoing transition, we could only analyse systems (given as
process graphs) that are deadlock-free.
Today I extended this approach to LTSs that may
have deadlocks (states without outgoing transitions).
There are several approaches found in the literature:
- Translate any LTS (possibly with deadlock) into a Kripke
structure without deadlock by adding a self-loop to any state in the
Kripke structure that otherwise would be a deadlock.
Then use the standard interpretation (in terms of infinite paths)
of the LTL or CTL operators on deadlock-free Kripke structures.
- Stick to the translation defined earlier, thereby obtaining
Kripke structures that could have deadlocks.
Change the interpretation of LTL and CTL by using complete
paths instead of infinite paths for the interpretation of formulas.
The complete paths are the infinite paths, together with all finite
paths ending in a deadlock state.
- Translate any LTS (possibly with deadlock) into a Kripke
structure without deadlock by adding a new state, labelled by a fresh
proposition δ (deadlock) and carrying a self-loop, together with a
transition to this state from any state in the
Kripke structure that otherwise would be a deadlock.
Then use the standard interpretation (in terms of infinite paths)
of the LTL or CTL operators on deadlock-free Kripke structures.
Now we have the following extensions of last week's results:
Two processes $P$ and $Q$ (in an arbitrary LTS) satisfy the same LTL
formulas if P =_{CT}^{∞} Q.
For finitely branching processes, we even have "iff".
Two processes $P$ and $Q$ (in an arbitrary LTS) satisfy the same LTL_{-X}
formulas if P =_{WCT}^{∞} Q, that is,
if they have the same (possibly infinite) weak completed traces (obtained by
leaving out τ's).
Two processes $P$ and $Q$ (in an arbitrary LTS) satisfy the same CTL
formulas if they are strongly bisimilar. For finitely branching processes, we even have "iff".
For general processes, we also have "iff", provided that we use CTL
with infinite conjunctions.
Two processes $P$ and $Q$ (in an arbitrary LTS) satisfy the same CTL_{-X}
formulas if they are branching bisimilar with explicit divergence.
We even have "iff", provided that we use CTL_{-X}
with infinite conjunctions.
The last result, however, only works with approach 3 above.
With approach 1 or 2 we would get a weaker form of branching bisimilarity with
explicit divergence, which would identify deadlock and livelock.
(Not the "weakly divergence preserving branching bisimilarity" defined below.)
Branching bisimilarity with explicit divergence ($BB^\Delta$),
also known as
divergence preserving branching bisimilarity is
the finest useful weak equivalence found in the literature. It
distinguishes all four modes of progress discussed
earlier: deadlock, livelock, possibly fair divergence and normal progress.
Branching bisimulation equivalence with explicit divergence
can be defined in terms of consistent colourings:
State s is internally divergent w.r.t. a colouring if there is an
infinite path of τ-transitions starting from s, only
passing through states with the same colour as s.
The colouring preserves internal divergence if for any colour either all
states of that colour are divergent w.r.t. the colouring, or none are.
Now two states are branching bisimulation equivalent with explicit
divergence if they receive the same colour by some consistent
colouring that preserves internal divergence.
(State s is divergent if there is an
infinite path of τ-transitions starting from s.
A colouring preserves divergence if for any colour either all
states of that colour are divergent, or none are.
Now two states are weakly divergence preserving branching bisimilar
if they receive the same colour by some consistent
colouring that preserves divergence.
I showed by a proof and a counterexample that this notion is strictly
weaker than divergence preserving branching bisimilarity.)
Below is an
equational axiomatisation of bisimulation equivalence on the language
of (very) basic CCS (only using 0, action prefixing and +).
The axioms for BCCS are
$$\begin{array}{cc}
(P + Q) + R = P + (Q + R) & \mbox{(associativity of the +)} \\
P + Q = Q + P & \mbox{(commutativity of the +)} \\
P + P = P & \mbox{(idempotence of the +)} \\
P + 0 = P & \mbox{(0 is a neutral element of the +)} \\
\end{array}$$
This set of axioms is sound, meaning that they only equate
bisimilar processes, and complete, meaning that each true
equation follows from the ones above.
Such equational axiomatisations all use the following rules
of equational logic:
$$P=P \qquad \frac{P=Q}{Q=P} \qquad \frac{P=Q \quad Q=R}{P=R} \qquad
\frac{P=Q}{C[P] = C[Q]}
\mbox{for any context } C[\ ].$$
Here, the equality signs = can be read as bisimilarity or any other
appropriate equivalence.
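As a small worked illustration (my own instance, not from the lecture), these rules and the axioms above derive
$$a.P + (0 + a.P) \;=\; a.P + (a.P + 0) \;=\; a.P + a.P \;=\; a.P,$$
using commutativity, neutrality of 0, and idempotence, in that order.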
If we want to axiomatise partial trace equivalence instead of
bisimilarity, there is an extra axiom
$$\alpha.(P+Q) = \alpha.P + \alpha.Q$$
Expansion Theorem for CCS: Let $P := \sum_{i\in I}\alpha_i.P_i$
and $Q := \sum_{j\in J}\beta_j.Q_j$. Then
$$P \mid Q = \sum_{i\in I} \alpha_i.(P_i \mid Q) + \sum_{j\in J} \beta_j.(P \mid Q_j) +
\sum_{\begin{array}[t]{c}i\in I, j \in J \\ \alpha_i =
\bar\beta_j\end{array}}\tau.(P_i \mid Q_j).$$
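For instance, instantiating the theorem with $P := a.0$ and $Q := \bar a.0$ (a made-up instance) gives
$$a.0 \mid \bar a.0 \;=\; a.(0 \mid \bar a.0) + \bar a.(a.0 \mid 0) + \tau.(0 \mid 0),$$
exhibiting the two interleavings and the synchronisation step.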
By means of this theorem, and a similar treatment of the restriction
and renaming operators of CCS,
any CCS expression (with only guarded recursion) can be rewritten into
a bisimulation equivalent CCS
expression of the form $\sum_{i\in I}\alpha_i.P_i$. Such an
expression is called a head normal form.
If there is no recursion at all, this process can be iterated, and any
recursion-free CCS expression
can be rewritten into a basic CCS expression: one built solely
from 0, $+$ and action prefixing.
The above complete equational axiomatisation of (strong)
bisimulation equivalence for
basic CCS now extends to a complete equational axiomatisation of
all of recursion-free CCS.
I also handed out 8 pages of the book "Process Algebra" by Baeten &
Weijland (1990). It shows a verification of the alternating bit protocol
in the process algebra ACP.
The handout uses equational reasoning (applying axioms and such)
to show that the implementation is rooted branching bisimilar to the
specification. This mode of proving things is called equivalence checking.
Monday 7-5-2018
Today, Joshua Lau gave a presentation on Algorithms for Strong
Bisimilarity on Finite Transition Systems.
Here are his slides. He gave a detailed
description of the partition refinement algorithm by Kanellakis & Smolka,
as improved by Paige & Tarjan. It has complexity $O(m\cdot(\log n + \log l))$, where
$m$ is the number of transitions, $n$ is the number of states, and $l$
is the number of different action labels in a finite system (when
assuming n < m+2, which can be achieved by collapsing deadlock states).
(Often $l$ is taken to be a constant, and not counted.)
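The idea behind partition refinement can be sketched naively in Python (a quadratic sketch, not the efficient Paige-Tarjan version; the example states and transitions are invented):

```python
# Naive partition refinement for strong bisimilarity. A sketch:
# start with one block, repeatedly split blocks according to which
# actions lead to which blocks, until the partition stabilises.
def bisimulation_classes(states, trans):
    states = list(states)
    block = {s: 0 for s in states}
    while True:
        sig = {s: (block[s],
                   frozenset((a, block[t]) for a, t in trans[s]))
               for s in states}
        ids, new_block = {}, {}
        for s in states:  # renumber blocks by first occurrence
            if sig[s] not in ids:
                ids[sig[s]] = len(ids)
            new_block[s] = ids[sig[s]]
        if new_block == block:
            return block
        block = new_block

# a.b versus a.b + a.b: the initial states are bisimilar.
trans = {"p": {("a", "p1")}, "p1": {("b", "p2")}, "p2": set(),
         "q": {("a", "q1"), ("a", "q2")}, "q1": {("b", "q3")},
         "q2": {("b", "q3")}, "q3": set()}
classes = bisimulation_classes(trans, trans)
print(classes["p"] == classes["q"])   # True
print(classes["p"] == classes["p1"])  # False
```

Because the signature of a state includes its own current block, the partition only ever refines, so the loop terminates after at most $n$ rounds.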
Recommended reading
The partition refinement algorithm for strong bisimilarity treated
earlier can be easily adapted to deal with branching bisimilarity.
I will fill in some details tomorrow.
Possibly read Sections 1, 2, 3 and 6 of
An O(m log n) Algorithm for Branching Bisimulation.
Or (better) Sections 1, 2, 3 and 7 of
the published version.
To decide weak bisimilarity for finite transition systems we first
compute the transitive closure: add a transition $p \stackrel{\alpha}\longrightarrow r$
whenever we have $p \stackrel{\tau}\longrightarrow q \stackrel{\alpha}\longrightarrow r$
or $p \stackrel{\alpha}\longrightarrow q \stackrel{\tau}\longrightarrow r$;
also add a $\tau$-loop at every state. After this transformation weak
bisimilarity coincides with strong bisimilarity, and we can use the algorithm to decide the latter.
But the complexity of performing the above transitive closure is
higher than that of partition refinement; the total complexity of
deciding weak bisimilarity is $O(l\cdot n^{2.376})$.
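The saturation step can be sketched as follows (a naive sketch that iterates to a fixed point, without the fast matrix-multiplication-based closure behind the complexity bound above; state and action names are invented):

```python
# Transitive-closure ("saturation") step for deciding weak bisimilarity.
# Add p -a-> r whenever p -tau-> q -a-> r or p -a-> q -tau-> r,
# plus a tau-loop at every state.
def saturate(states, trans, tau="tau"):
    t = set(trans) | {(s, tau, s) for s in states}
    changed = True
    while changed:
        changed = False
        for (p, x, q) in list(t):
            for (q2, y, r) in list(t):
                if q2 != q:
                    continue
                if x == tau:
                    new = (p, y, r)   # tau step first
                elif y == tau:
                    new = (p, x, r)   # tau step second
                else:
                    continue
                if new not in t:
                    t.add(new)
                    changed = True
    return t

trans = {("p", "tau", "q"), ("q", "a", "r")}
closure = saturate({"p", "q", "r"}, trans)
print(("p", "a", "r") in closure)  # True
```

The loop terminates because only finitely many triples over the given states and actions can ever be added.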
On Tuesday 1-5-2018 we discussed a complete equational axiomatisation
of strong bisimilarity on recursionfree CCS.
To axiomatise rooted weak bisimilarity we need three extra axioms:
$$\begin{array}{rl}
(T1) & \alpha.\tau.P = \alpha.P \\
(T2) & \tau.P = \tau.P + P \\
(T3) & \alpha.(\tau.P + Q) = \alpha.(\tau.P + Q) + \alpha.P\\
\end{array}$$
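As a small illustration (my own), (T1) together with idempotence of + derives
$$a.\tau.P + a.P \;=\; a.P + a.P \;=\; a.P.$$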
When using equational reasoning to verify that a specification is
equivalent to an implementation we use, in addition to the machinery
of equational logic, the recursive definition principle (RDP)
and the recursive specification principle (RSP).
RDP says that a system satisfies its recursive definition; RSP says
that two processes satisfying the same guarded recursive
equation must be equal. It holds for strong and rooted weak or
branching bisimilarity. These principles are applied in last week's
handout on the verification of correctness of the alternating bit protocol.
Tuesday 8-5-2018
The above axioms (T2) and (T3) describe a kind of transitive closure on the LTS: if we
have two transitions
p -τ-> q -a-> r or p -a-> q -τ-> r,
then we obtain weak bisimulation congruent systems by adding a
transition p -a-> r.
Delay bisimilarity and η-bisimilarity are semantic equivalences
intermediate between weak and branching bisimilarity, defined in the
handouts. Rooted
delay bisimilarity is axiomatised by (T1) and (T2); rooted
η-bisimilarity by (T1) and (T3). Rooted branching bisimilarity is
completely axiomatised by
$$\alpha.(\tau.(P+Q)+Q) = \alpha.(P+Q)$$
on top of the axioms for strong bisimilarity. This axiom is a
strengthening of (T1). There does not exist a congruence that is
axiomatised by (T1) plus the axioms of strong bisimilarity alone.
In this direction, rooted branching bisimilarity is as good as it gets.
I explained the partition refinement algorithm for branching
bisimilarity using the example LTS given by the CCS expression
$a.(b+\tau.(b+c)) + a.(c+\tau.(b+c)) + a.(b+\tau.c) + a.(c+\tau.b)$.
Upon minimisation up to BB, we ended up with a system of 7 states.
There are two crucial differences with the partition refinement
algorithm for strong bisimilarity:
- When checking whether a state in block B has an
$\alpha$-transition to block C, it is OK if we can move
through block B by doing only $\tau$-transitions, and then reach a
state with an $\alpha$-transition to block C.
- We never check $\tau$-transitions from block B to block B
($\tau$-transitions to another block are fair game, though).
The complexity is the same as for strong bisimilarity, and thus better
than for weak bisimilarity. To obtain this complexity we must maintain
a trickier data structure, which was not covered in class.
I sketched the space of semantic equivalence relations
between processes that to some extent take causal independence into
account. In one dimension semantic equivalences can be ordered
from linear-time to branching-time semantics.
Typical examples of linear-time semantics are partial trace and
completed trace semantics. Typical branching-time semantics are
strong bisimulation and branching bisimulation semantics.
The canonical equation that holds for
linear-time but not for branching-time semantics is a(b+c) = ab+ac.
Orthogonal to this classification is a spectrum that ranges
from interleaving semantics, in which parallelism is treated
by arbitrary interleaving of atomic actions, to causality respecting
semantics, in which causal relations between actions are explicitly
taken into account. A typical equation that holds in interleaving
semantics but not in causal semantics is a∥b = ab+ba.
It is rejected in causal semantics because in a∥b the two
actions a and b are causally independent, whereas
in ab+ba either a depends on b or vice versa.
A causal version of partial trace equivalence can be obtained by
taking partial orders of actions instead of sequences. This is why
causal semantics are sometimes called partial order semantics.
One argument to use non-interleaving semantics is real-time consistency.
If we imagine that actions take time to execute, interleaving
semantics may equate processes that have different running times (it
is not real-time consistent), whereas this cannot happen in
causal semantics. Another argument is action refinement:
we want two equivalent processes to remain equivalent after
systematically splitting all occurrences of an action a
into a1;a2, or refining a into any more complicated
process. This works for causal semantics, but not for interleaving semantics.
Monday 14-5-2018
Today Adrian Goldwaser gave a presentation on Petri nets.
Here are his slides.
After the break I added transition labels to nets, and presented a
denotational semantics of CCS in terms of labelled Petri nets,
skipping the treatment of recursion.
Then I continued the story from last Tuesday:
Between interleaving and causal semantics lies step semantics.
It differs from interleaving semantics by explicitly taking into
account the possibility that two concurrent actions a
and b by pure chance happen at the exact same (real-time) moment.
Here a∥b differs from ab+ba because only the former has
a trace where a and b happen at exactly the same time.
Between step semantics and causal semantics lies interval semantics.
It takes causality into account only to the extent that it manifests
itself by durational actions overlapping in time.
A map of the equivalences discussed above appears on page 3 of
this paper.
I indicated how causal trace semantics can be represented in terms
of pomsets, partially ordered multisets. The process
$a(b\|(c+de))$ for instance has two completed pomsets, which were
drawn in class. A normal trace can be regarded as a totally ordered
multiset of actions.
Tuesday 15-5-2018
I described the essence of safety properties, saying
that something bad will never happen, and liveness properties,
saying that something good will happen eventually.
The terminology "liveness" used here stems from Lamport 1977, who was
inspired by the older use of the term "liveness" in Petri nets, whose
meaning however is different. We formulated these properties in LTL and CTL.
Safety properties are often established by means of invariants,
properties of states that hold in the initial state, and are
preserved under doing a transition. In the formulation of invariants,
states are often determined by the values of certain variables, and
the invariants are formulated in terms of these variables. Whether a
"bad" outgoing transition occurs in a state also depends on the values
of those variables, and the invariants we need say that this never happens.
Whether liveness properties of systems hold often depends on whether
or not we make appropriate progress and fairness assumptions.
I discussed 4 such assumptions: progress, justness,
weak fairness and strong fairness. Please read page 7 of
this paper
for an informal description of these properties.
When assuming justness but not weak fairness, there are two bisimilar
systems with different liveness properties. Have a look at the section
"Process algebras without fairness assumptions, and their limitations"
in this paper.
Thus dealing with liveness properties under justness assumptions
(without assuming weak fairness) calls for a semantic equivalence
that distinguishes processes that are interleaving bisimilar.
In fact, this opens a new dimension in our lattice of equivalences.
An example illustrating the above 4 progress and fairness assumptions appears in Figure 4 of
this paper.
That net displays a shop with two clerks I and II and two returning
customers A and B. There are 4 interactions between the clerks and the
customers: I serves A, II serves A, I serves B, II serves B.
In this net there are many traces that describe possible complete runs
of the system. The stronger our fairness assumption, the fewer runs
are admissible. When assuming nothing, all partial traces model runs.
Assuming progress rules out all finite traces; now only
infinite traces model real completed runs. When assuming justness a
run that only contains "I serves A" is ruled out, because the
preconditions of "II serves B" are always met, and these tokens are
never used for anything else. However, a run that alternates between
"I serves A" and "II serves A" is still fine. Such a run is ruled out
by weak fairness however, as in each state "I serves B" is enabled,
yet customer B is never served. Weak fairness, however, allows the run
that alternates between "I serves A" and "I serves B", because "II
serves A" is not enabled in every state (due to customer A leaving
repeatedly). Yet, this run is ruled out by strong fairness.
This example deals with global fairness assumptions.
It is also possible to point at a Petri net (or some other system
description) and mark certain choices as fair, and others
as unfair; this gives local fairness assumptions.
Due to lack of time we do not treat those here.
The correctness of the alternating bit protocol (ABP) hinges on
a fairness assumption, saying that if you try to send a message
infinitely often, sooner or later it will come through correctly.
Without relying on such an assumption, the protocol is obviously
incorrect, as there is no way to ensure correct message delivery.
Fairness assumptions are important in verification. If we refuse to
make them we declare any protocol like the ABP a priori incorrect, and
miss an opportunity to check the protocol logic for flaws that have
nothing to do with the unreliable nature of the communication channel.
A strong fairness assumption is already incorporated in the notions of
(rooted) branching (or weak) bisimulation equivalence seen before.
The notions of (rooted) branching (or weak) bisimulation equivalence
with explicit divergence do not make fairness or justness assumptions.
A progress assumption is built into the logics CTL and LTL (for they
work with infinite paths rather than partial paths).
My personal opinion is that it is often reasonable to assume (global)
progress. In the absence of that, no nontrivial liveness property can
be proven. When assuming progress I find it reasonable to also assume
justness (globally). However, the use of global (weak or strong)
fairness assumptions should not be our default; they should be used only
if for some reason we are convinced that such an assumption is
warranted, or because we can only obtain a meaningful liveness
property conditionally on making such an assumption.
Course evaluation
Please fill in the
course and instructor evaluation forms.
(The school holds it against the instructor if there are not enough responses.)
Monday 21-5-2018
Today, QuacAn Ho spoke on
Safety and Liveness properties in labelled transition systems,
based on the first 9 pages of this paper.
Here are his slides.
After the break, Patrick Gilfillan presented Chu spaces.
Here are his slides.
In the last 10 minutes I presented "reactive CTL/LTL".
A few weeks ago we saw how to interpret CTL and
LTL on process graphs (or states in labelled transition systems) with deadlocks.
We discussed 3 methods to deal with deadlock. In the context of CTL or
LTL minus the next-state operator, the 3rd method was the best.
But when using full CTL or LTL, with the next-state operator, the 2nd
method is just as good, and this is what I used today.
In a deadlock state in a Kripke structure (a state without outgoing
transitions), the CTL formula AX false, i.e.
$\neg$EX true, holds, and in a
state with an outgoing transition it does not hold. Hence this formula
can be used to describe deadlock.
We use the De Nicola-Vaandrager translation from process graphs
to Kripke structures.
Formally, a process graph is a triple (S,I,→), with S a set
(of states), I∈S an initial state,
and → a set of triples (s,a,t) with s,t
∈S and a an action, drawn from a set Act.
The associated Kripke structure is $(S',I,\rightarrow',\models)$,
where $S' := S \cup\{(s,a,t)\in{\rightarrow}\mid a\neq \tau\}$,
and ${\rightarrow'} := \{(s,(s,a,t)),((s,a,t),t) \mid (s,a,t)\in S'\}
\cup \{(s,t)\mid (s,\tau,t) \in {\rightarrow}\}$. This Kripke
structure has the same states and transitions as the process graph,
except that a new state $(s,a,t)$ is added halfway along a transition from
$s$ to $t$ labelled $a\neq\tau$. As atomic propositions we take the
set Act of actions, except for $\tau$, and declare that a state
satisfies the atomic proposition $a\in Act$ iff that state has the
form $(s,a,t)$. So $r \models a$ iff $r=(s,a,t)$.
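The translation just defined can be sketched in Python (a sketch, not course material; the example LTS is invented, and the string `"tau"` stands for τ):

```python
# De Nicola-Vaandrager translation of an LTS into a Kripke structure.
# Each non-tau transition (s, a, t) becomes a fresh intermediate state
# labelled with the atomic proposition a; tau-transitions become
# unlabelled Kripke transitions, and tau is not a proposition.
def dnv(states, trans, tau="tau"):
    k_states = set(states) | {tr for tr in trans if tr[1] != tau}
    k_trans = set()
    labels = {}
    for (s, a, t) in trans:
        if a == tau:
            k_trans.add((s, t))
        else:
            mid = (s, a, t)          # the new halfway state
            k_trans.add((s, mid))
            k_trans.add((mid, t))
            labels[mid] = {a}
    return k_states, k_trans, labels

states = {"p", "q", "r"}
trans = {("p", "a", "q"), ("q", "tau", "r")}
ks, kt, lab = dnv(states, trans)
print(len(ks))           # 4: p, q, r plus the state for (p, a, q)
print(("q", "r") in kt)  # True: the tau-transition stays a direct edge
```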
Now we consider all possible subsets $B$ of $Act\setminus\{\tau\}$.
$B$ contains the actions that may be blocked by the environment.
We define a $B$-complete path of a process graph as a path that is
either infinite or ends in a state of which all outgoing transitions
have a label from $B$. A $B$-complete path in a Kripke structure that
arises as the De Nicola-Vaandrager translation of a process graph is a
path that ends in a state, all of whose successor states are labelled
with atomic propositions from $B$. Thus, the De Nicola-Vaandrager
translation maps $B$-complete paths in process graphs to $B$-complete
paths in the associated Kripke structures.
An LTL formula $\phi$ is said to hold for the Kripke structure
$K$ when the actions from $B$ might be blocking, notation $K \models_B \phi$,
iff each $B$-complete path from the initial state of $K$ satisfies $\phi$.
Likewise, $\models_B$ can be defined for CTL by simply replacing all
references to infinite paths by $B$-complete paths.
As an example, consider a vending machine that takes in a coin $c$ and
produces a pretzel $p$. We assume that taking in the coin requires
cooperation from the environment, but producing the pretzel does not.
A process algebraic specification is $$VM=c.p.VM$$
In old-fashioned LTL we have $VM \models G(c\Rightarrow Fp)$.
This formula says that whenever a coin is inserted, eventually a
pretzel is produced. This formula is intuitively true indeed.
But we also have $VM \models G(p\Rightarrow Fc)$.
This formula says that whenever a pretzel is produced, eventually a new
coin will be inserted. This formula is intuitively false.
This example shows that old-fashioned LTL is not suitable for describing
the behaviour of this vending machine. Instead we use reactive LTL,
and take $B=\{c\}$. This choice of $B$ says that the environment may
block the action $c$, namely by not inserting a coin; however, the
environment may not block $p$. As intuitively expected, we have
$VM \models_B G(c\Rightarrow Fp)$ but $VM \not\models_B G(p\Rightarrow Fc)$.
Note that the old-fashioned LTL/CTL interpretation $\models$ is simply
$\models_\emptyset$, obtained by taking the empty set of blocking actions.
There are 3 ways to incorporate the above idea.

- One is to pick a set $B$ that is fixed for a given application.
In case of the vending machine above, we take $B=\{c\}$.
Then we can leave out the subscript $B$ from $\models$.

- Alternatively, we use a specification of our system that is neutral on
which actions are blocking, and instead use subscripts $B$ with $\models$.
A formula $P \models_B \phi$ then simply says that "under the
assumption that actions from $B$ may block, process $P$ satisfies
formula $\phi$". Thus, information about blocking is taken aboard only
when writing down the validity of temporal formulae.
- The third option is to stick to old-fashioned CTL or LTL, and
instead add a $\tau$-loop to each state that intuitively has only
blocking outgoing actions. This approach says that we assume that
any system will make progress, unless we explicitly indicate that it
can stay in a state by means of a $\tau$-loop.
In this approach our vending machine would have to be adapted to
$VM=\tau.VM + c.p.VM$.
Tuesday 22-5-2018
Today Ray Li talked about failures semantics.
Here are his slides.
This was the last lecture of this course.