# Course notes for COMP3152/9152: Comparative Concurrency Semantics

This file will keep track of the material covered so far, and list the handouts distributed to date.

## Thursday 1-3-2012

Today I introduced the topics to be treated in this course: formalising specifications as well as implementations of concurrent systems, studying the criteria for deciding whether an implementation meets a specification, and techniques for proving this.

Both specifications and implementations can be represented by means of models of concurrency such as labelled transition systems (LTSs). I presented the definition of LTSs; it also appears in the first section of this handout.
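To make this concrete, an LTS fits in a few lines of Python; the sketch below (the state and label names are my own illustration, not from the handout) stores the transition relation as a set of triples.

```python
# A labelled transition system (LTS) as a simple Python structure:
# a set of states, a set of action labels, a transition relation
# (a subset of states x labels x states), and an initial state.

lts = {
    "states": {"s0", "s1", "s2"},
    "labels": {"a", "b", "c"},
    "transitions": {("s0", "a", "s1"), ("s1", "b", "s2"), ("s1", "c", "s2")},
    "initial": "s0",
}

def successors(lts, state, label):
    """All states reachable from `state` by one `label`-transition."""
    return {t for (s, l, t) in lts["transitions"] if s == state and l == label}
```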

Another way to represent concurrent systems is as process algebraic recursive equations. And specifications can be given by means of temporal logic, a formalisation of English-language specifications telling what should happen when.

I illustrated how an implementation could be given as parallel composition of components, enforcing synchronisation on certain actions. In general, parallel composition of many components gives rise to the state explosion problem. One way to combat it is by means of compositionality: proving things for each component separately and lifting the result to the entire parallel composition by application of some general theorem.

To decide whether an implementation meets a specification, we use equivalence relations and preorders.

## Tuesday 6-3-2012

Today I repeated the introduction to labelled transition systems. By Thursday you should all do the first homework to get familiar with this notion.

Specifications and implementations can not only be represented by LTSs or other models of concurrency, but also by process algebraic expressions, where complex processes are built up from constants for atomic actions using operators for sequential, alternative and parallel composition. The most popular process algebraic languages from the literature are CCS (the Calculus of Communicating Systems), CSP (Communicating Sequential Processes) and ACP (the Algebra of Communicating Processes). Today I started explaining the syntax of ACP, focusing on the partially synchronous parallel composition operator. I will continue with that on Thursday. The corresponding textbook is

J.C.M. Baeten & W.P. Weijland (1990): Process Algebra, Cambridge University Press

or

W.J. Fokkink (2000): Introduction to Process Algebra, Texts in Theoretical Computer Science, An EATCS Series, Springer.

1. ε (successful termination)
2. δ (deadlock)
3. a (action constant) for each action a.
4. P.Q (sequential composition)
5. P+Q (summation, choice or alternative composition)
6. P||Q (parallel composition)
7. ∂H(P) (restriction, or encapsulation) for each set of visible actions H.
8. τI(P) (abstraction) for each set of visible actions I.
The atomic actions of ACP consist of all a, b, c, etc. from a given set A of visible actions, and one special action τ, which is meant to be internal and invisible to the outside world. For each application, a partial communication function γ: A×A → A is chosen that tells for each two actions a and b whether they synchronise (namely when γ(a,b) is defined), and if so, what the result of their synchronisation is: γ(a,b). The communication function is required to be commutative and associative. The invisible action cannot take part in synchronisations.

Here are the definitions of these operators, in terms of process graphs with a single predicate ✔ that signals successful termination.

• ε is the graph with one state and no transitions. This one state is the initial state. It is marked with ✔.
• δ is the graph with one state and no transitions. This one state is the initial state. It is not marked as terminating.
• G.H is the process that first performs G, and upon successful termination of G proceeds with H.
It is homework to write down a formal definition.
• G+H is obtained by taking the union of copies of G and H with disjoint sets of states, and adding a fresh state root, which will be the initial state of G+H. For each transition IG --a--> s in G, where IG denotes the initial state of G, there will be an extra transition root --a--> s, and likewise, for each transition IH --a--> s in H, where IH denotes the initial state of H, there will be an extra transition root --a--> s.
root is labelled with ✔ if either IG or IH is.
• G||H is obtained by taking the Cartesian product of the states of G and H; that is, the states of G||H are pairs (s,t) with s a state from G and t a state from H. The initial state of G||H is the pair of initial states of G and H. The transitions are
• (s,t) --a--> (s',t) whenever s --a--> s' is a transition in G,
• (s,t) --a--> (s,t') whenever t --a--> t' is a transition in H, and
• (s,t) --c--> (s',t') whenever s --a--> s' is a transition in G and t --b--> t' is a transition in H, and γ(a,b)=c.
Intuitively, G||H allows all possible interleavings of actions from G and actions from H. In addition, it enables actions to synchronise with their communication partners.
• ∂H(G) is just G, but with all transitions labelled by actions in H omitted.
It is used to remove the remnants of unsuccessful communication, so that the synchronisation enabled by parallel composition is enforced.
• τI(G) is just G, but with all actions in I renamed into τ.
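As an illustrative sketch (my own encoding, not from the course material), the parallel composition G||H of two process graphs, with the communication function γ given as a Python dict, can be computed directly from the three clauses above:

```python
# ACP parallel composition G||H on process graphs.  A graph is a dict
# with "states", "transitions" (triples) and "initial"; gamma is the
# partial communication function, encoded as a dict on pairs of actions.

def par(G, H, gamma):
    states = {(s, t) for s in G["states"] for t in H["states"]}
    trans = set()
    for (s, a, s2) in G["transitions"]:
        for t in H["states"]:
            trans.add(((s, t), a, (s2, t)))          # G moves alone
    for (t, a, t2) in H["transitions"]:
        for s in G["states"]:
            trans.add(((s, t), a, (s, t2)))          # H moves alone
    for (s, a, s2) in G["transitions"]:
        for (t, b, t2) in H["transitions"]:
            if (a, b) in gamma:                      # synchronisation
                trans.add(((s, t), gamma[(a, b)], (s2, t2)))
    return {"states": states, "transitions": trans,
            "initial": (G["initial"], H["initial"])}
```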

Then I defined completed trace equivalence, and mentioned bisimulation equivalence. I illustrated their differences with the example a(b+c) versus ab+ac. These two processes are completed trace equivalent, but they will turn out not to be bisimulation equivalent.

## Thursday 8-3-2012

Today I explained the notion of a denotational semantics of a system description language like ACP, and treated the denotational semantics of ACP given above. In the tutorial we translated back and forth between process graphs and recursive specifications in ACP.

## Tuesday 13-3-2012

I made the definition of completed traces more precise. For a process graph without a termination predicate, a finite completed trace goes from the initial state to any state from which no further moves are possible. We write CTfin(p) for the set of finite completed traces of process p, and CT(p) for the set of all finite completed traces of p together with its infinite traces. When writing just CT(p), it depends on the context which of the two is meant (this differs from one paper to the next).

Partial traces are defined likewise, but can end anywhere. We write PTfin(p) for the set of finite partial traces of process p, and PT(p) for the set of all finite partial traces of p together with its infinite traces.

On processes that are equipped with a termination predicate ✔ on states (like the accepting states from automata theory), partial traces and completed traces are defined in such a way that we could just as well have thought about the termination predicate that holds in a state s as a special transition marked ✔ that goes from s to a fresh state that is not connected in any other way. Thus, for example, CT(a.(ε+b.δ)) = {a, ab}.
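A minimal sketch of this definition in Python (my own encoding, not from the handout; following the example above, the ✔-step itself is not recorded in the trace):

```python
# Finite completed traces of a finite acyclic process graph with a
# termination predicate.  A run may stop in any state that is marked
# terminating or has no outgoing transitions, as in the notes' example
# CT(a.(ε+b.δ)) = {a, ab}.

def completed_traces(trans, initial, terminating):
    def outgoing(s):
        return [(a, t) for (q, a, t) in trans if q == s]
    result = set()
    stack = [(initial, "")]
    while stack:
        s, trace = stack.pop()
        if terminating(s) or not outgoing(s):
            result.add(trace)          # the run can stop here
        for (a, t) in outgoing(s):
            stack.append((t, trace + a))
    return result
```

Applied to the graph of a.(ε+b.δ), with its middle state marked ✔, this yields exactly {"a", "ab"}.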

Besides the denotational semantics of ACP presented above, I also gave an operational semantics of that language. The foundations of structural operational semantics are presented in this handout. I also introduced the language CCS and its operational semantics.

The syntax of CCS, the Calculus of Communicating Systems, features the operations

1. 0 (inaction)
2. a.P (action prefix) for each action a.
3. P+Q (summation, choice or alternative composition)
4. P|Q (parallel composition)
5. P\a (restriction) for each action a.
The atomic actions of CCS consist of all a, b, c, etc. from a given set A, together with the complementary actions ā for each a in A, and one special action τ, which is meant to be internal and invisible to the outside world. By convention, putting an overbar over a complementary action ā yields a again.

Here are the definitions, in terms of process graphs without predicates and without a notion of final states, of the operators mentioned above:

• 0 is the graph with one state and no transitions. This one state is the initial state.
• a.G is obtained from G by adding a new state, which will be the initial one, and a new a-labelled transition from this new state to the initial state of G (which is no longer initial in a.G).
• G+H is obtained by taking the union of copies of G and H with disjoint sets of states, and adding a fresh state root, which will be the initial state of G+H. For each transition IG --a--> s in G, where IG denotes the initial state of G, there will be an extra transition root --a--> s, and likewise, for each transition IH --a--> s in H, where IH denotes the initial state of H, there will be an extra transition root --a--> s.
We can think of this operator as unwinding the initial states (roots) of G and H, and then gluing these roots together to form the new initial state of G+H.
• G|H is obtained by taking the Cartesian product of the states of G and H; that is, the states of G|H are pairs (s,t) with s a state from G and t a state from H. The initial state of G|H is the pair of initial states of G and H. The transitions are
• (s,t) --a--> (s',t) whenever s --a--> s' is a transition in G,
• (s,t) --a--> (s,t') whenever t --a--> t' is a transition in H, and
• (s,t) --τ--> (s',t') whenever s --a--> s' is a transition in G and t --ā--> t' is a transition in H.
Intuitively, G|H allows all possible interleavings of actions from G and actions from H. In addition, it enables actions a to synchronise with their complementary actions ā; the result of such a synchronisation is the internal action τ.
• G\a is just G, but with all actions a and ā omitted.
It is used to remove the remnants of unsuccessful communication, so that the synchronisation enabled by parallel composition is enforced.
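The parallel composition G|H described above can be sketched in Python as follows; representing the overbar by a `_bar` suffix is my own encoding, not standard CCS notation.

```python
# CCS parallel composition on process graphs.  Complementary actions
# are modelled by `bar`, mapping a to a-bar and back; a synchronisation
# of an action with its complement yields the internal action tau.

def bar(a):
    """Complement of a visible action: a <-> a_bar."""
    return a[:-4] if a.endswith("_bar") else a + "_bar"

def ccs_par(G, H):
    trans = set()
    for (s, a, s2) in G["transitions"]:
        for t in H["states"]:
            trans.add(((s, t), a, (s2, t)))              # G moves alone
    for (t, a, t2) in H["transitions"]:
        for s in G["states"]:
            trans.add(((s, t), a, (s, t2)))              # H moves alone
    for (s, a, s2) in G["transitions"]:
        for (t, b, t2) in H["transitions"]:
            if a != "tau" and b == bar(a):               # a meets a-bar
                trans.add(((s, t), "tau", (s2, t2)))     # result is tau
    return {"states": {(s, t) for s in G["states"] for t in H["states"]},
            "transitions": trans,
            "initial": (G["initial"], H["initial"])}
```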

## Thursday 15-3-2012

We have treated the first paragraph of the introduction as well as Definition 1 from the handout on structural operational semantics. In addition we practiced the formal derivation of transitions out of the transition rules of the structural operational semantics of a language. While doing so I illustrated the role of the encapsulation operators ∂H(P) of ACP. Formally there is one such operator for every subset H of Act.

# (Lectures given by Peter Höfner)

## Tuesday 20-3-2012

Today I showed how to design a process algebra for routing protocols for wireless mesh networks (WMNs). I explained some background on WMNs, showed their importance, and set up the requirements (e.g., the need to model a highly dynamic topology).

Essential features for the process algebra are

• conditional unicast
• data structures and data handling
Today I set up the syntax for the process algebra and developed parts of the SOS inference rules. In particular, we discussed sequential processes, which offer a syntax similar to an imperative program. Major primitives are broadcast, groupcast, unicast, non-deterministic choice, sequential composition and assignment.

On Thursday we will discuss the remaining rules. The focus was on how to model data structures in a process algebra. The lecture followed the first part of Section 4 of a technical report.

## Thursday 22-3-2012

In today's lecture I continued to present and derive a process algebra for wireless mesh networks. In particular I discussed and derived structural operational semantics for parallel processes and for network expressions ip:P:R.

The content of the lecture can again be found in Section 4 of a technical report. Unfortunately I could not finish the presentation of all inference rules. They will be presented in the next lecture.

## Organisation

Please note that the next lecture is April 5th, 12:00pm.

## Thursday 05-4-2012

In today's lecture I continued to present and derive a process algebra for wireless mesh networks. In particular I discussed and derived structural operational semantics for network expressions.

The content of the lecture can again be found in Section 4 of a technical report.

Lessons to be learned about the process algebra for wireless mesh networks:

• data can be included into process algebraic approaches
• process algebras like CSP and ACP can be extended to tackle "real-life problems"
• the sos-rules of the presented algebra are closely related to the ones of CSP (e.g. synchronisation)
• how to write specifications using our process algebra

## Tuesday 10-4-2012

This week we discussed Büchi automata (a.k.a. ω-automata).
Formally, a Büchi automaton is a tuple A = (Q,Σ,Δ,q0,F) that consists of the following components:

• Q is a finite set. The elements of Q are called the states of A.
• Σ is a finite set called the alphabet of A.
• Δ: Q × Σ → 2^Q is a transition function.
• q0 is an element of Q, called the initial state.
• F ⊆ Q is the acceptance condition. A accepts exactly those runs in which at least one of the infinitely often occurring states is in F. A language accepted by some Büchi automaton is called ω-regular.

In a deterministic Büchi automaton, the transition function Δ is replaced with a function δ: Q × Σ → Q that returns a single state. Note that deterministic and non-deterministic Büchi automata are not equally expressive. We discussed other acceptance criteria, such as those of Muller and Rabin. Moreover, we saw that Büchi automata are closed under union ∪ and complementation. The proof of the latter is very complex and was not shown.
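As a small illustration (the encoding is mine, not from the lecture), here is a Büchi automaton as a Python dict, together with a check that an ultimately periodic "lasso" run — a finite prefix followed by a cycle repeated forever — is a valid accepting run; the states on the cycle are exactly those occurring infinitely often.

```python
# A Büchi automaton as a plain Python dict, and a check on lasso runs.

buchi = {
    "states": {"q0", "q1"},
    "alphabet": {"a", "b"},
    # Delta: Q x Sigma -> 2^Q (nondeterministic)
    "delta": {("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"},
              ("q1", "a"): set(), ("q1", "b"): {"q1"}},
    "initial": "q0",
    "accepting": {"q1"},   # F: must be visited infinitely often
}

def run_is_accepting(aut, states_prefix, states_cycle, word_prefix, word_cycle):
    """The run visits states_prefix, then states_cycle forever, while
    reading word_prefix followed by word_cycle repeated; the two state
    lists must match the two word lists in length.  Accepting iff the
    run is valid and some state of the cycle lies in F."""
    seq_states = states_prefix + states_cycle + [states_cycle[0]]
    seq_word = word_prefix + word_cycle
    valid = seq_states[0] == aut["initial"] and all(
        seq_states[i + 1] in aut["delta"].get((seq_states[i], seq_word[i]), set())
        for i in range(len(seq_word)))
    return valid and any(q in aut["accepting"] for q in states_cycle)
```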

## Thursday 12-4-2012

Today we proved that Büchi automata are closed under intersection ∩.
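The construction can be sketched roughly as follows; this is the standard two-phase product (my own illustrative encoding, which may differ in details from the version shown in class): a flag records whether we are currently waiting for an accepting state of the first or of the second automaton.

```python
from itertools import product

def buchi_intersection(A, B):
    """Intersection of two Büchi automata over the same alphabet.
    States are triples (q, r, i) with a phase flag i in {1, 2}:
    in phase 1 we wait for F1, in phase 2 for F2."""
    states = set(product(A["states"], B["states"], {1, 2}))
    delta = {}
    for (q, r, i) in states:
        for a in A["alphabet"]:
            targets = set()
            for q2 in A["delta"].get((q, a), set()):
                for r2 in B["delta"].get((r, a), set()):
                    if i == 1:
                        j = 2 if q in A["accepting"] else 1
                    else:
                        j = 1 if r in B["accepting"] else 2
                    targets.add((q2, r2, j))
            delta[((q, r, i), a)] = targets
    return {"states": states, "alphabet": A["alphabet"], "delta": delta,
            "initial": (A["initial"], B["initial"], 1),
            # hitting F1 in phase 1 infinitely often forces infinitely
            # many phase switches, hence infinitely many F2 visits too
            "accepting": {(q, r, i) for (q, r, i) in states
                          if i == 1 and q in A["accepting"]}}
```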

# (Lectures resumed by Rob van Glabbeek)

## Tuesday 17-4-2012

An equivalence relation =X is finer, or more discriminating, than an equivalence relation =Y, if the equivalence classes of =X are included in those of =Y. Then =Y is coarser or less discriminating; in formulas
p =X q ==> p =Y q.
So completed trace equivalence is finer than partial trace equivalence, and bisimulation equivalence is even finer. We drew a spectrum that orders equivalence relations on labelled transition systems by how fine they are, drawing the finer ones at the top. Bisimulation equivalence is at the top of that spectrum and partial trace equivalence near the bottom. Completed trace equivalence and simulation equivalence are somewhere in between. Bisimulation equivalence is called a branching time equivalence, because it takes into account at which point two different traces branch off. Partial and completed trace equivalence on the other hand are linear time equivalences, as they only focus on the linear executions of a process, without taking branching into account.

The definition of bisimulation equivalence is in the first handout. We also defined when one process simulates another. This is like bisimulation, but in the transfer property the initiative is always taken by the simulated process, and the matching move by the simulating process. Two processes are simulation equivalent if each simulates the other. This is not the same as bisimulation equivalence, for there we insist on a single relation that is at the same time a simulation in both directions. The processes ab+a(b+c) and a(b+c) simulate each other, but are not bisimulation equivalent.
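The largest simulation on a finite LTS can be computed by a simple fixpoint iteration, which makes examples like the one above mechanically checkable (an illustrative sketch, my own encoding):

```python
# Largest simulation relation on a finite LTS: start from all pairs
# and delete a pair (p, q) whenever some move of p cannot be matched
# by q, until nothing changes.  (p, q) in the result means q simulates p.

def largest_simulation(states, trans):
    def moves(s):
        return [(a, t) for (q, a, t) in trans if q == s]
    R = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in set(R):
            for (a, p2) in moves(p):
                # q must have a matching a-move into a surviving pair
                if not any((p2, q2) in R for (b, q2) in moves(q) if b == a):
                    R.discard((p, q))
                    changed = True
                    break
    return R
```

On the example ab+a(b+c) versus a(b+c), each initial state simulates the other, in line with the claim above.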

Equivalently, we can define a colouring to be consistent by saying that two states with the same colour should have the same coloured traces. Here a coloured trace is a trace interspersed with the colours of the states one travels through. It turns out that in order to make sure that two states have the same coloured traces, it suffices to check all traces with only one action in them.

On processes that are equipped with a termination predicate on states (like the accepting states from automata theory), partial traces and completed traces are defined in such a way that we could just as well have thought about the termination predicate that holds in a state s as a special transition marked ✔ that goes from s to a fresh state that is not connected in any other way. In addition, two processes are called language equivalent if they have the same traces leading to terminating states. In automata theory such traces are called words, accepted by s. The set of all words accepted by s is the language accepted by s. It turns out that language equivalence is coarser than partial trace equivalence, whereas completed trace equivalence is finer than partial trace equivalence.

My paper The Linear Time - Branching Time Spectrum I is a small encyclopedia of equivalence relations; language equivalence appears in Section 19.

## Thursday 19-4-2012

Today we explained the Hennessy-Milner Logic in depth.
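As a small illustration (the tuple encoding of formulas is my own, not from the lecture), here is a checker for basic HML formulas — tt, negation, conjunction and the diamond modality ⟨a⟩ — over a finite LTS. The formula ⟨a⟩(⟨b⟩tt ∧ ⟨c⟩tt) distinguishes a(b+c) from ab+ac, matching the earlier claim that these processes are not bisimulation equivalent.

```python
# Hennessy-Milner Logic checker.  Formulas are nested tuples:
#   ("tt",)           truth
#   ("not", f)        negation
#   ("and", f, g)     conjunction
#   ("dia", a, f)     diamond <a>f: some a-successor satisfies f

def sat(s, f, trans):
    op = f[0]
    if op == "tt":
        return True
    if op == "not":
        return not sat(s, f[1], trans)
    if op == "and":
        return sat(s, f[1], trans) and sat(s, f[2], trans)
    if op == "dia":
        return any(sat(t, f[2], trans)
                   for (q, a, t) in trans if q == s and a == f[1])
    raise ValueError("unknown operator: " + op)
```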

## Tuesday 25-4-2012

First I explained the partition refinement algorithm for deciding whether or not two states in an LTS are strong bisimulation equivalent. I handed out Section 3.1 of Chapter 6 of the Handbook of Process Algebra; it describes this algorithm.
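A naive version of partition refinement (an illustrative sketch, not the efficient algorithm from the handout) can be coded as follows: start with one block containing all states, and keep splitting blocks until all states in a block reach the same blocks by the same actions.

```python
# Partition refinement for strong bisimilarity on a finite LTS.
# Returns the final list of blocks; two states are strongly bisimilar
# iff they end up in the same block.

def bisimulation_classes(states, trans):
    def signature(s, block_of):
        # which blocks can s reach, and by which action?
        return frozenset((a, block_of[t]) for (q, a, t) in trans if q == s)
    blocks = [frozenset(states)]
    while True:
        block_of = {s: i for i, b in enumerate(blocks) for s in b}
        new_blocks = []
        for b in blocks:
            groups = {}
            for s in b:
                groups.setdefault(signature(s, block_of), set()).add(s)
            new_blocks.extend(frozenset(g) for g in groups.values())
        if len(new_blocks) == len(blocks):   # no block was split
            return new_blocks
        blocks = new_blocks
```

On the earlier example, the initial states of a(b+c) and ab+ac end up in different blocks, confirming that they are not strongly bisimilar.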

There are two runners R1 and R2 participating in the relay race, and while running they carry a stick. Runner 1 starts running, and at the halfway-point passes the stick to runner 2, who is waiting there. Then runner 2 runs the rest of the race and delivers the stick at the finish.
We can describe the behaviour of the two runners by the CCS expressions:

R1 = start.pass-on.0
R2 = pass-on.finish.0
Here start is the action of runner 1 starting the race,
pass-on is the action of runner 1 finishing at the halfway-point, where R1 passes the stick on to R2,
the pass-on in R2 stands for the complementary action (pass-on with an overbar), by which runner 2 accepts the stick from runner 1 and starts running,
and finish is the action of runner 2 arriving at the finish.
A CCS expression describing the whole interaction is
(R1 | R2) \ pass-on
We drew the corresponding process graph in class, and it turned out to be bisimilar to
start.τ.finish.0
However, from a very high level of abstraction, where we abstract from the relay in the relay race, and rather focus on the stick, which we see departing by the action start, and subsequently arriving at the finish by the action finish, we may want to specify the overall behaviour of this race as
start.finish.0
To formally prove that the implementation matches this specification, we need a semantic equivalence that abstracts from the internal action τ, and makes start.τ.finish.0 equivalent with start.finish.0. Such equivalences are called weak or abstract. In contrast, an equivalence that treats τ like any other action is strong or concrete.

The definitions of language equivalence, partial and completed trace equivalence and bisimulation equivalence that we encountered last week can all be regarded as definitions of strong partial trace equivalence, strong bisimulation, etc., as long as we treat τ like any other action. To arrive at a definition of weak partial trace equivalence, we define a weak partial trace of a process p as a strong partial trace of p from which all the occurrences of τ have been omitted. Now two processes are called weak partial trace equivalent if they have the same weak partial traces.
Weak language equivalence and weak completed trace equivalence can be defined in the same way. However, a definition of weak bisimulation is relatively tricky. As we saw by example, just contracting all τ-moves in a labelled transition system doesn't yield what we want to get. There are several, subtly different, approaches in the literature.
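A small sketch of the trace-based definition (my own encoding, for finite acyclic graphs): compute the strong partial traces and erase the τ's.

```python
# Weak partial traces: strong partial traces with all tau's omitted.
# Traces are strings of one-letter actions; tau is written "τ".

def partial_traces(trans, initial):
    """All finite partial traces of a finite acyclic process graph."""
    result, stack = {""}, [(initial, "")]
    while stack:
        s, tr = stack.pop()
        for (q, a, t) in trans:
            if q == s:
                result.add(tr + a)
                stack.append((t, tr + a))
    return result

def weak_partial_traces(trans, initial):
    """Erase the τ's, as in the definition above."""
    return {tr.replace("τ", "") for tr in partial_traces(trans, initial)}
```

On the relay-race example, start.τ.finish.0 and start.finish.0 have different strong partial traces but the same weak partial traces.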

On Thursday we will define one of these approaches, called branching bisimulation equivalence, in terms of consistent colourings. A research paper in which this notion is defined can be found here. All we will cover is the introduction to Section 3 (on page 7), half of Definition 4, and the statement of Lemma 3 (half a page in total).

The intuition is that for two states to be equivalent, it should be possible to provide a colouring of all states such that the two states in question get the same colour. This colouring should be consistent, which is a requirement that guarantees that all states with the same colour have the same possible abstract behaviours, and hence can be considered equivalent. This requirement is formalised by saying that two states with the same colour should have the same abstract coloured traces. Here a concrete coloured trace is a trace interspersed with the colours of the states one travels through, and an abstract coloured trace is made from it by removing all τ-steps between states of the same colour. It turns out to be the case that in order to make sure that two states have the same abstract coloured traces, it suffices to check all traces with at most one non-τ-action in it, or at most one change of colour.

As a consequence, τ-transitions between states with the same colour can be contracted without changing the behaviour of the system in any essential way. Such transitions are called inert. However, τ-transitions between states that necessarily have different colours may not be contracted, because those states have different potentials for future behaviours.

## Thursday 26-4-2012

As stated above, today we defined branching bisimulation equivalence in terms of consistent colourings. You should also understand the definition in terms of bisimulation relations between states; this is explained in the first handout.

A colouring of a labelled transition system is just a function that associates a colour with every state. (Thus a colouring induces an equivalence relation on states, and any equivalence relation induces a colouring.) Now given a colouring, the potentials of a state s of a certain colour are all pairs (a,C) with a an action and C a colour, such that s can do some inert transitions followed by a non-inert a-transition to a state with colour C. Here a transition is inert if it is labelled τ and moreover goes between two states of the same colour. Thus if (a,C) is a potential of a state s, then either a≠τ or C is not the colour of s. The colouring is consistent if all states of the same colour also have the same potentials. And two states are branching bisimulation equivalent iff there exists a consistent colouring that gives them the same colour.

In fact, the definition of a consistent colouring in terms of "potentials" is equivalent to the one in terms of "abstract coloured traces".
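The "potentials" formulation suggests a naive refinement procedure (an illustrative sketch, my own encoding): start with a monochrome colouring and keep splitting colours by potentials until the colouring is consistent.

```python
# Colour refinement for branching bisimilarity on a finite LTS.

def potentials(s, trans, colour):
    """Pairs (a, C): after some inert tau-steps from s, a non-inert
    a-transition reaches a state of colour C."""
    reach, todo = {s}, [s]          # states reachable by inert steps
    while todo:
        q = todo.pop()
        for (p, a, t) in trans:
            if p == q and a == "τ" and colour[t] == colour[s] and t not in reach:
                reach.add(t)
                todo.append(t)
    return frozenset((a, colour[t]) for q in reach for (p, a, t) in trans
                     if p == q and not (a == "τ" and colour[t] == colour[s]))

def branching_colours(states, trans):
    colour = {s: 0 for s in states}                  # start monochrome
    while True:
        # split each colour class by the potentials of its states
        sig = {s: (colour[s], potentials(s, trans, colour)) for s in states}
        palette = {v: i for i, v in enumerate(sorted(set(sig.values()), key=str))}
        new = {s: palette[sig[s]] for s in states}
        if len(set(new.values())) == len(set(colour.values())):
            return colour                            # consistent: done
        colour = new
```

For example, the graphs of a.τ.b.0 and a.b.0 get the same colour on their initial states, since the τ-step is inert.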

## Tuesday 1-5-2012

First I extended the partition refinement algorithm to deal with branching bisimulation. The key idea is that τ-moves between states within the same block are skipped over, whereas τ-moves between different blocks are treated as if τ were a normal action.

I also introduced weak bisimulation equivalence and discussed the difference with branching bisimulation equivalence. See the first handout.

Next I explained what it means for an equivalence relation to be a congruence for an operator, or, in other words, for the operator to be compositional for the equivalence. This is what makes formal verification of big systems given as parallel compositions feasible.

It is not hard to show that (strong) partial trace equivalence is a congruence for encapsulation. This follows because the partial traces of ∂H(P) are completely determined by the partial traces of the process P itself (namely by removing all actions from H). The same argument can be used to see that (strong or weak) partial trace equivalence is a congruence for all operators of CCS and ACP.

Partial trace equivalence is inadequate to deal with deadlock. Adding deadlock information to partial trace semantics yields completed trace semantics. However, completed trace equivalence is not a congruence for all operators. It fails for the restriction operator of CCS, which is the same as the encapsulation operator of ACP. Strong bisimulation equivalence on the other hand is a congruence for all operators of ACP, CCS and CSP. This is a reason to use bisimulation rather than partial or completed trace equivalence.

Yet, to be truly effective in the verification, we need a weak equivalence, one that abstracts from internal transitions. It turns out that neither weak nor branching bisimulation are congruences for the +-operator of CCS and ACP. This could be a reason for the language CSP not to use the +-operator, but instead an internal and external choice, to be explained next time. All operators of CSP turn out to be compositional for weak and branching bisimulation equivalence. The people behind CCS and ACP chose a different solution, namely to keep the +, but to change the bisimulations a bit to make them congruences for the +, and all the other operators.

Two processes P and Q are RBB (rooted branching bisimulation) equivalent if for every transition P --a--> P' there is a transition Q --a--> Q' such that P' and Q' are branching bisimulation equivalent, and vice versa.

So a rooted branching bisimulation acts like a strong bisimulation in the first step.

RBB equivalence is finer, i.e. more discriminating, than BB equivalence, and it is a congruence for all operators of ACP, CCS and CSP.

## Thursday 3-5-2012

Today I explained the infinitary version of the Hennessy-Milner Logic, HML. It uses abbreviations like ⟨a⟩ⁿφ for ⟨a⟩⟨a⟩ ... ⟨a⟩φ (n times), and ⋀_{n=1}^{∞} φ_n for an infinite conjunction.

Next, I presented the system description language CSP. The syntax of CSP, the theory of Communicating Sequential Processes, features the operations

1. 0 (inaction, originally written STOP)
2. a.P (action prefix, originally written a→P) for each visible action a.
3. P□Q (external choice)
4. P⊓Q (internal choice)
5. P||SQ (parallel composition, enforcing synchronisation over the set S of visible actions)
6. P/a (concealment, originally written P\a) for each action a.
The atomic actions of CSP consist of all a, b, c, etc. from a given set A of visible actions. The syntax of CSP does not provide for invisible actions, although τ does occur in the process graph semantics of CSP; it is introduced by internal choice and by concealment.

Here are the definitions, in terms of process graphs without predicates and without a notion of final states, of the operators mentioned above:

• 0 is the graph with one state and no transitions. This one state is the initial state.
• a.G is obtained from G by adding a new state, which will be the initial one, and a new a-labelled transition from this new state to the initial state of G (which is no longer initial in a.G).
• G□H is an operator that acts like a parallel composition without synchronisation until a visible action from one of the two components occurs, and from that moment onwards (internal or external) actions from that component only are possible.
• G⊓H is defined to be τ.G + τ.H.
• G||SH is obtained by taking the Cartesian product of the states of G and H; that is, the states of G||SH are pairs (s,t) with s a state from G and t a state from H. The initial state of G||SH is the pair of initial states of G and H. The transitions are
• (s,t) --a--> (s',t) whenever a is not in S and s --a--> s' is a transition in G,
• (s,t) --a--> (s,t') whenever a is not in S and t --a--> t' is a transition in H,
• and (s,t) --a--> (s',t') whenever a is in S and s --a--> s' is a transition in G and t --a--> t' is a transition in H.
Unlike the parallel compositions of CCS and ACP, the parallel composition of CSP not only allows, but also enforces synchronisation. For this reason CSP does not need a separate restriction or encapsulation operator.
• G/a is just G, but with all actions a renamed into τ.
I didn't cover the parallel composition above yet (this will happen on Tuesday), but I also presented a structural operational semantics of the internal and external choice operators. By means of the example that shows that the CCS +-operator is not compositional for branching bisimulation equivalence I showed how the CSP external choice behaves better in that regard.

## Tuesday 8-5-2012

The example of the relay race, presented earlier, is useful to illustrate a key difference between CCS, CSP and ACP. To express the interaction between the two runners, we need to do four things:
1. Parallel composition (modelled by a Cartesian product construction)
2. Allowing synchronisation (by adding diagonals to the product)
3. Enforcing synchronisation (by removing remnants of unsuccessful communication)
4. Abstraction (hiding of internal actions, often the result of synchronisation, by renaming them into τ)
In CSP the parallel composition ||S takes care of 1, 2 and 3, and there is a separate concealment operator to handle 4.
In CCS the parallel composition | takes care of 1, 2 and 4, and there is a separate restriction operator to handle 3.
In ACP the parallel composition || takes care of 1 and 2, and there are separate encapsulation and abstraction operators to handle 3 and 4, respectively.

## Thursday 10-5-2012

Today we discussed deadlock, livelock and (possibly fair) divergence, versus normal progress under a progress assumption. In weak partial trace semantics, deadlock, livelock, divergence and normal progress are all identified. In weak completed trace semantics as treated in this course (there also are different variants), deadlock, livelock and divergence are identified, and distinguished from normal progress. In CSP, as well as in David Walker's version of bisimulation semantics, livelock and divergence are identified, and distinguished from deadlock and normal progress, thereby implicitly assuming choices to be unfair (a safe worst case scenario). In weak and branching bisimulation semantics as used in CCS and ACP, choices are assumed to be fair, and fair divergence is identified with normal progress, and distinguished from livelock. Moreover, there is no distinction between deadlock and livelock. This helps in the verification of communication protocols, where you try to send a message as long as needed until it succeeds and assume it will eventually arrive correctly. I promised to present a variant of branching bisimulation next week, the branching bisimulation with explicit divergence, which is the finest useful weak equivalence found in the literature. It distinguishes all four modes of progress.

## Tuesday 15-5-2012

First I introduced a variant of branching bisimulation semantics that distinguishes all four phenomena mentioned above: deadlock, livelock, possibly fair divergence and normal progress. This branching bisimulation equivalence with explicit divergence can best be defined in terms of consistent colourings.

State s is divergent w.r.t. a colouring if there is an infinite path of τ-transitions starting from s, only passing through states with the same colour as s. The colouring preserves divergence if for any colour either all states of that colour are divergent w.r.t. the colouring, or none are. Now two states are branching bisimulation equivalent with explicit divergence iff they receive the same colour under some consistent colouring that preserves divergence.

Then I introduced Petri nets, and the way they model concurrent systems. I indicated how to define the main operators from CCS/CSP and ACP on Petri nets.

## Thursday 17-5-2012

Process algebras such as ACP come with complete axiomatisations to reason about processes without studying their semantics, i.e. the associated process graphs. The axioms are equations with variables that are true modulo bisimulation (or some other equivalence relation) for any processes filled in for the variables. A set of axioms is complete if every statement about two processes being equivalent can be derived from the equations. It is impossible to give a complete axiomatisation with only finitely many equations for languages like ACP, unless you add some auxiliary operators. Such operators are the left-merge and the communication merge. I showed how a parallel composition between two recursive processes could be eliminated in favour of choice and prefixing. More information can be found on Wikipedia.

## Tuesday 22-5-2012

Seminar presentations by Cheng, on Probabilistic ACP, and by Sandeep, on LTL and Büchi automata.

## Thursday 24-5-2012

Seminar presentation on modelling concurrency in LOTOS, by Marco.

## The final exam is on Friday June 15, 9:00-12:00.

Friday June 15, 9:00-12:00 in ASB 107 (Building E12). You are allowed to take 4 pages (or 2 sheets double-sided) of notes (either handwritten or typed) with you.
Rob van Glabbeek (rvg@cse.unsw.edu.au)