Homework 1, Monday 1-6-2020

(Due: 6-6-2020 at 5pm; accepted without penalty until 8-6-2020 at 10am)
  1. A binary relation ~ on a set D is an equivalence relation if it is
    • reflexive: p ~ p for all processes p in D,
    • symmetric: if p ~ q then q ~ p,
    • and transitive: if p ~ q and q ~ r then p ~ r.
    1. Give an example of an equivalence relation on the domain of all people.
    2. Give two examples of relations on people that are not equivalence relations because exactly one of the three requirements above fails. Say which one fails in each case.
    3. If ~ is an equivalence relation on D, and p is in D, then [p]~ denotes the set of all processes in D that are equivalent to p. Such a set is called an equivalence class. Show (formally, by means of a mathematical proof) that the collection of all equivalence classes forms a partition of D. This means that every p in D belongs to one and only one equivalence class.
    [None of this has been treated in class; (c) is a test of prior mathematical exposure to the notion of a mathematical proof. If this is hard, it is a good warm-up for bits of mathematical proof yet to come.]
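    The three requirements above can be checked mechanically on any finite relation given as a set of pairs. A minimal sketch; the helper names and the toy "same parity" relation are mine, not part of the exercise:

    ```python
    # Check the three axioms of an equivalence relation on a finite domain D,
    # where the relation R is encoded as a set of pairs.

    def is_reflexive(D, R):
        return all((p, p) in R for p in D)

    def is_symmetric(R):
        return all((q, p) in R for (p, q) in R)

    def is_transitive(R):
        return all((p, s) in R for (p, q) in R for (r, s) in R if q == r)

    D = {1, 2, 3, 4}
    same_parity = {(p, q) for p in D for q in D if p % 2 == q % 2}
    print(is_reflexive(D, same_parity))   # True
    print(is_symmetric(same_parity))      # True
    print(is_transitive(same_parity))     # True

    less_than = {(p, q) for p in D for q in D if p < q}
    print(is_reflexive(D, less_than))     # False: < fails reflexivity
    ```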
  2. Consider a vending machine that accepts coins of 20 cents, 50 cents and 1 dollar, and that can dispense chocolate and pretzels, each of which costs 1 dollar. Draw an LTS whose transitions are labelled with "20c", "50c" or "$1", indicating the reception by the machine of a coin inserted by a user, and with the actions "chocolate" or "pretzel", indicating the opening of the glass door of the machine behind which chocolate or pretzels are stored. (We abstract from the closing of the door after extraction of the pretzel, which happens automatically, and from the appearance of a new pretzel behind the glass door afterwards.) Assume that the machine does not give change, and that it accepts any amount of surplus money. Try to make the number of states in your LTS as small as possible.
  3. Draw a process graph for the ACP expression a(bc+b(c+d)).
  4. Look at the definition given in the notes of the choice operator (+) of ACP in terms of LTSs. An alternative definition could be obtained by "identifying" the initial states I_G and I_H, i.e. by merging them into one state. Is there any difference between this approach and the one of the notes? If so, illustrate this by means of examples, and say which definition you prefer.
  5. Give a formal definition of the sequential composition operator of ACP (see notes), in the style of the notes.

Homework 2, Tuesday 2-6-2020

(Also due: 6-6-2020 at 5pm; accepted without penalty until 8-6-2020 at 10am)
  1. Give an example of two processes without infinite traces that have the same partial traces but different completed traces.
  2. Give an example of a process that has no finite completed traces at all.
  3. Give an example of two processes without infinite traces that are weakly completed trace equivalent and also strongly partial trace equivalent but not strongly completed trace equivalent.
  4. Consider a more bureaucratic version of the relay race that was presented in class. This time runner 1 may only pass the stick to runner 2 in the presence of a bureaucrat, who is stationed halfway between the start and the finish. The bureaucrat will watch the transaction and subsequently make a note of it in his notebook, which will be kept for posterity. Consider a specification that mentions the start of the race (start), the delivery of the parcel (finish) and the making of the bureaucratic note (note). Describe the specification and the implementation in ACP.
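  The notions of partial and completed trace used in the questions above can be computed directly from their definitions on a finite acyclic LTS. A minimal sketch; the triple encoding of transitions and the example graph for a.(b+c) are mine:

  ```python
  # Partial and completed traces of a finite acyclic LTS, from the
  # definitions: a partial trace is the label sequence of any path from
  # the initial state; a completed trace additionally ends in a state
  # with no outgoing transitions.

  def traces(lts, s):
      partial, completed = {()}, set()
      succs = [(a, t) for (p, a, t) in lts if p == s]
      if not succs:
          completed.add(())
      for a, t in succs:
          pt, ct = traces(lts, t)
          partial |= {(a,) + w for w in pt}
          completed |= {(a,) + w for w in ct}
      return partial, completed

  g = [(0, 'a', 1), (1, 'b', 2), (1, 'c', 3)]   # the process a.(b+c)
  p, c = traces(g, 0)
  print(sorted(c))   # [('a', 'b'), ('a', 'c')]
  print(len(p))      # 4 partial traces: (), (a,), (a,b), (a,c)
  ```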

Homework 3, Tuesday 9-6-2020: Linear time versus branching time

(Due: 13-6-2020 at 5pm; accepted without penalty until 15-6-2020 at 10am)
    1. Out of the following processes, which pairs are weak bisimulation equivalent?
      a.(b+τ.c)+τ.a.d
      a.(b+τ.c)+τ.a.d+a.b
      a.(b+τ.c)+τ.a.d+a.c
      a.(b+τ.c)+τ.a.d+a.d
      Show the bisimulation or informally describe why there isn't any.
    2. Also tell which pairs are weak completed trace equivalent, and why or why not.
    3. And which pairs are strong completed trace equivalent, and why or why not.
    4. And which pairs are branching bisimulation equivalent, and why or why not.
  1. Show, for labelled transition systems without τ-transitions, and without termination predicate, that bisimulation equivalence is strictly finer than infinitary completed trace equivalence. This means that it makes strictly fewer identifications. This involves two things:
    • show that whenever two processes are bisimulation equivalent, they are surely also infinitary completed trace equivalent.
    • give an example of two processes that are infinitary completed trace equivalent but not bisimulation equivalent.
  2. Give an example of two processes, some of whose states may be marked with the termination predicate ✔, such that the two are (strong) bisimulation equivalent when discarding the termination predicate (without changing the states and transitions in any way), yet with the termination predicate taken into account they are completed trace equivalent but not bisimulation equivalent.
  3. We write $Q \Longrightarrow Q'$ if there are $Q_0$, $Q_1$, ... , $Q_n$ for some $n \geq 0$, such that $Q=Q_0$, $Q'=Q_n$, and $Q_{i-1} \stackrel{\tau}\longrightarrow Q_i$ for all $i=1,...,n$.
    Here $Q$, $Q'$ and the $Q_i$ are either processes or states in labelled transition systems. (In the next lecture we will drop the distinction between processes and states.) We write $P =_{BB} Q$ if there exists a branching bisimulation that relates the states $P$ and $Q$.

    1. Show that if $P =_{BB} Q$, $Q \Longrightarrow Q'$ and $P =_{BB} Q'$, then $P =_{BB} Q_i$ for any process $Q_i$ on the $\tau$-path from $Q$ to $Q'$.
    2. Does the same property also hold for weak bisimilarity $=_{WB}$? Give a proof or a counterexample.
    3. In class I defined a branching bisimulation $\mathcal{B}$ to satisfy the following transfer property:

      if $P \mathrel{\mathcal{B}} Q$ and $P \stackrel{\alpha}\longrightarrow P'$, then there are $Q_0$, $Q_1$, ... , $Q_n$ for some $n \geq 0$, and $Q'_0$, $Q'_1$, ... , $Q'_m$ for some $m \geq 0$
      such that $Q=Q_0 \stackrel{\tau}\longrightarrow Q_1 \stackrel{\tau}\longrightarrow \dots \stackrel{\tau}\longrightarrow Q_n \stackrel{(\alpha)}\longrightarrow Q'_0 \stackrel{\tau}\longrightarrow Q'_1 \stackrel{\tau}\longrightarrow \dots \stackrel{\tau}\longrightarrow Q'_m$,
      and moreover $P \mathrel{\mathcal{B}} Q_i$ for all $i=1,...,n$ and $P' \mathrel{\mathcal{B}} Q'_j$ for all $j=0,...,m$.

      The handout, on the other hand, requires the following transfer property:

      if $P \mathrel{\mathcal{B}} Q$ and $P \stackrel{\alpha}\longrightarrow P'$, then there are $Q_0$, $Q_1$, ... , $Q_n$ for some $n \geq 0$,
      such that $Q=Q_0 \stackrel{\tau}\longrightarrow Q_1 \stackrel{\tau}\longrightarrow \dots \stackrel{\tau}\longrightarrow Q_n \stackrel{(\alpha)}\longrightarrow Q'$,
      and moreover $P \mathrel{\mathcal{B}} Q_n$ and $P' \mathrel{\mathcal{B}} Q'$.

      Prove that these two definitions lead to the very same notion of branching bisimilarity.
      (As always, you may also answer the question by giving a counterexample showing that this is not the case.)

Homework 4, Monday 15-6-2020: Recursion, CCS, and operational semantics

(HW4 and 5 are due on 20-6-2020 at 5pm; accepted without penalty until 22-6-2020 at 10am)
  1. Let P be the process given by the recursive equation X = a.(b.X + c.X)
    and Q be the process (c+e).a. Assume an ACP communication function γ given by γ(a,c)=γ(c,a)=d.
    1. Draw a process graph representation of P and Q.
    2. Draw a process graph representation of P||Q.
    3. Draw a process graph representation of $\partial_{\{a,c\}}(P||Q)$.
  2. Consider a process called "gatekeeper" that makes sure that of 2 other processes "1" and "2", at most one is in its critical section. Its actions are "hear request of process i to enter critical section", or h(i) for short; "admit process i into critical section", or a(i); and "observe that process i exits the critical section", or e(i). Here i can have the values 1 or 2. Keep in mind that each of the processes i may enter the critical section multiple times. The actions h(i), a(i) and e(i) all synchronise with corresponding actions h(i), a(i) and e(i) from the processes 1 and 2. Processes 1 and 2 have the form P(i)=h(i).a(i).e(i).P(i). The gatekeeper should be able to hear (and remember) a request in any state. Deadlock should be avoided.
    1. Draw a process graph describing the correct behaviour of the gatekeeper.
    2. Express the behaviour of the gatekeeper by a system of recursive equations.
  3. Describe the relay race in CCS in a similar way that the class notes describe it in ACP.
  4. For which possible choices of Q and R are there transitions of the form
    a.(c+b) | ((c+a) | b) --τ--> Q --τ--> R
    in the labelled transition system of CCS? Give a proof tree of the formal derivation of one of those transitions from the structural operational rules of CCS.
    (From the perspective of ACP, CCS uses the communication function given by γ(a,ā)=γ(ā,a)=τ for all actions a.)
  5. Two process graphs, given as states in a labelled transition system, are isomorphic if they only differ in the identity of the nodes. Here is a formal definition. Given a CCS expression P, one can use the denotational semantics given in the notes to give the meaning of P as a process graph, or we can distill a process graph from the LTS generated by the structural operational semantics of CCS, namely by taking all states and transitions reachable from P. It is a theorem that those two graphs, both representing the meaning of P, are always bisimulation equivalent.

    Give a counterexample, showing that they are not always isomorphic.

Homework 5, Tuesday 16-6-2020: Deadlock, livelock and divergence

  1. Modify the definition of weak completed trace semantics in six different ways, namely such that
    1. deadlock = livelock and fair divergence = normal progress,
    2. fair divergence = normal progress, and livelock differs from deadlock and from normal progress,
    3. deadlock = livelock and fair divergence differs from livelock and from normal progress,
    4. livelock = fair divergence, different from deadlock and progress,
    5. deadlock = livelock = fair divergence, different from progress,
    6. all four phenomena are distinguished.
    In all cases deadlock should be distinguished from normal progress. Your notions of weak completed trace equivalence should all be strictly finer than finitary weak partial trace equivalence, as defined in the class notes, and strictly coarser than strong bisimilarity. Also, the definitions should be such that upon restricting attention to processes without divergence, the resulting equivalences coincide with (finitary or) infinitary weak completed trace equivalence as in the notes. They should also be congruences for abstraction (τI); no need to prove this. Finally, they should make sense from the perspective of comparing general processes (not just these examples).

Homework 6, Monday 22-6-2020

(HW6 and 7 are due on 27-6-2020 at 5pm; accepted without penalty until 29-6-2020 at 10am)
  1. Sequencing is a binary operation defined on arbitrary process graphs; it starts its second component as soon as its first component cannot do any further actions.
    Sequential composition, on the other hand, is defined on process graphs equipped with a state-labelling, where some states are marked with the symbol for successful termination, ✔. Here the second component starts only when the first one successfully terminates. This is the operator that occurs in ACP.

    In (c) and (d) below, work with a form of LTS in which any state (possibly with outgoing transitions) can have the label $\surd$. Moreover, the label $\surd$ can occur in a (partial or completed) trace, as explained in the notes.

    1. Give a definition of sequencing as an operator on process graphs.
    2. Give an example that shows the difference between these two operators.
    3. Is sequential composition (as used e.g. in ACP) compositional for partial trace equivalence? Give a counterexample, or show why.
    4. Is sequential composition compositional for completed trace equivalence? Give a counterexample, or show why.
    5. Is sequencing compositional for partial trace equivalence? Give a counterexample, or show why.
    6. Is sequencing compositional for (strong) completed trace equivalence? Give a counterexample, or show why.
    7. Is partial trace equivalence a congruence for sequencing? Show why your answer is correct.
    8. Is (strong) completed trace equivalence a congruence for sequencing? Show why your answer is correct.
  2. Show that (strong) completed trace equivalence is a congruence for the abstraction operator. This means that whenever P =CT Q and I is any set of actions, it must be the case that τI(P) =CT τI(Q).
  3. An equivalence ~ is a congruence for a language L if P ~ Q implies that C[P] ~ C[Q] for every context C[ ]. Here a context C[ ] is an L-expression with a hole in it, and C[P] is the result of plugging in P for the hole. The congruence closure ~L of ~ is defined by

    P ~L Q if and only if C[P] ~ C[Q] for any L-context C[ ].

    Prove that ~L is the coarsest congruence finer than ~.
  4. Consider a language L without recursion. A different definition of a congruence ~ requires for any $n$-ary operator $f$ that

    $P_i \sim Q_i$ for $i=1,...,n$ implies $f(P_1,...,P_n) \sim f(Q_1,...,Q_n)$.

    Show that this definition is equivalent to the one above.

Homework 7, Tuesday 23-6-2020

  1. Define the unfolding operator on process graphs (without termination predicate). U(G) should be a tree that behaves exactly like G. Prove that U(G) and G are strong bisimulation equivalent.
  2. Consider the processes a.0 + b.0, τ.(a.0 + b.0), τ.a.0 + b.0, a.τ.0 + b.0 and τ.(a.0 + b.0) + a.0.
    1. Which of these processes are weak bisimulation equivalent to each other, and why (or why not)?
    2. Which of these processes are branching bisimulation equivalent to each other, and why (or why not)?
    3. Which of these processes are rooted weak bisimulation equivalent to each other, and why (or why not)?
    4. Which of these processes are rooted branching bisimulation equivalent to each other, and why (or why not)?
  3. Recall the specification and implementation of the relay race in CCS (HW 4, Q3):

    Specification:    start.finish.0

    Implementation:    (R1 | R2) \ {handover}    where
    R1 = start.handover.0
    R2 = handover.finish.0

    Prove the rooted weak bisimulation equivalence between the specification and the implementation by means of equational logic, using the equational axiomatisation of CCS.
  4. Rooted Weak Bisimilarity =RWB is an explicit characterisation of $=_{\it WB}^c$, in the sense that there is a constructive method to show P =RWB Q and from that it follows that $P =_{\it WB}^c Q$. Can you find an explicit characterisation of the congruence closure of =WCT for the CCS +-operator only (so for the coarsest congruence for + that is contained in weak completed trace equivalence)?

Homework 8, Monday 29-6-2020: CSP, and Structural operational semantics

(HW8 and 9 are due on 4-7-2020 at 5pm; accepted without penalty until Tuesday 7-7-2020 at 10am)
  1. Model the relay race in CSP.
  2. How would you translate the CSP expressions (ab) ⊓ c and (ab) ☐ c into CCS? Here I omitted trailing .0's.
  3. Give an example of a TSS that is almost in GSOS format, but fails the requirement that all variables occurring in the source of a rule must be different. Show that bisimulation fails to be a congruence for this TSS.
  4. Here is an operational semantics of the sequencing operator:
    x --a--> x'
    ------------------ (for x' != 0)
    x;y --a--> x';y
    x --a--> 0
    ------------
    x;y --a--> y
    Is it in GSOS format? What is wrong with this operational semantics? Show that strong bisimulation is not a congruence for the operator defined by these operational rules (in the context of CCS).
  5. Give a correct structural operational semantics in GSOS format of the sequencing operator. (See HW6 Q1 for a definition of the sequencing operator.)
  6. Hard question, for bonus marks:
    This question uses authentic CSP notation: a -> P stands for a.P in CCS notation, STOP stands for 0, and μX. P stands for $\langle X | X = P \rangle$ in ACP notation.
    Suppose we were to introduce a sequencing operator ; to CSP. Consider the process μX. ((a -> X; b -> STOP) ⊓ STOP).
    Show that this process can be expressed using the existing operators of CSP.

Homework 9, Tuesday 30-6-2020

  1. In the paper available at http://www.cse.unsw.edu.au/~rvg/pub/spectrum1.ps.gz there are a number of pictures, called Counterexample 1, 2, 3, etc. For each of counterexamples 2 to 6, provide a HML formula (if needed with infinite conjunctions) that holds for the left process, but not for the right one.
  2. Give an example of a process that satisfies the HML formulas [a]⟨b⟩T and ⟨a⟩([b]⟨c⟩T) but does not satisfy [a]⟨b⟩⟨c⟩T.
  3. Can be found here. (I accidentally posted a draft of this homework, which I have now corrected. You may choose between answering the question as originally posted or the current one. However, the 3 follow-up questions that will come proceed from the question as presently stated here.)
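As a reminder of the Hennessy-Milner semantics used in the questions above, here is a minimal evaluator for finite-state processes. The tuple encoding of formulas and the example LTSs are mine; ⟨a⟩φ asks that some a-successor satisfies φ, while [a]φ asks that all of them do.

```python
# Minimal HML evaluator over a finite LTS given as (source, action, target)
# triples. Formulas: 'T', ('dia', a, phi) for <a>phi, ('box', a, phi) for [a]phi.

def sat(lts, s, phi):
    if phi == 'T':
        return True
    op, a, sub = phi
    succs = [t for (p, b, t) in lts if p == s and b == a]
    if op == 'dia':
        return any(sat(lts, t, sub) for t in succs)
    if op == 'box':
        return all(sat(lts, t, sub) for t in succs)
    raise ValueError(op)

# a.b.0 + a.c.0 versus a.(b.0 + c.0): <a><b>T holds of both,
# but [a]<b>T distinguishes them.
left  = [(0, 'a', 1), (1, 'b', 2), (0, 'a', 3), (3, 'c', 4)]
right = [(0, 'a', 1), (1, 'b', 2), (1, 'c', 3)]
print(sat(left, 0, ('box', 'a', ('dia', 'b', 'T'))))   # False
print(sat(right, 0, ('box', 'a', ('dia', 'b', 'T'))))  # True
```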

On Monday 13-7-2020 at 15:15 there will be a seminar presentation on Zoom. Anything mentioned in these seminar presentations will be seen as material treated in class, on which exam questions may be asked. So if you have questions about them, ask the presenter at the end of the Zoom lecture (or during the lecture if they are urgent for continued understanding).

Homework 10, Monday 13-7-2020: CTL and LTL

(HW 10 and 11 are due on 18-7-2020 at 5pm; accepted without penalty until 20-7-2020 at 10am)
  1. Consider the CTL formulas AF EGp and AG EFp. Give examples of Kripke structures in which one holds and not the other, and vice versa. (Partial credit for showing Kripke structures illustrating the difference between the four formulas AFp, EFp, EGp and AGp.)
  2. Model as an LTL formula:
    1. If at any time the condition φ is satisfied then from that point onwards we will surely reach a state where ψ holds.
    2. If the condition φ is satisfied infinitely often then we will surely reach a state where ψ holds.
  3. This question is about mutual exclusion algorithms for two processes. Such an algorithm deals with two concurrent processes A and B that want to alternate critical and noncritical sections. Each of these processes will stay only a finite amount of time in its critical section, although it is allowed to stay forever in its noncritical section. It starts in its noncritical section. The purpose of the algorithm is to ensure that processes A and B are never simultaneously in their critical section, and to guarantee that both processes keep making progress. It is correct if these requirements are met.

    We assume such an algorithm to be modelled as a Kripke structure. Among the atomic propositions that label the states of this Kripke structure is IntentA, saying that Process A wants to enter the critical section, but has not done so yet. We also have an atomic proposition CritA, stating that Process A is in its critical section. Likewise we have atomic propositions IntentB and CritB.

    1. Formulate an LTL property in terms of these four atomic propositions (or others if needed) expressing the correctness of the algorithm.
    2. (harder) Formulate an LTL property saying that if Process $A$ intends to go critical, then Process $B$ will enter its critical section at most once before $A$ does.
  4. Give a CTL-X formula that shows the difference between the processes d+τ.(ac+b) and d+ac+τ.(ac+b), or show that there isn't any.
  5. Give a CTL-X formula that shows the difference between the processes b+τ.a and b+a+τ.a, or show that there isn't any.
    15-7-2020: Oops, what I meant to ask was: Give a CTL-X formula that shows the difference between the processes b+τ.(a+b) and b+a+τ.(a+b), or show that there isn't any. At this time you are free to answer either version (or both). But tell me which one.
  6. The dual of the LTL modality F is G. What is the dual of U, and how can it be expressed in the other modalities without using negation?

Homework 11, Tuesday 14-7-2020: Partition refinement

  1. Use the partition refinement algorithm on the states of a.(a(a+aa)+aa)+aaa. Show the steps. With how many bisimulation equivalence classes do you end up?
  2. Use the partition refinement algorithm for branching bisimilarity on the states of τ.(τ.(a+τ)+τ.a)+τ.(τ.(a+τ)+a). Show the steps. Provide a branching bisimilar process graph with as few states as possible.
  3. Provide an efficient algorithm for deciding rooted branching bisimilarity of process graphs. You may use the partition refinement algorithm for branching bisimilarity as a subroutine, without explaining it further.
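  The partition refinement scheme used above can be sketched for strong bisimilarity in a few lines: repeatedly split every block according to which (action, block) pairs its states can reach, until no block splits. This is a naive quadratic sketch, not the efficient algorithm from class; the LTS encoding is mine.

  ```python
  # Naive partition refinement for strong bisimilarity on a finite LTS.

  def refine(states, trans):
      # trans: list of (source, action, target) triples
      def signature(s, blocks):
          block_of = {q: i for i, b in enumerate(blocks) for q in b}
          return frozenset((a, block_of[t]) for (p, a, t) in trans if p == s)

      blocks = [frozenset(states)]
      while True:
          new = []
          for b in blocks:
              groups = {}
              for s in b:
                  groups.setdefault(signature(s, blocks), set()).add(s)
              new.extend(frozenset(g) for g in groups.values())
          if len(new) == len(blocks):   # no block split: stable partition
              return new
          blocks = new

  # Two disjoint copies of the process a.a.0: states 0->1->2 and 3->4->5.
  lts = [(0, 'a', 1), (1, 'a', 2), (3, 'a', 4), (4, 'a', 5)]
  print(len(refine({0, 1, 2, 3, 4, 5}, lts)))  # 3 equivalence classes
  ```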

Homework 12, Monday 20-7-2020

(HW 12 and 13 are due on 25-7-2020 at 5pm; accepted without penalty until 27-7-2020 at 10am)
  1. Let's simplify the alternating bit protocol by omitting the alternating bit. Instead, the receiver sends back the signal "success" or "failure" depending on whether it received a correct or a corrupted message. The sender simply resends the last message if it receives a "failure" or the corrupted acknowledgement $\bot$; upon receiving a "success" it proceeds with the next message. Does this version of the protocol work correctly? Explain your answer.
  2. This process graph constitutes the correct answer to Homework 9 Question 3(i). The grey shadows represent copies of the states at the opposite end of the diagram, so the transitions on the far right and bottom can be considered to loop around. The name of a state $\ell_i m_j T$ where $T$ is $A$ or $B$ (and in the drawing displayed above $\ell_i m_j$) indicates that process $A$ is in state $\ell_i$, process $B$ in state $m_j$ and the variable $turn$ has value $T$. The actions ${\it en}_A$, ${\it ln}_A$, ${\it en}_B$, ${\it ln}_B$, ${\it ec}_A$, ${\it lc}_A$, ${\it ec}_B$ and ${\it lc}_B$ are visible: they represent entering and leaving the non-critical and critical sections of processes A and B in Peterson's protocol. All unlabelled transitions are $\tau$'s.
    1. How many reachable states are in the process graph after identifying strongly bisimilar states? Tell or show which states get identified (for instance by drawing lines between equivalence classes on the sheet with the graph). (The most secure way to figure this out is by using the partition refinement algorithm, or reasoning in the spirit of that algorithm.)
    2. How many reachable states are there after identifying branching bisimilar states? (In other words: what is the least amount of colours needed in a consistent colouring of this graph?) Tell or show which states get identified.
  3. Now interpret the above process graph as a Kripke structure, by forgetting all transition labels, and adding the state labels IntentA, CritA, IntentB and CritB discussed in HW10 Q3.
    1. Tell which of the states in this Kripke structure ought to be labelled by the above four atomic propositions. Explain your answer.
    2. Give an LTL formula that says that the processes A and B are never simultaneously in their critical section. (This is part of HW10 Q3(a).) Does this formula hold for Peterson's algorithm? Explain why or why not.
    3. Give an LTL formula that says that if process A ever wants to enter the critical section, it will surely succeed in doing so. (This is part of HW10 Q3(a).) Does this formula hold for Peterson's algorithm? Explain why or why not.
    4. Let $\psi$ be the LTL formula that forms the correct answer to HW10 Q3(b). Does this formula hold for Peterson's algorithm? Explain why or why not.

Homework 13, Tuesday 21-7-2020: Petri nets and non-interleaving semantics

  1. Give a Petri net for the following ACP expressions:
    1. a(b + c)
    2. a(b + (c||d))
    3. a(b + c) || ad
    4. a+X where X is given by the recursive equation X=aX.
    The communication function is given by γ(b,d)=a.
  2. Represent Peterson's algorithm (see HW9) as a Petri net with 22 places: 8 places for each of the 8 internal states of the processes A and B and 2 places for each of the 3 variables. You need 20 transitions: 2 copies of $\ell_3$ and $m_3$, and 1 copy of each of the other transitions.
  3. Let's define a step-trace of a Petri net to be a sequence, not of actions, but of multisets of actions, such that the represented system can perform the sequence in succession, at each step performing the required multiset of actions simultaneously. For example, the process a(b + (c||d)) would have the following completed step-traces:

    {ab, acd, adc, a[cd]}

    Here [cd] denotes the multiset containing c and d one time each.
    1. Give the set of completed step traces of a(b + c) || d.
    2. Give an example of two processes that are strong bisimulation equivalent but have different step-traces.
    3. Give an example of two processes that are strong bisimulation equivalent but have different minimal running times when associating durations to actions. Here durations are allocated by a function from the set of all actions to the positive real numbers. For instance duration(a)=1 and duration(b)=2. Now all occurrences of the action b will take 2 time units. You may choose any function duration that works for you. Explain why the minimal running times will be different.
    4. Show that strong bisimilarity is not preserved under action refinement. That is, give two processes that are strongly bisimilar, such that after replacing each copy of a single action a with a more complicated process (like a1;a2) they are no longer strongly bisimilar.
    5. Give an example of two processes that have the same completed step traces, but have different minimal running times when associating durations to actions. Explain why the minimal running times will be different.
    6. Use the same example to argue that completed step trace equivalence is not preserved under action refinement. That is, give two processes with the same completed step traces, such that after replacing each copy of a single action a with a more complicated process (like a1;a2) they are no longer completed step trace equivalent.
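    The step-trace notion above can be explored mechanically in the special case of a parallel composition of pairwise independent, distinct actions: its completed step-traces are exactly the ordered sequences of disjoint non-empty steps that together use each action once. A small enumeration (the encoding is mine, and this covers only the fully independent case, not general Petri nets):

    ```python
    from itertools import combinations

    def step_traces(actions):
        """Completed step-traces of a parallel composition of pairwise
        independent, distinct actions: every ordered sequence of disjoint
        non-empty steps using each action exactly once."""
        if not actions:
            return {()}
        out = set()
        items = sorted(actions)
        for r in range(1, len(items) + 1):
            for first in combinations(items, r):
                rest = set(items) - set(first)
                for tail in step_traces(rest):
                    out.add((frozenset(first),) + tail)
        return out

    print(len(step_traces({'a', 'b'})))       # 3: ab, ba, and the step [ab]
    print(len(step_traces({'a', 'b', 'c'})))  # 13 ordered set partitions
    ```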
  4. Consider a process with the following set of completed step traces: $\{bad,[ab]d,abd,a[bd],adb,acd,a[cd],adc\}$. Thus, in each run, the actions a, d and one out of c or b occur once each, subject to the following restrictions:
    • b and c can never occur in the same trace.
    • if c happens, it must be after a, and
    • a happens before d.
    Moreover, the choice between b and c should be made only after execution of a.
    1. Provide a Petri net with the above set of completed step traces and branching requirement.
    2. What are the completed partial ordered traces of your net?
    3. Give a CSP expression for this process that uses at most one occurrence of a choice operator (☐ or ⊓).
    4. Give an ACP expression for this process that uses at most one +.
    5. Give a CCS expression for this process that uses at most one +. You may have to smuggle in an internal action τ to make this possible.

Course evaluation

Please fill in the course and instructor evaluation forms.

Homework 14, Monday 27-7-2020

  1. This is a follow-up to HW12, Q3.
    The correct answer to HW12, Q3a is that CritA holds in all states with the label $\ell_6$. Namely, the state $\ell_5$ is by definition the state right before the action ${\it ec}_A$ takes place; hence the process is not yet in its critical section. However, I gave full points to people who put $\ell_5$ in the critical section as well.

    The correct answer as to where IntentA should hold is disputed in academic circles. Many people, especially those who like to formally verify Peterson's algorithm by means of process algebra, claim that it holds in states $\ell_3$, $\ell_4$ and $\ell_5$ only. The argument is that the intention to enter the critical section arises when the process speaks up about it, by performing the action readyA := true. But some say that a better model is obtained by additionally including $\ell_2$ in IntentA. The students in this class who answered Q3 were evenly divided between including $\ell_2$ or not. Of course, full points were available for both points of view.

    Please answer HW12 Q3 parts (c) and (d) again, but now starting from the opposite postulate on whether IntentA holds in $\ell_2$, compared to your previous answer.

  2. Give a small collection of LTL formulas $\phi$ without next-state operator, and process graphs, process algebra expressions or Petri nets $P$, that show the difference between the judgements $P \models^C \phi$, for $C \in \{\emptyset, {\it Pr}, J, {\it WF}, {\it SF}\}$.
  3. Consider a variant of Peterson's mutual exclusion protocol that involves 4 parallel processes instead of 2, not counting the 3 shared variables. Instead of the party "Process A" we have "Client A" and "Interface A". The client may engage in critical and/or non-critical sections. The interface does not have critical or non-critical sections; it merely negotiates with the other party when the client may enter the critical section. The same holds for B. We will only describe Interface A and Interface B, not the clients, since the interfaces should work correctly with any well-behaved clients.

    Client A and Interface A communicate through 3 shared actions:
    $r_A$ is a request by the client to the interface to enter the critical section;
    $g_A$ is a signal from the interface to the client, allowing entry to the critical section -- $g$ stands for "granting the request";
    $e_A$ is a signal from the client to the interface, saying it is exiting the critical section.

    We assume the clients are well behaved. This means:
    Client $i$ (where $i\in\{A,B\}$) will engage in the actions $r_i$, $g_i$ and $e_i$ only in this order (looping back to $r_i$ after $e_i$).
    Whenever client $i$ gets permission to enter ($g_i$), it will, after a finite period of time, leave the critical section and signal this by sending the signal $e_i$.
    When the interface performs the action $g_i$, it cannot be blocked by the client.
    There is no obligation for a client to ever do $r_i$.

    1. Modify Peterson's protocol Process A into Interface A, such that it fits the above description (and works as well as it could). Do not make more modifications than necessary.
    2. Describe the correctness property $\phi$ of the protocol as an LTL formula, using only the atomic propositions $r_i$, $g_i$ and $e_i$. These are interpreted as transitions labels (all others are $\tau$), and we use the standard (De Nicola-Vaandrager) translation from LTS to Kripke structures in order to interpret LTL.
    3. Let $\it MP$ be the above modified version of Peterson's protocol, and consider the statements ${\it MP} \models^{\it C}_B \phi$, with $\phi$ the above LTL formula. What are reasonable values for $B$, and why?
    4. For which of the completeness criteria $C$ treated in class is the above statement true? Explain your answer.
    5. Let $\psi$ be the LTL formula expressing correctness of mutual exclusion protocols, constituting the right answer to HW12, Q3a. It uses the atomic propositions IntentA, IntentB, CritA and CritB. Tell in which states IntentA should hold, so that this correctness criterion is satisfactory from the perspective of a client. (Thus, two clients should never be simultaneously in the critical section, and whenever a client wants to enter it should eventually do so.)
    6. Discuss the difference with your answer to HW12 Q3, if any.
    7. Consider only the safety part of the correctness property: two processes may not be simultaneously in the critical section. Express this property as safety(B), where B is a set of (bad) traces of the protocol. That is, what is B in this case?
  4. In the notes you find 3 LTL formulae expressing weak fairness. Prove that all 3 are equivalent.
    A good way to do this is to propose a number of algebraic laws, such as $\neg \textbf{F}\phi = \textbf{G} \neg \phi$, and prove each of them correct by showing that, on arbitrary paths in arbitrary Kripke structures, the left-hand side is valid iff the right-hand side is valid. This uses the definitions of the modalities involved. Then obtain the requested equivalences by combining those algebraic laws.
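    For instance, the example law can be checked directly from the path semantics of $\textbf{F}$ and $\textbf{G}$; writing $\pi^i$ for the $i$-th suffix of a path $\pi$ (a notational assumption), one calculates:

    ```latex
    \begin{aligned}
    \pi \models \neg\textbf{F}\phi
      &\iff \text{it is not the case that } \exists i\geq 0.\ \pi^i \models \phi\\
      &\iff \forall i\geq 0.\ \pi^i \not\models \phi\\
      &\iff \forall i\geq 0.\ \pi^i \models \neg\phi\\
      &\iff \pi \models \textbf{G}\neg\phi.
    \end{aligned}
    ```

    The other laws you propose can be verified by the same unfolding of definitions.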

Homework 15, Tuesday 28-7-2020

  1. Give a small collection of LTL formulae $\phi$ without the next-state operator, and process graphs, process algebra expressions or Petri nets $P$, that, for a given value of $B$, show the difference between the judgements $P \models^C_B \phi$, for $C \in \{\emptyset, {\it Pr}, J, {\it WF}, {\it SF}\}$. Your examples should be such that if you take $B$ either too small or too large, they cease to be examples for what they show. Explain.
  2. Give a complete inductive definition of the valid judgements $s \models^C_B \phi$, where $s$ is a state in a Kripke structure, $\phi$ a CTL formula (using the modalities EX, AX, EF, AF, EG, AG, EU and AU), $C$ is a completeness criterion, and $B \subseteq A$ a set of blockable actions.
  3. Consider a group of 3 agents $G=\{1,2,3\}$. The epistemic operator ${\bf D}_G\phi$ says that the combined knowledge of all these agents is enough to know $\phi$. Its formal semantics in terms of Kripke structures was given in Courtney's talk. Please give an example in the shape of a Kripke structure that disproves the conjecture that ${\bf D}_G \phi \equiv {\bf K}_1\phi \vee {\bf K}_2\phi \vee {\bf K}_3\phi$. Here ${\bf K}_i \phi$ says that agent $i$ knows (or can know) that $\phi$ holds, and $\equiv$ says that the two formulae are equivalent, meaning that in any Kripke structure they hold in the same states.

Practice exam

Here is the final exam I gave when I taught this course on a previous occasion. It should give a good indication of what kind of questions (how many and how hard) to expect.
Note: Question 2(d) covers material encountered during a student presentation. It is not covered this year. We also didn't cover Q4(f). All the rest is covered.

Homework 16, Monday 3-8-2020: Fair schedulers and mutual exclusion

(HW16 Q1 and HW17 are due on 7-9-2020 at 5pm; accepted without penalty until 7-10-2020 at 10am. The answers may be revealed in the last class, which is on Zoom on Monday 7-10-2020 at 3pm.)
  1. As we have seen in previous homeworks, and in Lecture 16, Peterson's mutual exclusion algorithm, as we modelled it in CCS, does not satisfy the liveness property G(IntendA $\Rightarrow$ F CritA), when including in IntendA the state $\ell_2$, right before Process A sets readyA to true. To be precise, if $MP$ is Peterson's protocol, then the judgement $MP \models^{\it WF(T)}$ G(IntendA $\Rightarrow$ F CritA), for some appropriate collection $T$ of tasks, is true, but the judgements $MP \models^{\it Pr}$ G(IntendA $\Rightarrow$ F CritA) and $MP \models^{\it J}$ G(IntendA $\Rightarrow$ F CritA) are not.

    Now consider the process algebra CCS upgraded with timeout actions ${\rm t}.\!P$. The operational semantics is exactly the same as for every other action prefixing operator $\alpha.\!P$. It yields a transition $\stackrel{\rm t}{\longrightarrow}$. The passage of time is modelled in the following way.

    When a system arrives in a state $P$, and at that time $X$ is the set of actions allowed by the environment, there are two possibilities. If $P$ has an outgoing transition $P \stackrel\alpha\longrightarrow Q$ with $\alpha \in X\cup \{\tau\}$, the system immediately takes one of the outgoing transitions $P \stackrel\alpha \longrightarrow Q$ with $\alpha \in X\cup \{\tau\}$, without spending any time in state $P$. The choice between these actions is entirely nondeterministic. The system cannot immediately take a transition $\stackrel b\longrightarrow$ with $b\in A{\setminus}X$, because the action $b$ is blocked by the environment. Neither can it immediately take a transition $P\stackrel{\rm t}\longrightarrow Q$, because such transitions model the end of an activity with a finite but positive duration that started when reaching state $P$.

    In case $P$ has no outgoing transition $P \stackrel\alpha\longrightarrow Q$ with $\alpha \in X\cup \{\tau\}$, the system idles in state $P$ for a positive amount of time. This idling can end in two possible ways. Either one of the time-out transitions $P \stackrel{\rm t}\longrightarrow Q$ occurs, or the environment spontaneously changes the set of actions it allows into a different set $Y$ with the property that $P \stackrel a\longrightarrow Q$ for some $a \in Y$. In the latter case a transition $P \stackrel a\longrightarrow Q$ occurs, with $a \in Y$. The choice between the various ways to end a period of idling is entirely nondeterministic. It is possible to stay forever in state $P$ only if there are no outgoing time-out transitions $P \stackrel{\rm t}\longrightarrow Q$.
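    The two cases above can be sketched operationally; the function and label names below are illustrative assumptions (not from the notes), with transitions represented as (label, target) pairs:

    ```python
    TAU = "tau"  # the internal action
    T = "t"      # the timeout label

    def immediate_steps(transitions, allowed):
        # Transitions the system may take without idling: labels in X ∪ {tau}.
        # Timeout transitions are never taken immediately.
        return [(a, q) for (a, q) in transitions
                if a == TAU or (a != T and a in allowed)]

    def idling_outcomes(transitions, allowed):
        # If some immediate step exists, the system never idles (returns None).
        # Otherwise idling ends either with a timeout transition, or with the
        # environment changing its allowed set so that a blocked action fires.
        if immediate_steps(transitions, allowed):
            return None
        timeouts = [(a, q) for (a, q) in transitions if a == T]
        blocked = [(a, q) for (a, q) in transitions if a not in (T, TAU)]
        return timeouts, blocked
    ```

    For the transitions of $\tau.P + {\rm t}.Q$, namely [("tau", "P"), ("t", "Q")], the $\tau$-step is always immediate and the timeout can never fire, in line with the law $\tau.P + {\rm t}.Q = \tau.P$ stated below.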

    A crucial law that is valid in this setting is $\tau.P + {\rm t}.Q = \tau.P$.

    As I said in Lecture 16, Peterson's Protocol can be made to satisfy the liveness property by inserting a t-action between entering and leaving the critical section.

    1. Do we now have $MP \models^{\it Pr}$ G(IntendA $\Rightarrow$ F CritA) or $MP \models^{\it J}$ G(IntendA $\Rightarrow$ F CritA)? Explain.
    2. Suppose that we insert the t-action not between entering and leaving the critical section, but between entering and leaving the non-critical section. Does this work as well? What is the weakest completeness criterion $C$ for which $MP \models^C$ G(IntendA $\Rightarrow$ F CritA) holds?
  2. This exercise is a bit unusual. It takes the shape of a partially collaborative and partially competitive game, played by the class by posting to Piazza (instead of using the homework collection interface). The question will be revealed on Tuesday, right after the seminar presentation by Aidan. It might conclude on Thursday around tutorial time. In the meantime you can contribute by posting fragments of the answer, either elaborating on or undermining previously posted parts of the answer.

    Consider a mutual exclusion protocol $ME$ with as visible actions ${\it en}_A$, ${\it ln}_A$, ${\it en}_B$, ${\it ln}_B$, ${\it ec}_A$, ${\it lc}_A$, ${\it ec}_B$ and ${\it lc}_B$. They represent entering and leaving the non-critical and critical sections of processes A and B. All other transitions are $\tau$'s. As in HW14 Q3, our protocol merely models an interface; the processes A and B that really engage in critical and noncritical sections are specified by

    PrA := (noncritical stuff).${\it ln}_A$.${\it ec}_A$.(critical stuff).${\it lc}_A$.${\it en}_A$.PrA

    (and similarly for PrB), and synchronise with the protocol $ME$ on the visible actions of $ME$.

    The goal of this exercise is to arrive at a sound and complete list of requirements for $ME$, each of which has the form $ME \models^C_B \phi$ for a completeness criterion $C$, a set of visible actions $B$, and an LTL formula $\phi$. A requirement is sound if it is satisfied by a decent mutual exclusion protocol (such as the one mentioned in the previous question). The list is complete if it is not satisfied by any system that should not be regarded as a correct mutual exclusion protocol. The closer your list is to complete, the more of these non-ME systems it rules out.

    The mode of delivery for this exercise is a bit different than usual. Rather than using the homework interface, we invite you to post your contributions to Piazza, so that they are visible to the entire class at once. We can imagine a blue team and a red team; you can be a member of either or both, and may switch allegiances whenever you want.

    As a member of the blue team you can add a requirement to the list of requirements posted so far. List the previously posted requirements you agree with, and then your addition. It is worth credit if your requirement is sound and rules out at least one more non-ME system.

    As a member of the red team, you establish the non-completeness of the requirements posted so far, by proposing a system that satisfies the posted requirements, yet is not a correct mutual exclusion protocol. Please list in your post the posted requirements satisfied by your solution. Part of the game is to exploit the letter rather than the spirit of the requirements. So your system might be one that satisfies the requirements as posted, yet is clearly unintended. Instead of this, you may also undermine a posted requirement by arguing that it is too strong and will not be satisfied by any correct mutual exclusion algorithm.

Homework 17, Tuesday 4-8-2020: Value-passing process algebra

  1. In a suitable value-passing process algebra define a process with a local variable count which at any time can receive a message containing an integer on channel c. Whenever an integer is received, it should be added to count, and the new value of count is transmitted on channel d.
  2. Now define a process that at any time can receive an integer n on channel b, and performs a complex mathematical operation on n that returns another integer. This operation f could for instance be to extract the largest prime factor of n. The process now outputs f(n) on channel d. The problem is that the operation f takes significant execution time, and we don't want to miss new integers that arrive while f(n) is being computed. For this reason, incoming integers need to be stored in a queue while they are waiting to be processed.

    For this exercise you can assume the existence of a data structure [Int], which represents a queue of integers (e.g. [1,5,7,2,3]). The empty list is denoted by [ ]. Without further definition you can use the standard (partial) functions head, tail, and append that return the "oldest" element in the queue, remove the "oldest" element, and add an integer to the queue, respectively. Examples are

    head([1,5,7,2,3]) = 1
    tail([1,5,7,2,3]) = [5,7,2,3]
    append(17,[1,5,7,2,3]) = [1,5,7,2,3,17]
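    A minimal Python model of these queue operations, which may help when checking your process definition, is sketched below; it assumes plain Python lists stand in for the [Int] data structure, and raising an error models the partiality of head and tail:

    ```python
    def head(q):
        # Oldest element of the queue; partial: undefined on the empty queue.
        if not q:
            raise ValueError("head is undefined on []")
        return q[0]

    def tail(q):
        # Queue with the oldest element removed; partial: undefined on [].
        if not q:
            raise ValueError("tail is undefined on []")
        return q[1:]

    def append(n, q):
        # Queue with integer n added at the back.
        return q + [n]
    ```

    These definitions reproduce the three examples above, e.g. head([1,5,7,2,3]) = 1.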


Rob van Glabbeek notes rvg@cse.unsw.edu.au