COMP[39]161 Concepts of Programming Languages
Semester 2, 2018

Tute (Week 3)

Table of Contents

1 Red-Black Trees
2 Ambiguity and Syntax
3 Simultaneous Induction

1 Red-Black Trees

Red-black trees are binary search trees where each node in the tree is coloured either red or black. In C, red-black trees could, for example, be encoded with the following user-defined data type:

typedef struct redBlackNode* link;

struct redBlackNode { 
  Item item; // Item type defined elsewhere
  link left, right; 
  int  isRed; // 1 if node is red, 0 if black
};

and in Haskell (again, we assume the Item type is defined elsewhere):

data RBColour
  = Red
  | Black

data RBTree
  = RBLeaf
  | RBNode RBColour Item RBTree RBTree 

  1. The user-defined Haskell data type characterises a set of terms as RBTree. List the inference rules that define the same set. Assume there already exists a judgement \(x\ \mathbf{Item}\) which is derivable for all \(x\) in the Item type.
  2. In a proper red-black tree, we can never have a red parent node with a red child, although it is possible for black nodes to have black children. Moreover, the root node cannot be red. Therefore, not all terms of type RBTree are real red-black trees. Define inference rules for a property \(t\ \mathbf{OK}\), such that \(t\ \mathbf{OK}\) is derivable exactly when the term \(t\) represents a proper red-black tree.

1:

\[ \dfrac{ }{\mathit{Red}\ \mathbf{RBColour}} \quad \dfrac{ }{\mathit{Black}\ \mathbf{RBColour}} \]

\[ \dfrac{ }{\mathit{RBLeaf}\ \mathbf{RBTree}} \quad \dfrac{c\ \mathbf{RBColour}\quad x\ \mathbf{Item}\quad t_1\ \mathbf{RBTree}\quad t_2\ \mathbf{RBTree}}{(\mathit{RBNode}\ c\ x\ t_1\ t_2)\ \mathbf{RBTree}} \]
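
For example, for any \(x\) with \(x\ \mathbf{Item}\), these rules let us derive that a single red node with two leaf children is in the set:

\[ \dfrac{\dfrac{ }{\mathit{Red}\ \mathbf{RBColour}}\quad x\ \mathbf{Item}\quad \dfrac{ }{\mathit{RBLeaf}\ \mathbf{RBTree}}\quad \dfrac{ }{\mathit{RBLeaf}\ \mathbf{RBTree}}}{(\mathit{RBNode}\ \mathit{Red}\ x\ \mathit{RBLeaf}\ \mathit{RBLeaf})\ \mathbf{RBTree}} \]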

2: We define this using an auxiliary property, \(\mathbf{OK}^R\). A term satisfying \(\mathbf{OK}\) is either a leaf or a tree with a black root node, whereas \(\mathbf{OK}^R\) also admits trees with a red root node. If a tree has a red root node, both of its children must have black root nodes (or be leaves).

\[ \dfrac{ }{\mathit{RBLeaf}\ \mathbf{OK}}\quad\dfrac{t_1\ \mathbf{OK}^R\quad t_2\ \mathbf{OK}^R }{ (\mathit{RBNode}\ \mathit{Black}\ x\ t_1\ t_2)\ \mathbf{OK}} \] \[ \dfrac{t_1\ \mathbf{OK}\quad t_2\ \mathbf{OK} }{ (\mathit{RBNode}\ \mathit{Red}\ x\ t_1\ t_2)\ \mathbf{OK}^R}\quad \dfrac{n\ \mathbf{OK}}{n\ \mathbf{OK}^R} \]
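
These rules are syntax-directed, so they transcribe almost mechanically into code. As a sketch (the names ok and okR are ours, not part of the question), here are the two judgements as mutually recursive predicates over the RBTree type from above:

-- t OK: t is a leaf, or has a black root whose subtrees satisfy OK^R.
ok :: RBTree -> Bool
ok RBLeaf                 = True
ok (RBNode Black _ t1 t2) = okR t1 && okR t2
ok (RBNode Red   _ _  _ ) = False  -- a red root is never OK

-- t OK^R: additionally allows a red root, provided both children are
-- OK (i.e. black-rooted or leaves); the subsumption rule
-- "n OK implies n OK^R" is the fall-through second equation.
okR :: RBTree -> Bool
okR (RBNode Red _ t1 t2) = ok t1 && ok t2
okR t                    = ok t

For instance, ok (RBNode Red x RBLeaf RBLeaf) is False because the root is red, but okR accepts the same tree, matching the rules above.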

2 Ambiguity and Syntax

Consider the language of boolean expressions (with operators: \(\land\) (and), \(\lor\) (or), \(\neg\) (not), constant values \(\mathit{True}\) and \(\mathit{False}\), and parentheses).

  1. Give a simple (non-simultaneous) inductive definition for this language using only judgements of the form \(e\ \mathbf{Bool}\).
  2. Identify how this language is ambiguous: that is, find an expression that can be parsed in multiple ways.
  3. Now give a second definition of the same language which is not ambiguous. (To disambiguate: \(\neg\) has the highest precedence, \(\land\) has higher precedence than \(\lor\), and both binary operators are left-associative.)
  4. Assume you want to prove a property \(P\) for all boolean expressions using rule induction over the rules of the first version. Which cases do you have to consider? What is the induction hypothesis for each case?

1: \[ \dfrac{e_1\ \mathbf{Bool}\quad e_2\ \mathbf{Bool}}{e_1 \lor e_2\ \mathbf{Bool}} \quad \dfrac{e_1\ \mathbf{Bool}\quad e_2\ \mathbf{Bool}}{e_1 \land e_2\ \mathbf{Bool}} \] \[ \dfrac{c \in \{\top, \bot\}}{c\ \mathbf{Bool}} \quad \dfrac{e\ \mathbf{Bool}}{\texttt{(}e\texttt{)}\ \mathbf{Bool}} \quad \dfrac{e\ \mathbf{Bool} }{\neg e\ \mathbf{Bool}} \]

2: \(\bot \land \top \lor \bot\) can be parsed either as \((\bot \land \top) \lor \bot\) or as \(\bot \land (\top \lor \bot)\). (Both parses of this particular string happen to evaluate to \(\bot\); for an expression where the parses also disagree semantically, consider \(\top \lor \top \land \bot\), which evaluates to \(\bot\) when parsed as \((\top \lor \top) \land \bot\) but to \(\top\) when parsed as \(\top \lor (\top \land \bot)\).)
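
To see this semantic disagreement concretely, here is a small sketch: an abstract syntax type for the boolean language together with an evaluator (the names BExp, Lit, and so on are ours), applied to the two parses of \(\top \lor \top \land \bot\):

data BExp
  = Lit Bool
  | Not BExp
  | Paren BExp
  | And BExp BExp
  | Or BExp BExp

-- The obvious evaluator; parentheses do not change the value.
eval :: BExp -> Bool
eval (Lit b)     = b
eval (Not e)     = not (eval e)
eval (Paren e)   = eval e
eval (And e1 e2) = eval e1 && eval e2
eval (Or  e1 e2) = eval e1 || eval e2

leftParse, rightParse :: BExp
leftParse  = And (Or (Lit True) (Lit True)) (Lit False) -- (T or T) and F
rightParse = Or (Lit True) (And (Lit True) (Lit False)) -- T or (T and F)

Here eval leftParse is False, while eval rightParse is True.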

3: \[ \dfrac{e_1\ \mathbf{OrEx}\quad e_2\ \mathbf{AndEx}}{e_1 \lor e_2\ \mathbf{OrEx}}\quad\dfrac{e\ \mathbf{AndEx}}{e\ \mathbf{OrEx}} \] \[ \dfrac{e_1\ \mathbf{AndEx}\quad e_2\ \mathbf{Atom}}{e_1 \land e_2\ \mathbf{AndEx}}\quad\dfrac{e\ \mathbf{Atom}}{e\ \mathbf{AndEx}} \] \[ \dfrac{c \in \{\top, \bot\}}{c\ \mathbf{Atom}} \quad \dfrac{e\ \mathbf{OrEx}}{\texttt{(}e\texttt{)}\ \mathbf{Atom}} \quad \dfrac{e\ \mathbf{Atom} }{\neg e\ \mathbf{Atom}} \]
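
One way to convince yourself that this second definition is unambiguous is to encode each judgement as its own Haskell type (a sketch; the constructor names are ours). Each injection rule (an AndEx is an OrEx, an Atom is an AndEx) becomes an explicit constructor, so every string corresponds to at most one value of type OrEx:

data OrEx
  = Or' OrEx AndEx
  | FromAnd AndEx

data AndEx
  = And' AndEx Atom
  | FromAtom Atom

data Atom
  = Lit' Bool
  | Not' Atom
  | Paren' OrEx

Under this encoding, \(\top \lor \top \land \bot\) has exactly one representation, namely Or' (FromAnd (FromAtom (Lit' True))) (And' (FromAtom (Lit' True)) (Lit' False)).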

4:

There would be five cases, one per rule (see the sketch after this list):

  1. Prove \(P(c)\) where \(c \in \{\top,\bot\}\).
  2. Prove \(P(\neg e)\) where \(e\ \mathbf{Bool}\) with inductive hypothesis \(P(e)\).
  3. Prove \(P(\texttt{(}e\texttt{)})\) where \(e\ \mathbf{Bool}\) with inductive hypothesis \(P(e)\).
  4. Prove \(P(e_1 \lor e_2)\) where \(e_1\ \mathbf{Bool}\) and \(e_2\ \mathbf{Bool}\) with inductive hypotheses \(P(e_1)\) and \(P(e_2)\).
  5. Prove \(P(e_1 \land e_2)\) where \(e_1\ \mathbf{Bool}\) and \(e_2\ \mathbf{Bool}\) with inductive hypotheses \(P(e_1)\) and \(P(e_2)\).
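
These five obligations mirror the five clauses of a structural recursion over the BExp type sketched in answer 2; in code, the recursive calls play the role of the inductive hypotheses. A sketch of that correspondence as a fold:

-- One argument per rule; the r-typed arguments of each case stand for
-- the inductive hypotheses for the subexpressions.
foldB :: (Bool -> r)   -- case 1: constants
      -> (r -> r)      -- case 2: negation (one I.H.)
      -> (r -> r)      -- case 3: parentheses (one I.H.)
      -> (r -> r -> r) -- case 4: disjunction (two I.H.s)
      -> (r -> r -> r) -- case 5: conjunction (two I.H.s)
      -> BExp -> r
foldB c n p d j = go
  where
    go (Lit b)     = c b
    go (Not e)     = n (go e)
    go (Paren e)   = p (go e)
    go (Or  e1 e2) = d (go e1) (go e2)
    go (And e1 e2) = j (go e1) (go e2)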

3 Simultaneous Induction

In the lecture, we discussed two alternative definitions of a language of parenthesised expressions:

\[ \dfrac{}{\varepsilon\ \mathbf{M}}{M_E}\quad \dfrac{s\ \mathbf{M}}{\texttt{(}s\texttt{)}\ \mathbf{M}}{M_N}\quad \dfrac{s_1\ \mathbf{M}\quad s_2\ \mathbf{M}}{s_1s_2\ \mathbf{M}}{M_J} \]

\[ \dfrac{}{\varepsilon\ \mathbf{L}}{L_E}\quad \dfrac{s\ \mathbf{L}}{\texttt{(}s\texttt{)}\ \mathbf{N}}{N_N}\quad \dfrac{s_1\ \mathbf{N}\quad s_2\ \mathbf{L}}{s_1s_2\ \mathbf{L}}{L_J} \]
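
As an aside on the difference between the two definitions: rule \(M_J\) splits a string at an arbitrary position, whereas the \(\mathbf{L}\)/\(\mathbf{N}\) rules are syntax-directed and transcribe directly into a deterministic recogniser. A sketch (the function names are ours), where each function consumes a prefix of the input and returns the remainder on success:

-- pL consumes the longest prefix derivable from L:
-- either an N followed by an L (L_J), or nothing (L_E).
pL :: String -> Maybe String
pL s = case pN s of
         Just rest -> pL rest  -- L_J
         Nothing   -> Just s   -- L_E

-- pN consumes a single block '(' L ')' (rule N_N).
pN :: String -> Maybe String
pN ('(':s) = case pL s of
               Just (')':rest) -> Just rest
               _               -> Nothing
pN _       = Nothing

-- A string is in the language iff pL consumes all of it,
-- e.g. inL "(())()" == True, inL "())" == False.
inL :: String -> Bool
inL s = pL s == Just ""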

In the lecture (and in this document), we showed that if \(s\ \mathbf{M}\), then \(s\ \mathbf{L}\). Show that the reverse direction of the implication is also true, and therefore \(\mathbf{M}\) defines the same language as \(\mathbf{L}\).

Hint: similar to the proof discussed in the lecture, it is not possible to prove this using straightforward rule induction. However, first try straightforward induction to see what is missing, then adjust your proof accordingly.

If \(s\ \mathbf{L}\), then either:

Base Case: \(s = \varepsilon\). In this case, we can directly infer \(s\ \mathbf{M}\), as

\[ \dfrac{ }{\varepsilon\ \mathbf{M}}{M_E} \]

Inductive Case: \(s = s_1 s_2\), for

  • (\(A_1\)) \(s_1\ \mathbf{N}\)
  • (\(A_2\)) \(s_2\ \mathbf{L}\)

As only one of these assumptions (\(A_2\)) mentions \(\mathbf{L}\), we only get one inductive hypothesis, that \(s_2\ \mathbf{M}\):

\[ \dfrac{\dfrac{???}{s_1\ \mathbf{M}} \quad \dfrac{ }{s_2\ \mathbf{M}}{I.H} }{s_1s_2\ \mathbf{M}}{M_J} \]

The problem with this approach is that our proof goal is too specific, which makes our induction hypothesis too weak.

Instead, we will prove a stronger statement: if either \(s\ \mathbf{L}\) or \(s\ \mathbf{N}\), then \(s\ \mathbf{M}\). This means we have to deal with the case arising from rule \(N_N\) as well as those from \(L_E\) and \(L_J\), but it also means that we get the second inductive hypothesis we need in the \(L_J\) case. The base case \(L_E\) goes through exactly as before.

Inductive Case: \(s = s_1 s_2\), for

  • \(s_1\ \mathbf{N}\)
  • \(s_2\ \mathbf{L}\)

We now get inductive hypotheses \(s_1\ \mathbf{M}\) and \(s_2\ \mathbf{M}\).

\[ \dfrac{\dfrac{ }{s_1\ \mathbf{M}}{I.H}_1 \quad \dfrac{ }{s_2\ \mathbf{M}}{I.H}_2 }{s_1s_2\ \mathbf{M}}{M_J} \]

Inductive Case: \(s = \texttt{(}s_1\texttt{)}\), for \(s_1\ \mathbf{L}\). We get the inductive hypothesis that \(s_1\ \mathbf{M}\).

\[ \dfrac{\dfrac{}{s_1\ \mathbf{M}}{I.H}}{\texttt{(}s_1\texttt{)}\ \mathbf{M}}{M_N} \]
