## Sequential Covering

### Introduction

Sequential covering is the most popular algorithm implementing rule learning. Within the scope of this page, only the algorithm for propositional training examples is discussed.

A covering algorithm, in the context of propositional learning systems, is an algorithm that develops a cover for the set of positive examples: a set of hypotheses (rules) that together account for all of the positive examples but none of the negative examples.

The algorithm is called sequential covering because it learns one rule at a time and repeats this process to gradually cover the full set of positive examples. A widely used and effective approach to Learn-One-Rule is beam search.

### Algorithm

The algorithm, given a set of examples:

1. Initialise the Cover to the empty set.
2. Use Learn-One-Rule to find the best hypothesis (the Just-Learnt-Rule).
3. If the Just-Learnt-Rule satisfies the performance threshold:
   - Add the Just-Learnt-Rule to the Cover.
   - Remove the examples covered by the Just-Learnt-Rule.
   - Go to step 2.
4. Sort the rules in the Cover by their performance over the examples.
5. Return the Cover.
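The steps above can be sketched in Python on the Expensive dataset from the Example section below. The function names (`learn_one_rule`, `sequential_covering`), the beam width, and the choice of "covers positives only" as the performance threshold are illustrative assumptions, not part of the original page.

```python
# A minimal sketch of sequential covering with a beam-search Learn-One-Rule.
# Dataset: the "Expensive" examples from this page.
COLS = ["Id", "Size", "Colour", "Shape", "Weight", "Expensive"]
ROWS = [
    (1, "Big",   "Red",   "Square",   "Heavy", "Yes"),
    (2, "Small", "Blue",  "Triangle", "Light", "Yes"),
    (3, "Small", "Blue",  "Square",   "Light", "No"),
    (4, "Big",   "Green", "Triangle", "Heavy", "No"),
    (5, "Big",   "Blue",  "Square",   "Light", "No"),
    (6, "Big",   "Green", "Square",   "Heavy", "Yes"),
    (7, "Small", "Red",   "Triangle", "Light", "Yes"),
]
DATA = [dict(zip(COLS, r)) for r in ROWS]
ATTRS = ["Size", "Colour", "Shape", "Weight"]

def covers(rule, ex):
    """A rule is a tuple of (attribute, value) tests; all must match."""
    return all(ex[a] == v for a, v in rule)

def learn_one_rule(examples, beam_width=3):
    """Beam search: specialise candidate rules one test at a time,
    scoring by (accuracy on positives, number of positives covered),
    until a rule covering positives only is found."""
    beam = [()]  # start from the empty (most general) rule
    while beam:
        candidates = []
        for rule in beam:
            used = {a for a, _ in rule}
            for a in ATTRS:
                if a in used:
                    continue
                for v in {ex[a] for ex in examples}:
                    candidates.append(rule + ((a, v),))
        scored = []
        for rule in candidates:
            matched = [ex for ex in examples if covers(rule, ex)]
            if matched:
                pos = sum(ex["Expensive"] == "Yes" for ex in matched)
                scored.append((pos / len(matched), pos, rule))
        if not scored:
            return None
        scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
        if scored[0][0] == 1.0:  # threshold: covers no negatives
            return scored[0][2]
        beam = [r for _, _, r in scored[:beam_width]]
    return None

def sequential_covering():
    """Learn one rule, remove the examples it covers, repeat."""
    remaining = list(DATA)
    cover = []
    while any(ex["Expensive"] == "Yes" for ex in remaining):
        rule = learn_one_rule(remaining)
        if rule is None:
            break
        cover.append(rule)
        remaining = [ex for ex in remaining if not covers(rule, ex)]
    return cover

for rule in sequential_covering():
    print(" & ".join(f"{a}={v}" for a, v in rule))
```

On this dataset the first learned rule is Colour=Red (the only single test covering positives only); ties among later candidates mean the remaining two rules may come out in either order.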

### Example

An example of a set of propositional examples in this system is as follows:

| Id | Size  | Colour | Shape    | Weight | Expensive |
|----|-------|--------|----------|--------|-----------|
| 1  | Big   | Red    | Square   | Heavy  | Yes       |
| 2  | Small | Blue   | Triangle | Light  | Yes       |
| 3  | Small | Blue   | Square   | Light  | No        |
| 4  | Big   | Green  | Triangle | Heavy  | No        |
| 5  | Big   | Blue   | Square   | Light  | No        |
| 6  | Big   | Green  | Square   | Heavy  | Yes       |
| 7  | Small | Red    | Triangle | Light  | Yes       |
The final set of rules is as follows:

Expensive = Yes if:

- Colour = Red (covers examples 1, 7)
- or (Colour = Green & Shape = Square) (covers example 6)
- or (Colour = Blue & Shape = Triangle) (covers example 2)
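The coverage of each final rule can be checked mechanically. A small sketch (the dictionary encoding of the table and rules is mine):

```python
# Which example Ids does each final rule cover?
COLS = ["Id", "Size", "Colour", "Shape", "Weight", "Expensive"]
ROWS = [
    (1, "Big",   "Red",   "Square",   "Heavy", "Yes"),
    (2, "Small", "Blue",  "Triangle", "Light", "Yes"),
    (3, "Small", "Blue",  "Square",   "Light", "No"),
    (4, "Big",   "Green", "Triangle", "Heavy", "No"),
    (5, "Big",   "Blue",  "Square",   "Light", "No"),
    (6, "Big",   "Green", "Square",   "Heavy", "Yes"),
    (7, "Small", "Red",   "Triangle", "Light", "Yes"),
]
DATA = [dict(zip(COLS, r)) for r in ROWS]

RULES = [
    {"Colour": "Red"},
    {"Colour": "Green", "Shape": "Square"},
    {"Colour": "Blue", "Shape": "Triangle"},
]

def covered_ids(rule):
    """Ids of examples matching every test in the rule."""
    return [ex["Id"] for ex in DATA
            if all(ex[a] == v for a, v in rule.items())]

for rule in RULES:
    print(rule, "->", covered_ids(rule))
# {'Colour': 'Red'} -> [1, 7]
# {'Colour': 'Green', 'Shape': 'Square'} -> [6]
# {'Colour': 'Blue', 'Shape': 'Triangle'} -> [2]
```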

### Properties

One characteristic of this algorithm is high accuracy: the predictions it does make are correct with high probability. Another is possibly low coverage: it does not make a prediction for every example, so some examples may be left unclassified by the algorithm.
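The low-coverage property can be made concrete with a classifier that abstains when no rule fires. A minimal sketch using the final rules from the Example section (the `classify` helper and the use of `None` for "no prediction" are my own conventions):

```python
# Rules predict only the positive class; uncovered examples get no prediction.
COLS = ["Id", "Size", "Colour", "Shape", "Weight", "Expensive"]
ROWS = [
    (1, "Big",   "Red",   "Square",   "Heavy", "Yes"),
    (2, "Small", "Blue",  "Triangle", "Light", "Yes"),
    (3, "Small", "Blue",  "Square",   "Light", "No"),
    (4, "Big",   "Green", "Triangle", "Heavy", "No"),
    (5, "Big",   "Blue",  "Square",   "Light", "No"),
    (6, "Big",   "Green", "Square",   "Heavy", "Yes"),
    (7, "Small", "Red",   "Triangle", "Light", "Yes"),
]
DATA = [dict(zip(COLS, r)) for r in ROWS]

RULES = [
    {"Colour": "Red"},
    {"Colour": "Green", "Shape": "Square"},
    {"Colour": "Blue", "Shape": "Triangle"},
]

def classify(ex):
    """Return "Yes" if any rule fires; otherwise abstain (None)."""
    for rule in RULES:
        if all(ex[a] == v for a, v in rule.items()):
            return "Yes"
    return None

for ex in DATA:
    print(ex["Id"], classify(ex))
# Examples 1, 2, 6, 7 -> "Yes"; examples 3, 4, 5 get no prediction (None).
```

Every prediction the rules do make is correct (high accuracy), but examples 3, 4, and 5 are not classified at all (low coverage).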

Developed by Canh Hao Nguyen and Hong Chung Nguyen.