Introduction

Sequential covering is the most popular algorithm implementing Rule Learning. In the scope of this page, only the version for propositional training examples is discussed.
A covering algorithm, in the context of propositional learning systems, is an algorithm that develops a cover for the set of positive examples - that is, a set of hypotheses that account for all the positive examples but none of the negative examples.
This is called sequential covering because it learns one rule at a time and repeats this process to gradually cover the full set of positive examples. A common and effective approach to Learn-One-Rule is beam search.
The algorithm, given a set of examples, proceeds as follows:

1. Learn one rule with high accuracy that covers some of the positive examples.
2. Remove the positive examples covered by that rule.
3. Repeat until no positive examples remain, or no acceptable rule can be found.
4. Return the set of learned rules.
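The covering loop and a beam-search Learn-One-Rule can be sketched in Python. Everything below is an illustrative assumption, not part of this page: rules are modelled as attribute-value dictionaries, rule quality is scored as positives covered minus negatives covered, and the toy data uses the Colour/Shape attributes from the example later on this page.

```python
# Sketch of sequential covering with a beam-search Learn-One-Rule.
# Data, names, and the scoring heuristic are illustrative assumptions.

def covers(rule, example):
    """A rule is a dict of attribute -> required value; {} covers everything."""
    return all(example.get(attr) == value for attr, value in rule.items())

def learn_one_rule(pos, neg, attributes, beam_width=3):
    """Beam search from the empty rule, adding one condition at a time.
    Returns the highest-coverage rule that covers no negative examples."""
    beam = [{}]
    best, best_covered = None, 0
    while beam:
        candidates = []
        for rule in beam:
            for attr in attributes:
                if attr in rule:
                    continue
                for value in sorted({e[attr] for e in pos}):
                    new_rule = dict(rule)
                    new_rule[attr] = value
                    p = sum(covers(new_rule, e) for e in pos)
                    n = sum(covers(new_rule, e) for e in neg)
                    if p == 0:
                        continue
                    if n == 0 and p > best_covered:
                        best, best_covered = new_rule, p
                    candidates.append((p - n, n, new_rule))
        # Keep the top candidates that still cover negatives (they need
        # further specialisation); pure rules were already recorded above.
        candidates.sort(key=lambda c: (-c[0], c[1]))
        beam = [r for _, n, r in candidates[:beam_width] if n > 0]
    return best

def sequential_covering(pos, neg, attributes):
    """Learn rules one at a time, removing the positives each rule covers."""
    rules, remaining = [], list(pos)
    while remaining:
        rule = learn_one_rule(remaining, neg, attributes)
        if rule is None:  # no pure rule found: leave the rest uncovered
            break
        rules.append(rule)
        remaining = [e for e in remaining if not covers(rule, e)]
    return rules

# Hypothetical toy data, consistent with the rule set shown on this page.
positives = [{"Colour": "Red", "Shape": "Square"},
             {"Colour": "Red", "Shape": "Triangle"},
             {"Colour": "Green", "Shape": "Square"},
             {"Colour": "Blue", "Shape": "Triangle"}]
negatives = [{"Colour": "Green", "Shape": "Triangle"},
             {"Colour": "Blue", "Shape": "Square"}]
rules = sequential_covering(positives, negatives, ["Colour", "Shape"])
```

On this toy data the loop first learns the broadest pure rule (Colour = Red), removes the two positives it covers, and then learns more specific rules for the remaining positives.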
Example

An example of a set of propositional examples in this system is as follows:
The final set of rules is as follows:

Expensive = Yes if:
    Colour = Red                              (covers examples 1, 7)
    Or (Colour = Green & Shape = Square)      (covers example 6)
    Or (Colour = Blue & Shape = Triangle)     (covers example 2)
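Transcribed into code, the rule set above is a simple boolean expression. The function and parameter names below are hypothetical, chosen only for illustration:

```python
def expensive(colour, shape):
    # Direct transcription of the rule set above; the function and
    # parameter names are illustrative, not from the original page.
    return (colour == "Red"
            or (colour == "Green" and shape == "Square")
            or (colour == "Blue" and shape == "Triangle"))
```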
Properties

One characteristic of this algorithm is that it requires high accuracy, meaning each prediction it does make should be correct with high probability. Another is that it may have low coverage, meaning it does not necessarily make a prediction for every example; some examples may be left unclassified by the algorithm.
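These two properties can be made concrete with a small sketch: the classifier below answers "Yes" only when some learned rule fires and otherwise abstains, so it is accurate where it answers but may leave examples unclassified. The rules and examples are assumptions, mirroring the example on this page:

```python
def predict(rules, example):
    # Predict "Yes" if any learned rule covers the example; otherwise
    # abstain (return None): this is the possibly-low-coverage behaviour.
    for rule in rules:
        if all(example.get(attr) == value for attr, value in rule.items()):
            return "Yes"
    return None

# Hypothetical learned rules, matching the example on this page.
rules = [{"Colour": "Red"},
         {"Colour": "Green", "Shape": "Square"},
         {"Colour": "Blue", "Shape": "Triangle"}]
```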