Machine Learning

In 1956, Bruner, Goodnow and Austin published their book A Study of Thinking, which became a landmark in psychology and would later have a major impact on machine learning. The experiments reported by Bruner, Goodnow and Austin were directed towards understanding a human's ability to categorise and how categories are learned.
We begin with what seems a paradox. The world of experience of any normal man is composed of a tremendous array of discriminably different objects, events, people, impressions... But were we to utilize fully our capacity for registering the differences in things and to respond to each event encountered as unique, we would soon be overwhelmed by the complexity of our environment... The resolution of this seeming paradox... is achieved by man's capacity to categorize. To categorise is to render discriminably different things equivalent, to group objects and events and people around us into classes... The process of categorizing involves... an act of invention... If we have learned the class "house" as a concept, new exemplars can be readily recognised. The category becomes a tool for further use. The learning and utilization of categories represents one of the most elementary and general forms of cognition by which man adjusts to his environment.
The first question that they had to deal with was that of representation: what is a concept? They assumed that objects and events could be described by a set of attributes and were concerned with how inferences could be drawn from attributes to class membership. Categories were considered to be of three types: conjunctive, disjunctive and relational.
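The attribute-based view of concepts can be made concrete with a small sketch. The example below is a hypothetical illustration (the attribute names and helper function are invented, not from Bruner, Goodnow and Austin): an object is a set of attribute-value pairs, and a conjunctive concept is a rule that admits an object only when every required attribute test is satisfied.

```python
# Hypothetical sketch of an attribute-based concept representation.
# An object is a dict of attribute-value pairs; a conjunctive concept
# is a rule requiring all of its attribute tests to hold at once.

def make_conjunctive_concept(required):
    """Return a classifier that is True iff the object matches
    every (attribute, value) pair in `required`."""
    def classify(obj):
        return all(obj.get(attr) == value for attr, value in required.items())
    return classify

# "house" as a toy conjunctive concept over invented attributes
is_house = make_conjunctive_concept({"has_roof": True, "has_walls": True})

is_house({"has_roof": True, "has_walls": True, "colour": "red"})  # → True
is_house({"has_roof": True, "has_walls": False})                  # → False
```

A disjunctive concept would instead require at least one of several such rules to hold, and a relational concept a relation between attribute values; the conjunctive case is the one Bruner, Goodnow and Austin studied most closely.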
...when one learns to categorise a subset of events in a certain way, one is doing more than simply learning to recognise instances encountered. One is also learning a rule that may be applied to new instances. The concept or category is basically this "rule of grouping" and it is such rules that one constructs in forming and attaining concepts.
The notion of a rule as an abstract representation of a concept in the human mind came to be questioned by psychologists and there is still no good theory to explain how we store concepts. However, the same questions about the nature of representation arise in machine learning, for the choice of representation heavily determines the nature of a learning algorithm. Thus, one critical point of comparison among machine learning algorithms is the method of knowledge representation employed.

In this section we take a quick tour of a variety of learning algorithms and see how the method of representing knowledge is crucial to the design of each.

BIBLIOGRAPHY

Aha, D. W., Kibler, D., & Albert, M. K. (1991). Instance-Based Learning Algorithms. Machine Learning, 6(1), 37-66.

Banerji, R. B. (1980). Artificial Intelligence: A Theoretical Approach. New York: North Holland.

Bruner, J. S., Goodnow, J. J., & Austin, G. A. (1956). A Study of Thinking. New York: Wiley.

Buntine, W. (1988). Generalized Subsumption and its Applications to Induction and Redundancy. Artificial Intelligence, 36, 149-176.

Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor, Michigan: University of Michigan Press.

King, R. D., Lewis, R. A., Muggleton, S., & Sternberg, M. J. E. (1992). Drug design by machine learning: the use of inductive logic programming to model the structure-activity relationship of trimethoprim analogues binding to dihydrofolate reductase. Proceedings of the National Academy of Sciences, 89.

Michalski, R. S. (1973). Discovering Classification Rules Using Variable Valued Logic System VL1. In Third International Joint Conference on Artificial Intelligence. (pp. 162-172).

Michalski, R. S. (1983). A Theory and Methodology of Inductive Learning. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine Learning: An Artificial Intelligence Approach. Palo Alto: Tioga.

Michie, D., & Chambers, R. A. (1968). Boxes: An Experiment in Adaptive Control. In E. Dale & D. Michie (Eds.), Machine Intelligence 2. Edinburgh: Oliver and Boyd.

Muggleton, S., & Buntine, W. (1988). Machine invention of first-order predicates by inverting resolution. In R. S. Michalski, T. M. Mitchell, & J. G. Carbonell (Eds.), Proceedings of the Fifth International Machine Learning Conference. (pp. 339-352). Ann Arbor, Michigan: Morgan Kaufmann.

Muggleton, S., & Feng, C. (1990). Efficient induction of logic programs. In First Conference on Algorithmic Learning Theory. Tokyo: Ohmsha.

Odetayo, M. (1988). Genetic Algorithms for Control of a Dynamic Physical System. M.Sc. Thesis, Strathclyde University.

Plotkin, G. D. (1970). A Note on Inductive Generalization. In B. Meltzer & D. Michie (Eds.), Machine Intelligence 5. (pp. 153-163). Edinburgh University Press.

Quinlan, J. R. (1979). Discovering rules by induction from large collections of examples. In D. Michie (Ed.), Expert Systems in the Micro-Electronic Age. Edinburgh: Edinburgh University Press.

Quinlan, J. R. (1987). Generating production rules from decision trees. In Proceedings of the Tenth International Joint Conference on Artificial Intelligence. (pp. 304-307). San Mateo, CA: Morgan Kaufmann.

Quinlan, J. R. (1990). Learning Logical Definitions from Relations. Machine Learning, 5, 239-266.

Quinlan, J. R. (1993). C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann.

Reynolds, J. C. (1970). Transformational Systems and the Algebraic Structure of Atomic Formulas. In B. Meltzer & D. Michie (Eds.), Machine Intelligence 5. (pp. 153-163).

Robinson, J. A. (1965). A Machine Oriented Logic Based on the Resolution Principle. Journal of the ACM, 12(1), 23-41.

Sammut, C. A., & Banerji, R. B. (1986). Learning Concepts by Asking Questions. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine Learning: An Artificial Intelligence Approach, Vol 2. (pp. 167-192). Los Altos, California: Morgan Kaufmann.

Shapiro, E. Y. (1981). Inductive Inference of Theories From Facts (Technical Report No. 192). Yale University.
