Neural networks are intended to analyse data in a manner similar to biological neural networks.
To understand what a neural net is, it helps to understand what a single neuron does.
Figure 2.8: The internal structure of a neuron
A simple neuron is shown in figure 2.8. As can be seen, the neuron has a set of inputs ($x_1, \ldots, x_n$) and a set of weights ($w_1, \ldots, w_n$). The products of these input--weight pairs ($w_i x_i$) are added together. A monotonically increasing function is then applied to the sum, such as the sigmoid function ($\sigma(s) = 1/(1 + e^{-s})$). The result is the neuron's output. Typically the weights are considered to be between 0 and 1, and the inputs and outputs similarly. However, the internal sum can be anywhere between 0 and the number of inputs.
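The computation described above can be sketched in a few lines of Python (a minimal illustration under the conventions just stated: inputs and weights in the 0--1 range, sigmoid output):

```python
import math

def neuron(inputs, weights):
    """A single neuron: weighted sum of the inputs, squashed by a sigmoid."""
    s = sum(w * x for w, x in zip(weights, inputs))  # can range from 0 to len(inputs)
    return 1.0 / (1.0 + math.exp(-s))                # sigmoid maps the sum into (0, 1)

print(neuron([0.5, 0.9], [0.3, 0.7]))
```

Note that with all-zero inputs the weighted sum is 0 and the sigmoid returns exactly 0.5 -- the ``maybe'' point between the two extremes.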
This can be thought of in many ways. For example, the neuron can be seen simply as an information condenser -- it takes a set of inputs and reduces them to a single output. The output itself can then be interpreted in a number of ways. If a simple binary threshold function is used instead of the sigmoid, the neuron can be thought of as a ``decision-maker'' -- given a set of inputs, it uses its weights to decide whether to output a ``0'' or a ``1''. The sigmoid function allows a ``maybe'' guess, somewhere between a ``0'' and a ``1''.
Another way of looking at the function of the neuron is that it partitions an n-dimensional space, where n is the number of inputs to the neuron. The weights describe an (n-1)-dimensional hyperplane through the origin and, depending on the values of the inputs, the point they represent lies either ``above'' or ``below'' the plane.
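This geometric view amounts to checking the sign of a dot product, as the following sketch shows (an illustration only; the function name is invented here):

```python
def side_of_hyperplane(inputs, weights):
    """The weight vector is the normal of a hyperplane through the origin;
    the sign of the dot product says which side the input point lies on."""
    dot = sum(w * x for w, x in zip(weights, inputs))
    return "above" if dot > 0 else "below"

# With weights (-0.5, 0.8), the point (1, 0) falls on one side
# of the plane and (0, 1) on the other:
print(side_of_hyperplane([1, 0], [-0.5, 0.8]))  # below
print(side_of_hyperplane([0, 1], [-0.5, 0.8]))  # above
```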
A single neuron in isolation is interesting, but neurons are usually combined to form neural nets, increasing their power by orders of magnitude, as shown in figure 2.9. Typically, the neurons are arranged layer by layer. The first layer, known as the input layer, takes external input. Each hidden layer then takes as input the outputs of the layer before it; there may be several hidden layers before the output layer -- the layer that connects back to the real world -- is reached.
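The layer-by-layer forward pass can be sketched as follows (a minimal feed-forward example; the network sizes and weight values are invented for illustration):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer_output(inputs, layer_weights):
    """One layer: each neuron weights every output of the layer below."""
    return [sigmoid(sum(w * x for w, x in zip(weights, inputs)))
            for weights in layer_weights]

def forward(inputs, network):
    """Feed the external inputs through each layer in turn."""
    for layer_weights in network:
        inputs = layer_output(inputs, layer_weights)
    return inputs

# A hypothetical net: 2 inputs, one hidden layer of 3 neurons, 1 output.
net = [
    [[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]],  # hidden layer (3 neurons, 2 weights each)
    [[0.3, 0.6, 0.1]],                     # output layer (1 neuron, 3 weights)
]
print(forward([1.0, 0.0], net))
```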
The real power of a neural network comes from the way the weights are set. Under supervised learning, the weights start out random and examples are fed through the net. The outputs are compared to what they should be and, using one of a variety of techniques (such as back-propagation), the weights of the neurons are adjusted. In this way the neural net ``learns'' to produce the desired output, and this is where it becomes useful.
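For the single-neuron case this learning loop reduces to plain gradient descent on the error, which the following sketch shows (an illustration only -- full back-propagation additionally passes the error back through the hidden layers; the bias trick of a constant third input is an assumption of this example):

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_neuron(examples, n_inputs, rate=1.0, epochs=5000):
    """Supervised learning for one sigmoid neuron: start with random
    weights, then repeatedly nudge them against the error gradient.
    `examples` is a list of (inputs, target) pairs."""
    weights = [random.random() for _ in range(n_inputs)]
    for _ in range(epochs):
        for inputs, target in examples:
            out = sigmoid(sum(w * x for w, x in zip(weights, inputs)))
            # Gradient of the squared error (out - target)^2 / 2
            # with respect to the weighted sum:
            delta = (out - target) * out * (1.0 - out)
            weights = [w - rate * delta * x for w, x in zip(weights, inputs)]
    return weights

# Learn logical OR; the constant third input plays the role of a bias.
random.seed(0)
examples = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
w = train_neuron(examples, 3)
for inputs, target in examples:
    out = sigmoid(sum(wi * x for wi, x in zip(w, inputs)))
    print(inputs[:2], "->", round(out))
```

After training, the rounded outputs match the OR truth table even though the weights were random to begin with.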
There are a number of ways of using a neural net as a classifier. One is to have an output neuron for each possible class, so that each neuron outputs a value ``closer'' to 1 when its class is indicated. A problem arises when more than one neuron, or none at all, claims the class. A second, less frequently used option is to encode the classes in a binary manner. This alleviates the above problem, but may result in unusual behaviour, since the ordering chosen for the encoding affects the accuracy of the output function.
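The two output encodings can be illustrated with small helpers (the function names here are invented for the example):

```python
def one_hot(class_index, n_classes):
    """One output neuron per class: target is 1 for that class, 0 elsewhere."""
    return [1 if i == class_index else 0 for i in range(n_classes)]

def binary_code(class_index, n_bits):
    """Binary encoding: only about log2(n_classes) output neurons are needed."""
    return [(class_index >> b) & 1 for b in reversed(range(n_bits))]

def decode_one_hot(outputs):
    """Resolve competing claims by picking the neuron that fired most strongly."""
    return max(range(len(outputs)), key=lambda i: outputs[i])

print(one_hot(2, 4))                          # [0, 0, 1, 0]
print(binary_code(2, 2))                      # [1, 0]
print(decode_one_hot([0.1, 0.7, 0.3, 0.2]))   # 1
```

Taking the strongest output, as `decode_one_hot` does, is one common way of handling the case where several neurons claim the class at once.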
It will be noted that neural nets are extremely ``tweakable'' -- that is to say, there is a great number of variables that can be played with, such as the output encoding, the number of hidden layers and the number of neurons in each, the interconnection of the neurons (there are, for example, feed-back neural networks), the neuron output function, the error function used by the learning algorithm, and several others. What is worse, however, is that there is, so far, no structured way of choosing the correct configuration for a particular task -- it is essentially a black art.
Figure 2.9: A neural network -- a collection of neurons connected together.