Murakami and Taguchi [MT91] investigated the use of recurrent neural nets for sign language recognition. A recurrent neural net is a neural net with a ``feedback'' loop: in this case, a copy of the hidden layer's activations is stored in the context layer, which is then fed back into the hidden layer as additional input on the next cycle. This is shown in figure 2.15.
Figure 2.15: The structure of the recurrent neural network used by Murakami and Taguchi.
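The feedback structure described above is that of an Elman-style network. The following is a minimal sketch of one forward cycle, assuming a simple sigmoid activation; the layer sizes match those reported for Murakami and Taguchi's system, but the weights and glove frames here are placeholders, not their trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes as reported by Murakami and Taguchi; the weights below
# are random placeholders, not the trained network.
n_in, n_hid, n_out = 93, 150, 10

W_in = rng.normal(scale=0.1, size=(n_hid, n_in))    # input -> hidden
W_ctx = rng.normal(scale=0.1, size=(n_hid, n_hid))  # context -> hidden
W_out = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(x, context):
    """One cycle of an Elman-style net: the hidden activations are
    computed from the input plus the stored context, then copied out
    to serve as the context on the next cycle."""
    hidden = sigmoid(W_in @ x + W_ctx @ context)
    output = sigmoid(W_out @ hidden)
    return output, hidden  # hidden becomes the next context

context = np.zeros(n_hid)
for frame in np.zeros((3, n_in)):  # dummy glove frames
    out, context = step(frame, context)
```

The key design point is that the context layer gives the network a memory of previous frames without any change to the standard feed-forward training machinery.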
First, they trained the system on 42 handshapes in the Japanese finger alphabet, using a VPL Dataglove. This system achieved success rates of approximately 98 per cent.
Then, they managed to recognise continuous finger-spelling by only accepting a symbol as positive when a certain threshold was exceeded in the output layer of the network.
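This acceptance rule can be sketched as follows; the threshold value of 0.9 is an assumption for illustration, as the paper's exact figure is not given here.

```python
import numpy as np

THRESHOLD = 0.9  # assumed value; the actual threshold may differ

def classify(outputs, threshold=THRESHOLD):
    """Accept a symbol only when the winning output node exceeds the
    threshold; otherwise report no decision (None), so that transitional
    hand postures between letters are not misrecognised."""
    best = int(np.argmax(outputs))
    return best if outputs[best] > threshold else None
```

For example, `classify(np.array([0.1, 0.95, 0.2]))` accepts symbol 1, while an ambiguous output such as `[0.4, 0.5, 0.3]` is rejected.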
They then chose ten distinct signs (mostly in pairs -- such as father and mother, brother and sister, memorise and forget, skilled and unskilled, like and hate -- which in Japanese Sign Language are very distinct gestures).
Their neural network was very large: 93 neurons in the input layer (encoding the past three frames, each containing finger data, absolute and relative position, and orientation information), with 150 in the hidden layer and a mirroring 150 in the context layer.
It recognised the beginning of signs by checking if it met any of a set of postures, and would stop when one of the output nodes consistently output a particular value.
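The stopping condition can be sketched as a scan over the network's per-frame outputs; the level and run length below are illustrative assumptions, since the paper's exact criterion for ``consistently'' is not reproduced here.

```python
import numpy as np

def detect_sign(output_seq, level=0.9, run=5):
    """Return the index of the output node that has exceeded `level` for
    `run` consecutive frames, or None if no node does so.
    (`level` and `run` are assumed values for illustration.)"""
    streak_node, streak_len = None, 0
    for out in output_seq:
        node = int(np.argmax(out))
        if out[node] > level:
            # extend the streak if the same node keeps winning
            streak_len = streak_len + 1 if node == streak_node else 1
            streak_node = node
            if streak_len >= run:
                return node
        else:
            streak_node, streak_len = None, 0
    return None
```

A sequence in which one node stays high for several frames yields that node's sign; noisy, indecisive output yields no detection.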
It was quite successful, with an accuracy of approximately 96 per cent. However, there were only a small number of signs (10), and it is not clear whether the technique would have generalised well: the system already had in excess of 400 neurons, and scaling to a larger vocabulary would likely have required even more, slowing it down further.