S E M I N A R
Indian Statistical Institute
The response of a multilayer perceptron (MLP) network at points which are far away from the boundary of its training data is generally not reliable. Unfortunately, this issue is ignored in the literature. Ideally, a network should not respond to a data point which lies far away from the boundary of its training data. This strange behavior of MLP-type networks could be disastrous in many critical application areas such as medicine, defence, and mining. We propose a new training scheme for MLPs as classifiers which ensures well-behaved generalization. Our scheme involves training a subnet for each class present in the training data. Training each subnet requires data from the class that the subnet represents along with some points outside the "boundary" of that class. For this purpose we propose easy but approximate methods to generate points outside the boundary of a pattern class: one method is suitable for perceptron-type networks, while the second is for radial basis function type networks. The trained subnets are then merged to solve the multi-class classification problem. We show through simulations that an MLP trained by our method does not respond to points which lie outside the boundary of its training sample. Unlike conventional training schemes, our approach can properly deal with overlapped classes, giving better insight into the data. In addition, this scheme enables incremental training of an MLP, i.e., the MLP can learn new knowledge without forgetting the old knowledge.
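The abstract does not spell out how points outside a class boundary are generated. As a minimal sketch only, one simple approximation (not necessarily the speaker's method) is to push each training sample radially away from its class centroid by an inflation factor; the function name, the `scale` factor, and the jitter magnitude below are all hypothetical choices for illustration:

```python
import numpy as np

def generate_outside_points(class_data, scale=1.5, rng=None):
    """Approximate points lying outside the boundary of a pattern class.

    Each sample is pushed radially away from the class centroid by the
    factor `scale` (> 1), with small Gaussian jitter so the generated
    points are not an exact scaled copy of the class.
    """
    rng = np.random.default_rng(rng)
    centroid = class_data.mean(axis=0)
    directions = class_data - centroid  # radial offsets from the centroid
    jitter = rng.normal(0.0, 0.05, size=class_data.shape)
    return centroid + scale * directions + jitter

# Example: 2-D class cluster and points generated just outside it.
X = np.random.default_rng(0).normal(size=(50, 2)) + 3.0
outside = generate_outside_points(X, scale=1.5, rng=1)
```

In a per-class subnet scheme such as the one described, each subnet could then be trained to output "class" on `X` and "not class" on `outside`, so that the subnet's positive response is confined to the neighborhood of its own class.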
DATE: July 01, 2003, Tuesday @ 14:30