From: Kadi on
Hi

I have training data (P) of size 19 x 1516 (1516 samples, 19 features) and target class information (T) for the training data of size 1 x 1516. Every sample in the training set has a target of either 0 or 1. I applied the newff backpropagation neural network to the training data as follows:

NETff = newff(P,T,[50],{'tansig'},'trainbfg','learngdm','msereg'); % one hidden layer, 50 neurons
NETff.trainParam.epochs = 1000;
NETff.trainParam.goal = 0.0001;
NETff = train(NETff,P,T);
Yff = sim(NETff,P);

To test the trained network I used a test sample that had not been used for training:

Y=sim(NETff,testdata);

but when I looked at the values of Y, they were not always between 0 and 1.
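This is expected with newff's defaults: the output layer uses the 'purelin' transfer function, which is unbounded, and the targets are internally rescaled by 'mapminmax'. One possible way to constrain the outputs to (0,1) is a 'logsig' output layer with the default output processing removed. This is only a sketch, assuming the Neural Network Toolbox newff API and the variable names from the post:

```matlab
% Sketch: 'logsig' has range (0,1), so sim() outputs stay in that interval
% once the default mapminmax output processing is removed.
NETff = newff(P,T,[50],{'tansig','logsig'},'trainbfg','learngdm','msereg');
NETff.outputs{end}.processFcns = {};   % drop mapminmax on the target side
NETff.trainParam.epochs = 1000;
NETff.trainParam.goal = 0.0001;
NETff = train(NETff,P,T);
Y = sim(NETff,testdata);               % bounded to (0,1) by logsig
```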

Instead of using the default 'purelin' transfer function for the output layer, I tried changing it to 'softmax':

NETff = newff(P,T,[50],{'tansig'},'trainbfg','learngdm','msereg');
NETff.layers{end}.transferFcn = 'softmax'; % change the output-layer transfer function
NETff.trainParam.epochs = 1000;
NETff.trainParam.goal = 0.0001;
NETff = train(NETff,P,T);
Yff = sim(NETff,P);

but then all the samples in the negative class (class 0) were misclassified.
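One likely reason, offered here as a possible explanation rather than a confirmed diagnosis: softmax normalizes across the neurons of a layer, so with a single output neuron it always returns 1 regardless of the input, pushing every sample toward class 1. A quick check with the toolbox softmax function:

```matlab
% softmax(N) = exp(N) ./ sum(exp(N)) per column.
% With one neuron the sum is just exp(N) itself, so the result is always 1:
softmax(5)        % -> 1
softmax(-3)       % -> 1
% softmax only discriminates when there are multiple output neurons,
% e.g. one per class; here the two values sum to 1:
softmax([1; 3])
```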

I found the following thread on MATLAB Central that discusses a similar problem:
http://www.mathworks.com/matlabcentral/newsreader/view_thread/270041#707733

So instead of changing the output transfer function to softmax, I tried removing the output processing functions:

NETff.outputs{end}.processFcns = {}; % remove the default output processing

but the output was still not between 0 and 1.
Any thoughts on why?
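A possible explanation, again hedged as an assumption about newff's defaults: removing the output processing only stops the target rescaling; the output layer itself is still 'purelin', whose range is unbounded, so nothing constrains sim() to [0,1]. The settings can be inspected directly, and for 0/1 targets a simple threshold on the raw output is a common workaround:

```matlab
% Inspect the trained net's output configuration:
NETff.layers{end}.transferFcn    % 'purelin' -> unbounded output range
NETff.outputs{end}.processFcns   % {} after removing the processing

% Workaround for 0/1 targets: threshold the raw output at 0.5
Yclass = sim(NETff,testdata) > 0.5;
```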

I'd appreciate any suggestions.
Thanks
Kadi