From: pipa on 10 May 2010 13:55

Hi All,

I am using the NEWFF function to approximate a smooth surface that is more or less monotonically decreasing. It is a 2D surface, and I have only 24 (X, Y) points on the surface for which I know Z. The value of Z at X=0 (for any Y) is also known, so I populate the line at X=0 with this known value of Z for all Y, in increments of 0.5 (Y goes from 0 to 90). This gives me a lot of data points as input.

The input data is divided into training and test data. Most of the test data comes from the part of the surface away from X=0: I choose 10 points (10 X-Y pairs) as test points and use the rest (including the many points at X=0) as training data. Based on Greg's I-H-O criterion, I choose the number of hidden neurons.

The final surface I get looks similar to the data provided to the network, but the error at the test points is still large. I run this single-hidden-layer network with random initializations from 0 to 1000 and choose the network that produces the minimum error on the test data.

Can anybody suggest why the error at the test points is still large?

Thanks!
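For reference, here is a minimal sketch of the workflow described above, assuming the R2007b-style NEWFF calling syntax. All variable names (Xtrain, Ztest, etc.), the hidden-layer size, and the data layout are placeholders, not taken from the original post:

```
% Sketch only: P is a 2-by-N matrix of (X;Y) inputs, T a 1-by-N row of Z
% targets; Xtest/Ytest/Ztest hold the 10 held-out points described above.
Ygrid = 0:0.5:90;                 % densely sampled boundary line at X = 0

H = 5;                            % hidden units from the I-H-O rule (placeholder)
bestErr = Inf;
for trial = 1:1000                % random re-initializations, keep the best
    net = newff(P, T, H);         % single hidden layer (newer newff syntax)
    net = init(net);              % fresh random weights each trial
    net = train(net, P, T);
    Zhat = sim(net, [Xtest; Ytest]);
    err  = mse(Zhat - Ztest);     % error on the held-out test points
    if err < bestErr
        bestErr = err;
        bestNet = net;
    end
end
```

Note that selecting the network by minimum test-set error, as in the loop above, effectively uses the test points for model selection, so the reported test error is no longer an unbiased estimate of generalization.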