From: Manish Chowdhury on
Dear Friends
I have implemented a Multilayer Perceptron (MLP) with one hidden layer of 20 nodes (chosen by a rule of thumb), but whenever I try to train for 500, 1000, or 1500 epochs, training stops after 5 or 6 iterations and the command window shows that the performance goal is met. (I have used net.trainParam.goal values from 0.1 to 0.9.)
My input layer has 81 features and my output layer has 5 nodes. I want to remove the performance-goal parameter because I want to train for the full 500, 1000, or 1500 epochs.

Can anyone please help? Thanks in advance.

with regards
Manish
From: Steven Lord on

"Manish Chowdhury" <manishchowdhury_2005(a)yahoo.com> wrote in message
news:hq9sfh$243$1(a)fred.mathworks.com...
> Dear Friends
> I have implemented a Multilayer Perceptron (MLP) with one hidden layer of
> 20 nodes (chosen by a rule of thumb), but whenever I try to train for 500,
> 1000, or 1500 epochs, training stops after 5 or 6 iterations and the
> command window shows that the performance goal is met. (I have used
> net.trainParam.goal values from 0.1 to 0.9.)
> My input layer has 81 features and my output layer has 5 nodes. I want to
> remove the performance-goal parameter because I want to train for the full
> 500, 1000, or 1500 epochs.

Then just make the performance goal something essentially unachievable, like
exactly 0. [I don't remember if the toolbox will allow negative performance
goals; if it does, use a negative goal instead.]
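A minimal sketch of that change (assuming the usual trainParam fields of the Neural Network Toolbox; min_grad is the other criterion that commonly ends training early):

```matlab
% Relax the stopping criteria so training runs for the full epoch budget.
net.trainParam.goal     = 0;    % performance goal of 0 is effectively unreachable
net.trainParam.min_grad = 0;    % don't stop early on a small gradient
net.trainParam.epochs   = 1000; % training now runs until the epoch limit
net = train(net, trainP, trainT);
```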

--
Steve Lord
slord(a)mathworks.com
comp.soft-sys.matlab (CSSM) FAQ: http://matlabwiki.mathworks.com/MATLAB_FAQ


From: Manish Chowdhury on
"Steven Lord" <slord(a)mathworks.com> wrote in message <hq9u6k$5h4$1(a)fred.mathworks.com>...
> Then just make the performance goal something essentially unachievable, like
> exactly 0. [I don't remember if the toolbox will allow negative performance
> goals; if it does, use a negative goal instead.]
Thanks a lot, sir.
I have done that, but now a message shows "goal is not met".
I want to hide this line and then train; how should I do that?
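A sketch of how the training progress display can be silenced, assuming a toolbox version that exposes these trainParam fields (older releases only support the net.trainParam.show setting):

```matlab
% Suppress training progress output; training itself is unaffected.
net.trainParam.showWindow = false;       % no nntraintool GUI window
net.trainParam.showCommandLine = false;  % no command-line progress messages
net.trainParam.show = NaN;               % legacy equivalent: never print progress
net = train(net, trainP, trainT);
```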

From: Greg Heath on
On Apr 16, 10:33 am, "Manish Chowdhury"
<manishchowdhury_2...(a)yahoo.com> wrote:
> Dear Friends
> I have implemented a Multilayer Perceptron (MLP) with one hidden layer of 20 nodes
> (chosen by a rule of thumb)

Which rule of thumb?
How much data do you have?

N = ?


> but whenever I try to train for 500, 1000, or 1500 epochs, training stops after 5 or 6 iterations and the command window shows that the performance goal is met. (I have used net.trainParam.goal values from 0.1 to 0.9.)
> My input layer has 81 features and my output layer has 5 nodes. I want to remove the performance-goal parameter because I want to train for the full 500, 1000, or 1500 epochs.
>
> Can anyone please help? Thanks in advance.

The simplest answer to your question is: Don't specify a goal.
Then it will use the default of 0. However, if it still terminates
because of a small gradient, set the gradient limit
(net.trainParam.min_grad) to 0.

However, PLEASE, PLEASE explain why you want to train past
your performance goal!

Most of the time I use the goal

net.trainParam.goal = MSE00/100;

where MSE00 is the mse for the naive constant model

y00 = repmat(mean(t,2),1,N);
e00 = t-y00;
MSE00 = mse(e00)

Then, at convergence the R-squared statistic is 0.99

y = sim(net,p);
e = t-y;
MSE = mse(e)
R2 = 1-MSE/MSE00

However, for important problems I modify the mse goal
so that the degree-of-freedom adjusted R-squared
statistic is 0.99.

R2a = 1 - ((N-1)/(N-Nw))*MSE/MSE00

where Nw is the number of weights and thresholds
that are estimated for the I-H-O (= 81-20-5) MLP

Nw = (I+1)*H+(H+1)*O = O+(I+O+1)*H
= 5 + 87*20 = 1745

Of course the number of training equations

Neq = N*O = 5*N

should be considerably larger than Nw, the number
of unknowns. So I hope you have

N >> 349
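The counts above can be checked numerically; a quick sketch using the I-H-O sizes as defined:

```matlab
I = 81; H = 20; O = 5;        % network sizes: I-H-O = 81-20-5
Nw  = (I+1)*H + (H+1)*O       % weights and thresholds: 1745
% Neq = N*O training equations must exceed Nw unknowns:
Nmin = ceil(Nw/O)             % so N must be well above 349
```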

Hope this helps.

Greg

P.S. Search in CSSM using

greg heath pretraining advice
greg heath Neq Nw
From: Manish Chowdhury on
Greg Heath <heath(a)alumni.brown.edu> wrote in message <646fba50-a906-47fa-8444-e45df6e9857b(a)y21g2000vbf.googlegroups.com>...
> Which rule of thumb?
> How much data do you have?
>
> N = ?
> [...]
> The simplest answer to your question is: Don't specify a goal.
> Then it will use the default of 0. However, if it still terminates
> because of a small gradient, set the gradient limit to 0
> [...]


Respected Sir,
Thanks a lot, first of all.
Secondly, the rule of thumb I used calculates the number of neurons in a single hidden layer as sqrt(inputs*outputs), i.e. sqrt(81*5) ≈ 20.

Thirdly, my code is shown below:
tic
traindata = xlsread('C:\MLP\MLPDATA\500database.xls');
p = traindata';                          % inputs: one column per sample
[trainP,valP,testP,trainInd,valInd,testInd] = dividerand(p,0.3,0,0.7);
testdata = xlsread('C:\MLP\MLPDATA\500Class.xls');
t = testdata';                           % targets: one column per sample
[trainT,valT,testT] = divideind(t,trainInd,valInd,testInd);
net = newff(p,t,[20 5],{'logsig' 'purelin'},'trainlm','learngdm');
net.divideParam.valRatio = 0;
net.trainParam.show = 1;
net.trainParam.lr = 0.5;
net.trainParam.lr_inc = 1.05;
net.trainParam.mc = 0.95;
net.trainParam.epochs = 500;
net.trainParam.goal = 1e-11;             % i.e. 0.00000000001
%net.trainParam.goal = 0;
net = train(net,trainP,trainT);
toc

My problem is that this program meets the goal within a few iterations, and I do not understand how that is possible. I want to run the program up to my maximum number of epochs; how should I do that? Please help me.
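One thing worth noting: in the newff(p,t,...) calling convention, the layer-size vector specifies the hidden layers only (the output layer size comes from t), so [20 5] builds two hidden layers rather than the intended 81-20-5 network. A sketch of a single-hidden-layer setup that trains for the full epoch budget, assuming that interface, might look like this:

```matlab
% One hidden layer of 20 nodes; the output size (5) is taken from t.
net = newff(p,t,20,{'logsig' 'purelin'},'trainlm');
net.divideParam.valRatio = 0;
net.trainParam.epochs   = 500;
net.trainParam.goal     = 0;    % unreachable goal: never stop on performance
net.trainParam.min_grad = 0;    % never stop on a small gradient
net = train(net,trainP,trainT);
```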