From: Eric on
I'm new to NNs, so this may be a series of dumb questions; however, here goes....

I constructed a standard feed-forward backpropagation network using newff with one hidden layer. From what I read, it seems that there is no good rule for determining the optimal number of hidden neurons. I was planning on running several training runs, varying the number of hidden neurons and looking for a convergence of performance. Is this the right approach?

For training I am using the standard "trainlm". I'm using 6 input vectors and have enough data to potentially use 42 million elements for each vector. I tried training using all the data, but quickly ran out of memory. How much data is typically necessary for training a network? That may be a problem specific question.

It seems like there are some options to reduce the memory requirement for training. There is an option to change net.trainParam.mem_reduc. The user's guide is not very specific on this. If I change it from 1 to 2, by what factor does this affect memory and performance?

Also, there are some other methods to train a network. The documentation says that they are more memory efficient but slower. Do they also affect the performance of the network? Will I end up with the same network, assuming all the same initial conditions and just using different training algorithms?

To sum up the questions:
1) What is the best method for determining the number of hidden neurons?
2) How do you estimate the number of input elements necessary for optimally training a network?
3) Quantitatively, what does changing the parameter net.trainParam.mem_reduc for trainlm do?
4) Do different training algorithms affect the performance of the network?

Many thanks in advance.

Eric
From: Greg Heath on
On Jul 22, 3:25 pm, "Eric " <ebau...(a)gmail.com> wrote:
> I'm new to NN so this may be a series of dumb questions; however, here it goes....
>
> I constructed a standard feed-forward backpropagation network using newff with one hidden layer. From what I read, it seems that there is no good rule for determining the optimal number of hidden neurons. I was planning on running several training runs, varying the number of hidden neurons and looking for a convergence of performance. Is this the right approach?

Yes. However, be guided by common sense. Search

greg heath advice for newbies
greg heath Nw Neq

for a little practical advice.
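
In case it helps, here is a minimal sketch of that sweep in Python (just to illustrate the selection loop; `val_mse` is a hypothetical placeholder for "train the net with H hidden neurons and return its validation error" — substitute real training runs):

```python
# Sketch of the hidden-neuron sweep: try several sizes, average over
# a few random initializations, and keep the size with the lowest
# validation error.
def val_mse(H):
    # Hypothetical stand-in: a toy error curve where very small nets
    # underfit and very large nets overfit. Replace with real training.
    return (H - 12) ** 2 / 100.0 + 0.05

def pick_hidden_size(candidates, n_repeats=5):
    """Return the candidate H with the lowest averaged validation error."""
    best_H, best_err = None, float("inf")
    for H in candidates:
        # Average over several runs, since backprop results vary
        # with the random initial weights.
        err = sum(val_mse(H) for _ in range(n_repeats)) / n_repeats
        if err < best_err:
            best_H, best_err = H, err
    return best_H

print(pick_hidden_size(range(2, 31, 2)))  # -> 12
```

The averaging matters: a single run per candidate H can easily pick a lucky initialization rather than a genuinely better architecture.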

>
> For training I am using the standard "trainlm". I'm using 6 input vectors and have enough data to potentially use 42 million elements for each vector.

Unclear.


> I tried training using all the data, but quickly ran out of memory. How much data is typically necessary for training a network? That may be a problem specific question.

For a MLP with I-H-O topology, the sizes of the input
and target matrices are

[I Ntrn] = size(p)
[O Ntrn] = size(t)

The corresponding numbers of unknown weights
and training equations are

Nw = (I+1)*H+(H+1)*O
Neq = Ntrn*O

Typically, choose Ntrn so that the system of
equations is well overdetermined, i.e.,

Neq >> Nw.
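
For concreteness, here is that bookkeeping worked through in Python (I = 6 comes from the original post; H = 10 and O = 1 are illustrative guesses, not values from the thread):

```python
# Worked example of the Nw/Neq bookkeeping above.
# Assumptions: I = 6 inputs (from the post); H = 10 hidden neurons and
# O = 1 output are illustrative guesses.
I, H, O = 6, 10, 1

Nw = (I + 1) * H + (H + 1) * O    # unknown weights, including biases
Ntrn = 1000                       # example number of training cases
Neq = Ntrn * O                    # training equations

print(Nw)             # -> 81
print(Neq)            # -> 1000
print(Neq > 10 * Nw)  # -> True: comfortably overdetermined
```

So with this architecture, even a thousand training cases already gives Neq >> Nw; tens of millions of cases are far more than the network itself requires, which is why subsampling is an option when memory runs out.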

> It seems like there are some options to reduce the memory requirement for training. There is an option to change net.trainParam.mem_reduc. The user's guide is not very specific on this. If I change it from 1 to 2, by what factor does this affect memory and performance?
>
> Also, there are some other methods to train a network. The documentation says that they are more memory efficient but slower. Do they also affect the performance of the network? Will I end up with the same network, assuming all the same initial conditions and just using different training algorithms?

No.

> To sum up the questions:
> 1) What is the best method for determining the number of hidden neurons?
> 2) How do you estimate the number of input elements necessary for optimally training a network?

The input (I) and output (O) dimensions are determined
by the problem.

> 3) Quantitatively, what does changing the parameter net.trainParam.mem_reduc for trainlm do?

Dunno. If I have memory problems I use conjugate gradient.

> 4) Do different training algorithms affect the performance of the network?

Of course.

Read the NN documentation for more general advice.

Hope this helps.

Greg