From: Eugene on
I am trying to train a neural network for speech recognition. I am using 10 ms frames and computing feature vectors of size 64. I am running into out-of-memory issues in MATLAB due to the large amount of data I have (on the order of 400,000 feature vectors). I've found that, due to MATLAB's training algorithm, memory use depends on the amount of training data, and I can only train with roughly 10,000 data points before I run into memory issues. My question: is there a way to set the MATLAB training algorithm to iterate sequentially over the training data, rather than using it all at once, so that it doesn't run out of memory?
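For scale, here is my rough arithmetic on the data size (assuming double precision; as I understand it, the default trainer, trainlm, also allocates working arrays that grow with the number of training vectors, which is where the memory goes):

% Rough size of the raw training data (double precision assumed)
Ntrn = 400000;   % feature vectors
I    = 64;       % features per vector
fprintf('data alone: %.0f MB\n', Ntrn*I*8/1e6)   % ~205 MB before training starts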
From: Greg Heath on
On Feb 6, 4:03 pm, "Eugene " <eugen...(a)gmail.com> wrote:
> I am trying to train a neural network for speech recognition.
> I am using 10 ms frames and computing feature vectors of size 64.
> I am running into out-of-memory issues in MATLAB due to the
> large amount of data I have (on the order of 400,000 feature
> vectors). I've found that, due to MATLAB's training algorithm,
> memory use depends on the amount of training data, and I can
> only train with roughly 10,000 data points before I run into
> memory issues. My question: is there a way to set the MATLAB
> training algorithm to iterate sequentially over the training
> data, rather than using it all at once, so that it doesn't
> run out of memory?

First, question why you think you need to use 64 input variables.
Then, question why you think you need to train with 400K
training vectors. If you have an MLP with topology I-H-O (e.g.,
I = 64), you need to estimate Nw = (I+1)*H + (H+1)*O weights using
Neq = Ntrn*O equations. Good estimates can result when the
ratio r = Neq/Nw >> 1. Although typical ratios are in the
interval 2-20, it is best to find the smallest reliable
value H = Hopt by trial and error. Depending on the size
of the data and the net, you can use a combination of forward
and backward search techniques to bracket Hopt.
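
For concreteness, a back-of-the-envelope check (I = 64 from your
post; H and O below are just made-up example values):

I = 64;  H = 32;  O = 10;   % H and O are example values, not recommendations
Ntrn = 10000;               % roughly what fits in your memory now
Nw  = (I+1)*H + (H+1)*O     % 2410 weights to estimate
Neq = Ntrn*O                % 100000 equations
r   = Neq/Nw                % ~41, comfortably >> 1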

I generally start by looking at H values that yield r ~ 10
and then halve or double H depending on the result.
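
Since Nw = H*(I+O+1) + O, setting Nw = Neq/r and solving for H
gives a starting point. A sketch with the same assumed O:

I = 64;  O = 10;  Ntrn = 10000;          % O is an assumed value
r0  = 10;                                % target ratio
Neq = Ntrn*O;
H0  = round((Neq/r0 - O)/(I + O + 1))    % = 133, a starting point
% then compare results at roughly H0/2 and 2*H0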

Ntrn only has to be large enough to maintain a sufficiently
large value of r. It is unlikely that anything near 400K
is needed.
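
Turning that around, the Ntrn needed for a given net and target
ratio (same assumed example values):

I = 64;  H = 32;  O = 10;  r0 = 10;      % assumed example values
Nw   = (I+1)*H + (H+1)*O;                % 2410 weights
Ntrn = ceil(r0*Nw/O)                     % = 2410 vectors, nowhere near 400K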

For regression, the easiest way to reduce the input dimension
is to use the dominant principal components (PCs). However,
other techniques have recently been discussed in the archives.
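
For example, something along these lines (a sketch: pca is in
the Statistics Toolbox, older releases call it princomp, and the
95% variance cutoff is just a placeholder):

X = randn(10000, 64);                    % stand-in for your Ntrn-by-64 features
[coeff, score, latent] = pca(X);         % princomp in older releases
k = find(cumsum(latent)/sum(latent) >= 0.95, 1)   % PCs carrying 95% of variance
Xred = score(:, 1:k);                    % reduced inputs, Ntrn-by-k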

Hope this helps.

Greg
