From: Greg Heath on

Greg Heath wrote:
> Gian Piero Bandieramonte wrote:
> > > No, you can use the fact that
> > > PRINCOMP(X./repmat(std(X),n,1)) = PRINCOMP(Xn)
> >
> > I do
> > Xn=prestd(X);
> > [En,Xtn,Ln] = princomp(Xn);
> > [Egg,Xtgg,Lgg] = princomp(X./repmat(std(X),n,1));
> >
> > and En is not equal to Egg...
>
> Strange. I'll look into it.

PRESTD follows the NNT convention of variables in rows and cases in
columns, whereas your X has variables in columns (which is what
PRINCOMP expects). So transpose before and after:

Xn = prestd(X')';

or, to keep the statistics for a later POSTSTD:

[Xnt, meanXt, stdXt] = prestd(X');
Xn = Xnt';
meanX = meanXt';
stdX = stdXt';
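
As a quick check (assuming X is n-by-p with one case per row):

n = size(X,1);
Xn  = prestd(X')';                      % variables standardized columnwise
En  = princomp(Xn);
Egg = princomp(X./repmat(std(X),n,1));  % PRINCOMP centers internally,
                                        % so scaling alone is enough
% En and Egg should now agree to within round-off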

Hope this helps.

Greg

> > I see that princomp automatically centers the data. But is it better
> > to run princomp with the data additionally scaled?
>
> I always standardize my data. See my post on pretraining advice.
>
> > > Hint: Compare the expressions for err10 and err12. Then
> > > compare E2 and E4 again.
> >
> > err10 and err12 are vectors of zeros... It doesn't tell me very
> > much about the difference between E2 and E4 with respect to E.
>
> That's because you looked at the values.
>
> I said look at the expressions.
>
> Hope this helps.
>
> Greg

From: Gian Piero Bandieramonte on
Ok...
I have a little problem with the target vector. It has 252 entries,
all of which are zeros except for a single one (I can't add more 1's
to my target vector). If I train with backpropagation, the net fits
all the zeros correctly but fits the lone 1 as a zero, which is
incorrect. I suspect this happens because there are so many zeros,
and because the inputs whose targets are zero differ only slightly
(by about 0.01) from the input whose target is 1. I could solve this
with an RBF (using a huge value of the SPREAD parameter) because it
does exact interpolation. Any suggestions for solving this with
backprop? I want to solve it with backprop to see whether its
generalization properties are better than those of the RBF.

When I preprocess my data with prepca, I first run prestd,
transforming both my inputs and my target vector, and train the net
with the transformed Xn and tn (the transformed target vector). When
I simulate the net, I must then apply poststd to transform the
simulated output back to its original scale.
Is there any problem if I instead train the net on the raw targets,
skipping both prestd and poststd? When I do this, the outputs
returned by the net at simulation time differ between the two cases.
From: Greg Heath on

Gian Piero Bandieramonte wrote:
> Ok...
> I have a little problem with the target vector. It has 252 entries,
> all of which are zeros except for a single one (I can't add more 1's
> to my target vector).

That is a major problem.

1. Use NEWRB starting with the single "1" vector.
2. Use NEWFF by duplicating the "1" vector 250 times and using
a training set with 502 vectors.
3. Instead of duplication, you may want to use jittering (see the
sketch below).
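
Something like this for 2 and 3 (untested sketch: X is your input
matrix with one case per column, t your 0/1 target row, and the 1%
jitter scale is just a starting point):

x1   = X(:, t == 1);              % the single "1" input vector
ndup = 250;                       % copies to add -> 502 training cases
sig  = 0.01*std(X,0,2);           % jitter: 1% of each input's std
Xdup = repmat(x1,1,ndup) + repmat(sig,1,ndup).*randn(size(X,1),ndup);
Xtrain = [X Xdup];                % use sig = 0 for pure duplication
ttrain = [t ones(1,ndup)];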

Go to Google Groups and search with

greg-heath unbalanced
greg-heath jitter
greg-heath jittering

Hope this helps.

Greg

> If I train with backpropagation, the net fits all the zeros
> correctly but fits the lone 1 as a zero, which is incorrect. I
> suspect this happens because there are so many zeros, and because
> the inputs whose targets are zero differ only slightly (by about
> 0.01) from the input whose target is 1. I could solve this with an
> RBF (using a huge value of the SPREAD parameter) because it does
> exact interpolation. Any suggestions for solving this with backprop?
> I want to solve it with backprop to see whether its generalization
> properties are better than those of the RBF.
>
> When I preprocess my data with prepca, I first run prestd,
> transforming both my inputs and my target vector, and train the net
> with the transformed Xn and tn (the transformed target vector). When
> I simulate the net, I must then apply poststd to transform the
> simulated output back to its original scale.
> Is there any problem if I instead train the net on the raw targets,
> skipping both prestd and poststd? When I do this, the outputs
> returned by the net at simulation time differ between the two cases.

From: Gian Piero Bandieramonte on
When I preprocess my data with prepca, I first run prestd,
transforming both my inputs and my target vector, and train the net
with the transformed Xn and tn (the transformed target vector). When
I simulate the net, I must then apply poststd to transform the
simulated output back to its original scale, roughly as in the
sketch below.
Is there any problem if I instead train the net on the raw targets,
skipping both prestd and poststd? When I do this, the outputs
returned by the net at simulation time differ between the two cases.
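
In code, the round trip I mean is roughly this (p, t and net are
placeholders for my inputs, targets and a net already created with
newff):

[pn,meanp,stdp,tn,meant,stdt] = prestd(p,t); % standardize inputs/targets
[ptrans,transMat] = prepca(pn,0.02); % drop PCs adding < 2% of variance
net = train(net,ptrans,tn);          % train on the transformed data
an  = sim(net,ptrans);               % outputs are on the tn scale
a   = poststd(an,meant,stdt);        % back to original target units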

It must also be taken into account that when my targets are processed
by prestd, the range of the targets falls outside the range of the
activation function I use (radbas, [0,1]). Must the range of the
targets be the same as the range of the activation function? If so,
must I normalize the processed targets to the [0,1] range?