From: Mic N on
Hello, I would like some advice for a project of mine. Basically, I need to optimize a process which I am going to model using a neural network. The process has more than one output. For the following scenarios, which MATLAB optimization method is most suitable?

1. Optimizing all the outputs. (use gamultiobj?)
2. Optimizing a single output while constraining the other outputs. (use fmincon?)

I have used some of the MATLAB optimization functions before, but I am unsure of how effective they are when it comes to neural networks. Thanks in advance for the advice!
From: Greg Heath on
On Aug 2, 7:32 pm, "Mic N" <oroba...(a)gmail.com> wrote:
> Hello, I would like some advice for a project of mine.
> Basically, I need to optimize a process which I am going
> to model using a neural network. The process has more
> than one output. For the following scenarios, which
> MATLAB optimization method is most suitable?
>
> 1. Optimizing all the outputs. (use gamultiobj?)
> 2. Optimizing a single output while constraining the
> other outputs. (use fmincon?)
>
> I have used some of the MATLAB optimization functions
> before, but I am unsure of how effective they are when
> it comes to neural networks.
> Thanks in advance for the advice!

You have been sparse with relevant details.
Nevertheless, consider the following:

1. Standardize the I inputs and O outputs to
have zero-mean and unit-variance.
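For example, with mapstd from the NN toolbox
(ps and ts keep the settings so that new data
and net outputs can be mapped the same way):

[p,ps] = mapstd(p); % rows -> zero mean, unit variance
[t,ts] = mapstd(t);
% later: pnew = mapstd('apply',pnew,ps)
%        y    = mapstd('reverse',yn,ts)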
2. Use newff to design an I-H-O MLP with
H tansig hidden-layer activation functions
(disable the automatic normalization).
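For example, assuming the newff(p,t,...)
calling form and the standardized p, t from 1.:

net = newff(p,t,H,{'tansig','purelin'});
% data are already standardized, so remove the
% automatic mapminmax pre/post-processing
net.inputs{1}.processFcns = {};
net.outputs{2}.processFcns = {};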
3. You will have Nw = (I+1)*H+(H+1)*O
unknown weights to optimize
4. There will be Neq = Ntrn*O training
equations from Ntrn training vectors
5. Unless you are using regularization
via trainbr, it is desirable to choose
H so that Neq >> Nw, so set your search
bounds on H accordingly.
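For example, requiring Neq >= 10*Nw (the factor
10 is only an illustration) gives an upper
search bound of roughly

Hub = floor((Ntrn*O/10 - O)/(I+O+1));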
6. Typically, the function to minimize
is NMSEa, the normalized mean-square error
(with degrees-of-freedom adjustments):

[I Ntrn] = size(p)
[O Ntrn] = size(t)

% For a naive constant-output model
y00 = repmat(mean(t,2),1,Ntrn);
Nw00 = O                          % O estimated means
e00 = t-y00;
MSE00 = sse(e00)/(O*Ntrn)         % = mse(e00)
MSEa00 = sse(e00)/(O*(Ntrn-Nw00)) % DOF adjusted

% For an I-H-O NN model
Nw = (I+1)*H+(H+1)*O              % weights from 3. above
y = sim(net,p);
e = t-y;
MSE = sse(e)/(O*Ntrn)             % = mse(e)
MSEa = sse(e)/(O*(Ntrn-Nw))       % DOF adjusted
NMSEa = MSEa/MSEa00

7. For each candidate value of H and an
NMSEa goal of 0.01, set

MSEgoal = 0.01*(1-Nw/Ntrn)*MSEa00
net.trainParam.goal = MSEgoal;
net.trainParam.show = inf;

then run several trials and choose
the net that yields the minimum NMSEa.
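For example, a rough sketch for one candidate H
(Ntrials and bestnet are just placeholder names;
Nw, MSEa00, p and t are as defined above):

Nw = (I+1)*H+(H+1)*O;
Ntrials = 10;
bestNMSEa = inf;
for trial = 1:Ntrials
    net = newff(p,t,H,{'tansig','purelin'});
    net.inputs{1}.processFcns = {};  % data already standardized
    net.outputs{2}.processFcns = {};
    net.trainParam.goal = 0.01*(1-Nw/Ntrn)*MSEa00;
    net.trainParam.show = inf;
    net = train(net,p,t);
    e = t - sim(net,p);
    NMSEa = (sse(e)/(O*(Ntrn-Nw)))/MSEa00;
    if NMSEa < bestNMSEa
        bestNMSEa = NMSEa;
        bestnet = net;
    end
end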

8. I don't use a fancy optimization
program; just a simple representative
search within reasonable bounds for H.
For example,

a. Hnew = 2*Hold
b. Hnew = Hold + dH
c. Hnew = (Hhigh+Hlow)/2

Hope this helps.

Greg
From: Mic N on
Hello Greg. I believe your post is about training and optimizing the neural network parameters themselves? Sorry if I didn't make it clear: I meant that after getting a 'good' NN model of my process, I would like to minimize or maximize one or all of the process outputs, i.e. find the values of the process inputs that give the desired optimal output values from the NN. In other words, my original post was asking which optimization methods are best suited for my process, given that it will be modelled by an ANN. Still, your post was very helpful, since I have yet to outline a detailed plan for building my neural network. Thanks a lot!
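To make the question concrete: for a process with, say, two outputs, something roughly like the sketch below is what I have in mind (the bounds lb/ub, start point x0 and limit y2max are only placeholders, and the same standardization used in training would have to be applied to the candidate inputs and the net outputs):

lb = zeros(I,1); ub = ones(I,1);  % placeholder input bounds
x0 = (lb+ub)/2;                   % placeholder start point

% Case 2: minimize output 1 while keeping output 2 <= y2max
y2max = 0.5;                      % placeholder limit
obj = @(x) [1 0]*sim(net,x(:));   % selects output 1
% nonlinear constraint: output 2 - y2max <= 0, no equalities
nonlcon = @(x) deal([0 1]*sim(net,x(:))-y2max,[]);
xbest = fmincon(obj,x0,[],[],[],[],lb,ub,nonlcon);

% Case 1: trade off both outputs at once (gamultiobj
% minimizes, so negate any output to be maximized)
fitfun = @(x) sim(net,x(:))';     % 1-by-O row of objectives
[xpareto,fpareto] = gamultiobj(fitfun,I,[],[],[],[],lb,ub);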