From: Fernando Cunha
You can use:

net.IW{1,1} to get the weights from the input to layer 1 (net is the name of your network)

then

net.LW{i,j} to get the weights from layer j to layer i.

You can get the bias vector of layer i with net.b{i} (note the curly braces: net.b is a cell array).
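
For example, a quick sketch with a small two-layer net (the toy data and
layer sizes are illustrative assumptions, using the newff syntax of this
toolbox era):

p = rand(2, 50);                 % 2 inputs, 50 training vectors
t = sum(p);                      % toy targets
net = newff(minmax(p), [5 1], {'tansig' 'purelin'});
net = train(net, p, t);

W1 = net.IW{1,1};                % weights from the input to layer 1
W2 = net.LW{2,1};                % weights from layer 1 to layer 2
b1 = net.b{1};                   % bias vector of layer 1
b2 = net.b{2};                   % bias vector of layer 2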

I hope this helps!

"Greg Heath" <heath(a)alumni.brown.edu> wrote in message <1156517720.595265.4930(a)h48g2000cwc.googlegroups.com>...
> xinglifan(a)gmail.com wrote:
> > First,i divert
>
> Replace the term "divert" with "partition" (my preference) or
> "split".
>
> > dataset into training dataset and test dataset. Then
> > i use three-fold cross validation way to divert training dataset
> > into training dataset and validation dataset.
>
> What you are describing is not called 3-fold XVAL. The
> proper terms are "Early Stopping" and "Stopped Training". See
> the comp.ai.neural-nets FAQ. It also explains both f-fold and
> leave-v-out cross-validation.
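>
> For concreteness, a rough sketch of stopped training using the old
> train(net,P,T,Pi,Ai,VV) calling form (the toy data and the 2:1
> split are illustrative assumptions, not from this thread):
>
> p = rand(2, 60);  t = sum(p);               % toy data
> net = newff(minmax(p), [5 1], {'tansig' 'purelin'});
> N = size(p, 2);  idx = randperm(N);  ntrn = round(2*N/3);
> ptrn = p(:, idx(1:ntrn));      ttrn = t(:, idx(1:ntrn));
> VV.P = p(:, idx(ntrn+1:N));    VV.T = t(:, idx(ntrn+1:N));
> [net, tr] = train(net, ptrn, ttrn, [], [], VV);
> % training stops when the error on VV.P, VV.T starts to rise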
>
> Search in Google Groups using
>
> greg-heath XVAL
> greg-heath cross-validation
>
> for more details on cross-validation.
>
> > use early stopping by validation dataset.Then i calculate average
> > MSE
>
> Delete the adjective "average"; the "M" in MSE already implies
> averaging over the individual input vector squared errors.
>
> Use the adjective "average" when you are averaging over the MSE
> of different designs (e.g. in 10-fold XVAL).
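>
> For example, averaging the MSE over f = 10 designs might look
> like this rough sketch (toy data; fold bookkeeping simplified):
>
> p = rand(2, 100);  t = sum(p);  N = size(p, 2);
> f = 10;  idx = randperm(N);  edges = round(linspace(0, N, f+1));
> msef = zeros(1, f);
> for i = 1:f
>     tst = idx(edges(i)+1:edges(i+1));     % held-out fold
>     trn = setdiff(idx, tst);              % remaining folds
>     net = newff(minmax(p), [5 1], {'tansig' 'purelin'});
>     net = train(net, p(:,trn), t(:,trn));
>     e = t(:,tst) - sim(net, p(:,tst));
>     msef(i) = mean(e(:).^2);              % MSE of design i
> end
> avgMSE = mean(msef)                       % average over designs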
>
> > and choose the structure of neural network.
>
> Do you mean number of hidden nodes, weights, or both?
>
> > But i do not know how to
> > determine weights of neural network and get final neural network
> > model which i can use test dataset to evaluate the neural network
> > model.
> > Please help me!
>
> Make multiple runs over (say) 10 to 30 different weight initializations
> and choose the best design based on validation set error.
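>
> A rough sketch of such a loop (Ntrials, the toy data and the
> split are illustrative assumptions):
>
> p = rand(2, 90);  t = sum(p);  N = size(p, 2);
> idx = randperm(N);  ntrn = round(2*N/3);
> ptrn = p(:, idx(1:ntrn));    ttrn = t(:, idx(1:ntrn));
> VV.P = p(:, idx(ntrn+1:N));  VV.T = t(:, idx(ntrn+1:N));
> Ntrials = 10;  bestmse = Inf;
> for trial = 1:Ntrials
>     net = newff(minmax(ptrn), [5 1], {'tansig' 'purelin'});
>     net = init(net);                      % fresh random weights
>     net = train(net, ptrn, ttrn, [], [], VV);
>     e = VV.T - sim(net, VV.P);
>     msev = mean(e(:).^2);                 % validation set MSE
>     if msev < bestmse
>         bestmse = msev;  bestnet = net;   % keep the best design
>     end
> end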
>
> The test set is used for the final evaluation once the best design is
> chosen.
>
> If the test set results are unsatisfactory, the data set should be
> repartitioned for a new design in order to make sure that the
> new test set is independent of the new design.
>
> Hope this helps.
>
> Greg
>