From: Mohammed Ibrahim on 2 Apr 2010 13:53

"Burak " <newsreader(a)mathworks.com> wrote in message <f9sagf$dfn$1(a)fred.mathworks.com>...
> Hi there,
>
> I am a graduate student working on particle swarm optimization. I want to learn more about ANN training with PSO. Although there is a good PSO toolbox release, its source code for neural network training looks complicated to me. There are some articles on this issue, but it is not clear how they implement PSO for ANN training.
> Thanks for your answers and help.
>
> Burak
From: George on 2 Apr 2010 22:15

"Mohammed Ibrahim" <hammudy20(a)yahoo.com> wrote in message <hp5au2$3iu$1(a)fred.mathworks.com>...
> "Burak " <newsreader(a)mathworks.com> wrote in message <f9sagf$dfn$1(a)fred.mathworks.com>...
> > [...]

Burak, to train an ANN using PSO, firstly, identify a well-performing ANN for your application. Find characteristics that seem to work well for problems similar to yours, e.g. novel concepts, number of hidden layers, number of inputs, and types of inputs. Keep a detailed bibliography and save all relevant papers. Though you will train with PSO, keep notes on other good training algorithms to compare against, in order to fully demonstrate the validity of PSO as a training mechanism.

Secondly, find a PSO variant suited to your application. For example, RegPSO [1: Chapters 4-6], [2] does a good job of escaping premature convergence in order to find increasingly better solutions when there is time to regroup the swarm, which would seem to be the case for training ANNs in general.

Thirdly, locate a good PSO toolbox - preferably one that already implements the strain of PSO you would like to use. Ideally, the toolbox would contain standard gbest and lbest PSOs as well as the more evolved PSO type found in step two. If the variation of PSO you would like to use is not available in a suitable toolbox, locate a powerful toolbox and contribute the code. The PSO toolbox does not need to include code for training ANNs, since you can locate solid ANN code and simply interface the best of both worlds.

Fourthly, locate a good ANN code to interface with the toolbox - preferably written in the same language. As long as you can implement the ANN with code alone (e.g. as with MATLAB's neural net toolbox) rather than depending on a GUI, the two can be interfaced.

Fifthly, interface the PSO toolbox and ANN code by creating a new objective function for the toolbox. If you use the "PSO Research Toolbox," just create a new file called "benchmark_BurakANN.m" along the lines of the pseudo code below to interface the two codes:

    function [f] = benchmark_BurakANN(x, np)
    % x:  np-by-dim matrix whose row j holds particle j's position
    % np: number of particles in the swarm
    global dim
    f = zeros(np, 1);                 % one function value per particle
    for Internal_j = 1:np
        % Evaluate particle Internal_j's position with your ANN code;
        % your_ANN_error is a placeholder for that routine.
        f(Internal_j, 1) = your_ANN_error(x(Internal_j, :));
    end

What makes sense to me is that each function value in column vector "f" would reflect the error (e.g. the difference or a biased difference) between the ANN's prediction and the desired output, since it is the error that you want to minimize. To be more in line with real-world applications, you could translate each error into a financial cost and minimize that value.
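(As a concrete illustration of that interface, here is a minimal sketch using mean squared error, assuming MATLAB's Neural Network Toolbox functions setwb, getwb, and sim; the variables "net", "inputs", and "targets" are illustrative names for a network and training data prepared elsewhere, not part of the PSO Research Toolbox itself.)

    function [f] = benchmark_ANN_mse(x, np)
    % Sketch: treat each particle's position as a full set of network
    % weights and biases, and score it by mean squared error (MSE).
    global dim net inputs targets     % prepared before the search starts
    f = zeros(np, 1);
    for Internal_j = 1:np
        % setwb expects a column vector holding every weight and bias,
        % so dim must equal length(getwb(net)).
        candidate = setwb(net, x(Internal_j, :)');
        outputs = sim(candidate, inputs);            % forward pass
        f(Internal_j, 1) = mean((outputs(:) - targets(:)).^2);
    end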
FYI, the problem dimensionality will equal the number of ANN parameters that you wish to optimize, so that each dimension represents one decision variable of the network. For example, will you keep the number of hidden layers constant or include it as one dimension to be optimized? Read up on the most recent ANN literature: I could be wrong, but it is my impression that while more complex ANNs (e.g. those with more hidden layers) might be capable of solving more complicated problems, they also tend to memorize the data more quickly, which is a big problem since the goal is not memorization but prediction. I personally would leave the number of hidden layers constant at whatever seems to have worked best in the literature and possibly experiment with changing it at a later time.

Happy Researching!

Sincerely,
George I. Evers

[1] http://www.georgeevers.org/thesis.pdf
[2] http://www.georgeevers.org/Evers_BenGhalia_IEEEConfSMC09.pdf
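(Following up on the dimensionality point above: with the number of hidden layers held constant, the search-space dimensionality is simply the network's total weight and bias count. A short sketch, assuming the Neural Network Toolbox function newff and illustrative variable names:)

    % Fixed architecture: one hidden layer of 10 neurons for the given
    % input matrix "inputs" and target matrix "targets".
    net = newff(inputs, targets, 10);
    dim = length(getwb(net));    % one decision variable per weight/bias

    % Particles could be seeded near the network's own initialization:
    np = 20;                                      % swarm size
    x = repmat(getwb(net)', np, 1) + 0.1*randn(np, dim);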
From: George on 5 Apr 2010 17:17
I apologize; the last reply was intended for Mohammed rather than Burak.