From: Steven Lord on

"Aristeidis " <aris262(a)hotmail.com> wrote in message
news:i0sgec$57l$1(a)fred.mathworks.com...
>> Wait with the thanks till you have tested the idea.
>>
>> Don't be surprised if this cuts run-time with less than 50%
>> of what you do now. Constructing the input cell array X will
>> take at least as long (probably several times as long) as
>> the time James' code saves.
>>
>> Do away with the cell array X in the first place. Once you find
>> out how to do *that*, you can hope to see some *real* speeds
>> improvements.
>>
>> Rune
>
> Rune, I ve taken your comments on board. I ll have to research on that a
> little bit. Cell arrays are possibly not ideal as you say, but on the
> other hand having tried structures I wouldn't say these are ideal either.
> To my knowledge there isn't anything else in Matlab that can hold multiple
> matrices of values in such a structured way. I could be wrong?

I took a quick glance at what you posted and I'm not _sure_ but a 4D array
might be of some use to you.
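For instance, if every cell held a matrix of the same size, the whole collection could live in one numeric 4D array, and many operations would then vectorise. An untested sketch (the cell array C and its dimensions below are stand-ins, not your actual data):

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Sketch: collapse a cell array of equally sized matrices into a 4D array.
% C is a placeholder -- here a 2-by-3 cell of 4-by-4 matrices.
C = repmat({magic(4)}, 2, 3);
[p, q] = size(C);
[r, c] = size(C{1,1});

A = zeros(r, c, p, q);
for j = 1:q
    for i = 1:p
        A(:,:,i,j) = C{i,j};
    end
end

% With a numeric array, e.g. the max of every r-by-c slice comes in one call:
m = squeeze(max(max(A, [], 1), [], 2));   % p-by-q matrix of per-slice maxima
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%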

This thread started with your decision that you needed to use a MEX-file for
performance. But let's take a step back to the beginning -- can you explain
a bit about the original problem that you're trying to solve and the data
structure you've created for your current approach to solving that problem?
Perhaps with that background information someone could suggest an
alternative approach or data structure that would work better/more
efficiently than your cell array of cell arrays.

--
Steve Lord
slord(a)mathworks.com
comp.soft-sys.matlab (CSSM) FAQ: http://matlabwiki.mathworks.com/MATLAB_FAQ
To contact Technical Support use the Contact Us link on
http://www.mathworks.com


From: Aristeidis on
"Steven Lord" <slord(a)mathworks.com> wrote in message <i0vg78$ep5$1(a)fred.mathworks.com>...
> I took a quick glance at what you posted and I'm not _sure_ but a 4D array
> might be of some use to you.
>
> This thread started with your decision that you needed to use a MEX-file for
> performance. But let's take a step back to the beginning -- can you explain
> a bit about the original problem that you're trying to solve and the data
> structure you've created for your current approach to solving that problem?
> Perhaps with that background information someone could suggest an
> alternative approach or data structure that would work better/more
> efficiently than your cell array of cell arrays.

Steve,

The max function described here as an example, along with others, is part of an algorithm I have implemented in Matlab (100% M-code). Although the original code for this algorithm (Matlab/C++, with many MEX files) is GNU-licensed, I felt more comfortable understanding, building and modifying it in Matlab first, without any consideration for speed. Matlab is a good benchmarking tool, and my C++ skills are, as stated previously, basic/rusty, so I started this way. Now that I've reached the validation step (experiments across a wide range of parameters and settings), the processes run correctly in Matlab, but slowly. Speed has therefore become an issue, though not to the extreme of needing the whole thing rewritten in C++. A 30%-40% reduction in processing time through debugging and M-code optimisation would suffice.

More specifically, in the algorithm the input images are segmented into regions (much like visual fields), which are then progressively translated/transformed into vectors through layers of functions. One of the bottom layers calculates the max of these "visual fields". The system somewhat resembles the convolutional-network approach of LeCun (e.g. LeNet-5). I am afraid I cannot describe the approach much further, but I hope this gives you a better understanding.
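One way to compute such per-region maxima without any cell arrays is im2col from the Image Processing Toolbox (a sketch; the image, its size, and the block size below are placeholders, and the image dimensions are assumed divisible by N):

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Sketch: per-region maxima over distinct N-by-N blocks, no cell arrays.
% Requires the Image Processing Toolbox (im2col).
x = double(rand(64, 64));   % placeholder image
N = 8;                      % placeholder block size

cols   = im2col(x, [N N], 'distinct');  % each column is one N-by-N block
blkmax = max(cols, [], 1);              % row vector: one max per block
blkmax = reshape(blkmax, size(x,1)/N, size(x,2)/N);  % back to block grid
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%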
From: Rune Allnor on
On 6 Jul, 18:07, "Aristeidis " <aris...(a)hotmail.com> wrote:

> In the algorithm more specifically, the input images are segmented into regions (much like visual fields) which are then progressively through layers of functions translated/transformed into vectors.

And these regions are blocks of N x N pixels with N small?

If so, why do you have to store the *data* of each region
individually? It would be far more efficient to store metadata
that define each region and do the processing on the raw
image. This approach might, in fact, be efficient enough
that you don't need to consider C++ at all.

And if you do use C++, the efficient algorithm might achieve
speeds that justify the effort of the porting.
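A minimal sketch of the metadata idea: keep only the region coordinates, and index into the raw image when a value is actually needed (all names and sizes below are placeholders):

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Sketch: store only region *coordinates*, not region copies.
x = double(rand(64, 64));                 % placeholder raw image
N = 8;                                    % placeholder region size
[ri, ci] = ndgrid(1:N:size(x,1), 1:N:size(x,2));
regions  = [ri(:), ci(:)];                % top-left corner of each region

% When a value is needed, index into x directly:
k  = 1;                                   % some region of interest
r0 = regions(k,1);  c0 = regions(k,2);
v  = max(max(x(r0:r0+N-1, c0:c0+N-1)));   % max of that region, no copy kept
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%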

Rune



From: Aristeidis on
Rune Allnor <allnor(a)tele.ntnu.no> wrote in message <d1632b5c-066f-42c5-94c6-eba7a23fdc5a(a)w12g2000yqj.googlegroups.com>...
> On 6 Jul, 18:07, "Aristeidis " <aris...(a)hotmail.com> wrote:
>
> > In the algorithm more specifically, the input images are segmented into regions (much like visual fields) which are then progressively through layers of functions translated/transformed into vectors.
>
> And these regions are blocks of N x N pixels with N small?
>
> If so, why do you have to store the *data* of each region
> individually? It would be far more efficient to store metadata
> that define each region and do the processing on the raw
> image. This approach might, in fact, be efficient enough
> that you don't need to consider C++ at all.
>
> And if you do use C++, the efficient algorithm might achieve
> speeds that justify the effort of the porting.
>
> Rune
>
>

Yes, N is small and, although it could in theory be any value, it is kept small to comply with the general concept.

So, if I am getting this right, you mean it is better speed-wise to index into the original image and refer to the indices when a function needs the values? Is there such a mapping function already in Matlab, or do I need to script one?
From: Rune Allnor on
On 6 Jul, 18:43, "Aristeidis " <aris...(a)hotmail.com> wrote:
> Rune Allnor <all...(a)tele.ntnu.no> wrote in message <d1632b5c-066f-42c5-94c6-eba7a23fd...(a)w12g2000yqj.googlegroups.com>...
> > On 6 Jul, 18:07, "Aristeidis " <aris...(a)hotmail.com> wrote:
>
> > > In the algorithm more specifically, the input images are segmented into regions (much like visual fields) which are then progressively through layers of functions translated/transformed into vectors.
>
> > And these regions are blocks of N x N pixels with N small?
>
> > If so, why do you have to store the *data* of each region
> > individually? It would be far more efficient to store metadata
> > that define each region and do the processing on the raw
> > image. This approach might, in fact, be efficient enough
> > that you don't need to consider C++ at all.
>
> > And if you do use C++, the efficient algorithm might achieve
> > speeds that justify the effort of the porting.
>
> > Rune
>
> Yes, N is small and, although it could in theory be any value, it is kept small to comply with the general concept.
>
> So, if I am getting this right, you mean it is better speed-wise to index into the original image and refer to the indices when a function needs the values?

As long as we are talking C(++): Yes.

> Is there such a mapping function already in Matlab? Or do I need to script one?

In *Matlab*, the general idea goes like this:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
x = double(imread(filename));
[M,N] = size(x);

K  = 5;
ks = 3:2:K;                % only odd k, for simplicity
y  = NaN(M, N, numel(ks));

for n = 1:N
    for m = 1:M
        for ik = 1:numel(ks)
            k = ks(ik);
            % clamp the (2k+1)-by-(2k+1) window to the image borders
            rows = max(m-k,1):min(m+k,M);
            cols = max(n-k,1):min(n+k,N);
            y(m,n,ik) = max(max(x(rows,cols)));
        end
    end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

which is likely a lot faster than what you did, since there
is no need to repeatedly allocate and free memory for
per-region copies.

Do note that the innermost loop over the window sizes is
highly redundant: each successive search for a maximum
re-examines the pixels that were already searched at the
earlier, smaller window sizes.

So in a C(++) implementation I would keep track of the maximum
from the previous iteration, and only search the pixels
that have been added since the previous iteration.

These two factors together will speed things up by at least
a couple of orders of magnitude.
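In Matlab terms, the incremental search might look like this for an interior pixel (a sketch only; the image, pixel position, and K are placeholders, and boundary handling is omitted):

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Sketch: running window maximum at interior pixel (m,n).
% Each step k scans only the newly added ring of the (2k+1)^2 window.
x = double(rand(64, 64));   % placeholder image
m = 32;  n = 32;  K = 5;    % placeholder pixel and max half-width

running = x(m, n);
for k = 1:K
    ring = [x(m-k, n-k:n+k), x(m+k, n-k:n+k), ...          % top & bottom rows
            x(m-k+1:m+k-1, n-k).', x(m-k+1:m+k-1, n+k).']; % side columns
    running = max(running, max(ring));
    % 'running' now equals max(max(x(m-k:m+k, n-k:n+k)))
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%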

The principle applies surprisingly often in image processing.
In fact, it ought to be the default first strategy, scrapped
only when there are very good reasons *not* to use it.

Rune