From: Steven Lord on

"Thibault Daoulas" <thibault.daoulas(a)gmail.com> wrote in message
news:hu882k$lsj$1(a)fred.mathworks.com...
> Hi all,
> I have been reading and trying several ways suggested here to handle
> growing matrices over loops. My concern starts the same way:
> - Unknown size of a matrix, and at each step, a new row is concatenated

Do you know an upper bound on the size of your matrix (ideally one that's
not too much larger than the size your matrix will actually reach in the
end)? If so, preallocate it to be that size and trim it at the end.
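For example, something like this (the bound, the column count, and the row source below are just placeholders):

```matlab
nmax = 10000;               % assumed upper bound on the number of rows
ncols = 3;                  % number of columns (known in advance)
A = zeros(nmax, ncols);     % preallocate once
k = 0;                      % rows actually filled so far
for step = 1:nmax           % stand-in for your loop
    newrow = rand(1, ncols);    % stand-in for whatever produces a row
    k = k + 1;
    A(k, :) = newrow;           % write in place -- no reallocation
    if rand < 0.01              % stand-in stopping condition
        break
    end
end
A = A(1:k, :);              % trim the unused rows at the end
```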

--
Steve Lord
slord(a)mathworks.com
comp.soft-sys.matlab (CSSM) FAQ: http://matlabwiki.mathworks.com/MATLAB_FAQ
To contact Technical Support use the Contact Us link on
http://www.mathworks.com


From: dpb on
Steven Lord wrote:
> "Thibault Daoulas" <thibault.daoulas(a)gmail.com> wrote in message
> news:hu882k$lsj$1(a)fred.mathworks.com...
>> Hi all,
>> I have been reading and trying several ways suggested here to handle
>> growing matrices over loops. My concern starts the same way:
>> - Unknown size of a matrix, and at each step, a new row is concatenated
>
> Do you know an upper bound on the size of your matrix (ideally one that's
> not too much larger than the size your matrix will actually reach in the
> end)? If so, preallocate it to be that size and trim it at the end.

What he said... :)

I was just getting ready to respond to the other posting w/ the
(apparently ubiquitous) "double-the-previously-allocated-size" algorithm,
which just never made sense to me as a logical approach unless the
expected size was truly, absolutely unknown and there was no way to even
guess whether we were reaching the end during the process. Continuing to
try to double the size of an allocation just reeks of problems to me;
how well it actually works in general use I don't actually know, because
I've never implemented it as a technique owing to the aforementioned
concerns.

--


From: Steve Amphlett on
"Steven Lord" <slord(a)mathworks.com> wrote in message <hu8n1d$qah$1(a)fred.mathworks.com>...
>
> "Thibault Daoulas" <thibault.daoulas(a)gmail.com> wrote in message
> news:hu882k$lsj$1(a)fred.mathworks.com...
> > Hi all,
> > I have been reading and trying several ways suggested here to handle
> > growing matrices over loops. My concern starts the same way:
> > - Unknown size of a matrix, and at each step, a new row is concatenated
>
> Do you know an upper bound on the size of your matrix (ideally one that's
> not too much larger than the size your matrix will actually reach in the
> end)? If so, preallocate it to be that size and trim it at the end.
>

Or use the age-old trick of starting with a nominal matrix and, when you
reach the upper bound, creating one double the size and copying. I guess
this is easy in Matlab, because you can just use something like:

[m,n] = size(x);
x(2*m, 1) = 0;

which will do the doubling and copying for you (MATLAB zero-pads the new
rows and copies the old data). Then trim at the end.

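Spelled out as a loop, the trick might look like this (the loop body, sizes, and iteration count are only illustrative):

```matlab
x = zeros(16, 1);           % nominal starting allocation
n = 0;                      % elements actually used
for step = 1:1000           % stand-in for a loop of unknown length
    v = step^2;             % stand-in for the new value
    n = n + 1;
    if n > size(x, 1)
        x(2*size(x, 1), 1) = 0;   % hit the bound: double the allocation
    end
    x(n) = v;
end
x = x(1:n);                 % trim at the end
```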
From: Alan B on
dpb <none(a)non.net> wrote in message <hu8o1b$ok9$1(a)news.eternal-september.org>...
> Steven Lord wrote:
> > [snip]
>
> What he said... :)
>
> [snip] Continuing to try to double the size of an allocation just
> reeks of problems to me; how well it actually works in general use I
> don't actually know because I've never implemented it as a technique
> owing to the aforementioned concerns.

I myself haven't used this, but wouldn't it make sense if the possible array size ranges from some small number up to the limit of what fits in your machine's memory? My desktop isn't too happy when I try to allocate 50 million zeros, so I would want to avoid doing so, if that were really my best guess for the size.

Like I said, I don't know much about the Matlab internals, so I have no idea why appending large arrays to a cell array makes sense, but appending zeros to a normal array doesn't. I just figured, if you have no choice but to repeatedly allocate additional space, you may as well minimize the allocation overhead by doing it ~log(N) times instead of ~N times.
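For what it's worth, the cell-array version people describe usually means collecting whole blocks and concatenating once at the end; growing the cell only copies the small array of pointers, not the data itself (a sketch with made-up sizes):

```matlab
blocks = {};                % cell array of chunks
for k = 1:100               % stand-in loop
    chunk = rand(50, 3);    % stand-in for a block of new rows
    blocks{end+1} = chunk;  % cheap: only the pointer array grows
end
A = vertcat(blocks{:});     % one big allocation and copy at the end
```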
From: Steve Amphlett on
"Alan B" <monguin61REM(a)OVETHIS.yahoo.com> wrote in message <hu8r4i$7cq$1(a)fred.mathworks.com>...
> [snip]
>
> I myself haven't used this, but wouldn't it make sense if the possible array size ranges from some small number up to the limit of what fits in your machine's memory? My desktop isn't too happy when I try to allocate 50 million zeros, so I would want to avoid doing so, if that were really my best guess for the size.
>
> Like I said, I don't know much about the Matlab internals, so I have no idea why appending large arrays to a cell array makes sense, but appending zeros to a normal array doesn't. I just figured, if you have no choice but to repeatedly allocate additional space, you may as well minimize the allocation overhead by doing it ~log(N) times instead of ~N times.

Oops, I should do more reading and less writing!

Anyway, in one of our (non-Matlab) programs, we have a user-defined upper memory limit (akin to a max Matrix size). When we hit that, we decimate by a factor of 2, point our index to the middle and then start writing every 2 steps instead of every 1 step. Etc...
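A rough sketch of that scheme (the names and the data stream are made up, and the exact phase alignment right after each decimation is glossed over):

```matlab
maxlen = 8;                 % user-defined upper limit (tiny, for illustration)
buf = zeros(maxlen, 1);
k = 0;                      % elements stored so far
stride = 1;                 % keep every stride-th incoming sample
phase = 0;                  % incoming samples seen since the last keep
for t = 1:100               % stand-in data stream
    sample = t;
    phase = phase + 1;
    if phase < stride, continue; end
    phase = 0;
    if k == maxlen          % buffer full: decimate in place by 2
        buf(1:maxlen/2) = buf(1:2:maxlen);
        k = maxlen/2;       % point the index to the middle
        stride = 2*stride;  % write every 2 steps instead of every 1
        phase = 1;          % this sample falls off the new grid; drop it
        continue
    end
    k = k + 1;
    buf(k) = sample;
end
buf = buf(1:k);             % only the retained, decimated samples
```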