From: Anders Österling
Hi all,

I've got an application that creates three large matrices (350 000 x 250), stores them to binary files using fwrite(), and later in the algorithm reads them back into memory using fread(). So far, no problem.
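For reference, the pattern I'm using is roughly this (simplified to a single matrix; rand() and the file name are just stand-ins):

A = rand(350000, 250);                    % stand-in for one of the three matrices
fid = fopen('A.bin', 'w');
fwrite(fid, A, 'float');                  % stored as 4-byte floats on disk
fclose(fid);
clear A                                   % free the memory

% ... later in the algorithm ...
fid = fopen('A.bin', 'r');
A = fread(fid, [350000, 250], 'float');   % read back (as doubles by default)
fclose(fid);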

When I extend the matrices to (350 000 x 1 000) I get the classical "out of memory" error - there simply aren't large enough contiguous chunks of memory in my machine to hold them. I first tried to rewrite the code to iterate over smaller chunks of the matrices and perform the calculations chunk by chunk (i.e. 10 iterations over 35 000 x 1 000 matrices; a simplified sketch of what I tried is below), but due to the complex operations that have to be performed across all three matrices I finally had to give up on this solution. Instead I turned to the spmd and "codistributed" features of the Parallel Computing Toolbox. My hope was that I could codistribute the arrays over my small MATLAB cluster (4 dual-core machines) and then proceed as normal - I would lose some execution time, but that isn't essential. But - I just can't figure out how to do it!
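This is roughly how I was reading the row blocks in the chunked version (simplified; the file name and chunk size are placeholders). Since MATLAB writes the matrix column-major, each block read has to skip over the rows belonging to the other blocks:

nRows = 350000; nCols = 1000;
chunkRows = 35000;                        % 10 row blocks
fid = fopen('A.bin', 'r');
for b = 1:nRows/chunkRows
    % jump to the first element of this row block in column 1
    fseek(fid, (b-1)*chunkRows*4, 'bof');
    % read chunkRows floats per column, then skip the remaining rows
    blk = fread(fid, [chunkRows, nCols], ...
                sprintf('%d*float', chunkRows), (nRows - chunkRows)*4);
    % ... perform the calculations on blk ...
end
fclose(fid);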

It seems to me that I would need to refer to the different parts of the codistributed arrays as cells - and then I'm back to the problem I had when iterating over parts of the matrices! So, my questions are:

1) Is it possible to create a large matrix that resides on several cluster nodes without changing my code too much? I.e. I would like to do something like "spmd, A=zeros(350000,1000,'codistributed'), end, fwrite(fid,A(:,150),'float')", where A is a codistributed matrix that resides somewhere on the cluster.

2) How would I do this? Does anyone know of a good tutorial, or can anyone give me a code snippet? To make it concrete, a sketch of what I'm imagining is below. I'm lost in the MATLAB help files...
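This is a pure sketch of the kind of thing I'm hoping is possible - I don't know whether it is a valid use of the toolbox, and it assumes a worker pool is already open (matlabpool in my release):

matlabpool open 8                         % one worker per core on the 4 machines

spmd
    % spread A over the combined memory of the workers
    A = zeros(350000, 1000, 'codistributed');
    % presumably each worker would fill its own piece, via
    % getLocalPart() and codistributed.build()?
    localA = getLocalPart(A);
    % ... compute on localA ...
end

% back on the client A appears as a distributed array - can I
% really just pull one column over and fwrite it like this?
col = gather(A(:,150));                   % 350000x1, gathered to the client
fid = fopen('A.bin', 'w');
fwrite(fid, col, 'float');
fclose(fid);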

With kind regards,
Anders