From: Mithun on
I've seen this on several threads in MATLAB central, but haven't found a suitable solution. Here are some properties of A:

size(A) -> 2061233 2058240
sprank(A) -> 2058240
sprank([A b]) -> 2058241
nnz(A) -> 8232960 ; numel(A) = 4.2425e+012

I'm trying to minimize ||Ax-b||_2^2. The only function that works is lsqr; the rest error out with "Out of Memory".

lsqr is great for some A, but not all, and I wanted to experiment with other methods like \ and fminbnd (since I'm experimenting with constraints on x).
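(For anyone experimenting outside MATLAB: SciPy's sparse lsqr is an analogue of MATLAB's lsqr, and a small sketch like the one below reproduces the setup. The toy matrix here is my own stand-in, not the actual A, and the identity block is just a cheap way to guarantee full column rank.)

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
m, n = 500, 200
# Toy sparse A standing in for the huge system above; the identity
# block guarantees full column rank, mimicking sprank(A) == n.
A = sp.random(m, n, density=0.01, random_state=0, format="csr")
A = A + sp.vstack([sp.identity(n), sp.csr_matrix((m - n, n))]).tocsr()
x_true = rng.standard_normal(n)
b = A @ x_true                      # consistent system, so min residual is 0
x = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
print(np.linalg.norm(A @ x - b))    # should be essentially zero
```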

Any ideas?
From: Matt J on
Sometimes the following iterative algorithm works well:

absA=abs(A);
C=absA*sum(absA,2);

for ii=1:numIter
    X=X-A.'*(A*X-b)./C;
end
From: Jacob on
Could you provide some information about this method (name, paper, ...)? Also, if A is of size m x n, then sum(A,2) will be of size m x 1, so I don't understand that step.

"Matt J " <mattjacREMOVE(a)THISieee.spam> wrote in message <hvqq92$q3n$1(a)fred.mathworks.com>...
> Sometimes the following iterative algorithm works well:
>
> absA=abs(A);
> C=absA*sum(absA,2);
>
> for ii=1:numIter
>
> X=X-A.'*(A*X-b)./C;
>
> end
From: Bruno Luong on
"Jacob " <mithunjacob_oohay(a)yahoo.com> wrote in message <hvqrvj$irq$1(a)fred.mathworks.com>...
> Could you provide some information about this method (name, paper, ...)? Also, if A is size m x n, then sum(A,2) will be of size, m x 1 so, I don't understand that step.

Matt's suggestion is nothing more than the simple (as opposed to conjugate) gradient method with empirical preconditioning. My professors told me never to use it except for academic purposes. In any case, it certainly can't compete with LSQR.

Bruno
From: Matt J on
"Bruno Luong" <b.luong(a)fogale.findmycountry> wrote in message <hvr2sl$odb$1(a)fred.mathworks.com>...
> "Jacob " <mithunjacob_oohay(a)yahoo.com> wrote in message <hvqrvj$irq$1(a)fred.mathworks.com>...
> > Could you provide some information about this method (name, paper, ...)? Also, if A is size m x n, then sum(A,2) will be of size, m x 1 so, I don't understand that step.
==============

Sorry, I meant to write

C=absA.' * sum(absA,2); %note the transpose

So yes, sum(A,2) will be of size m x 1, but absA.' is compatibly sized for this multiplication, and C will be n x 1.



> Matt's suggestion is no thing more than the simple (as opposed to conjugate) gradient method with empirical preconditioning. My professors told me never use it beside academic purpose. In any case it certainly can't compete with LSQR.
==================

No, the preconditioning is not empirical. This particular choice of preconditioner C ensures monotone descent of the algorithm without the need for line searches. You can find some information about it in the paper referenced at the bottom.

How fast it descends compared to other algorithms depends a lot on the structure of A; one expects it to be faster for more diagonally concentrated A. The MATLAB documentation doesn't say what algorithm LSQR uses, but judging from a peek inside lsqr.m, the computation per iteration for LSQR seems much greater. So I don't think it is a priori obvious which algorithm will perform better.
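(To make the claim concrete, here is a NumPy sketch of the corrected iteration, with C = absA.'*sum(absA,2), on a small dense toy problem of my own choosing; it checks that ||Ax-b|| never increases from one step to the next.)

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 120, 80
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

absA = np.abs(A)
C = absA.T @ absA.sum(axis=1)   # the corrected n x 1 preconditioner

x = np.zeros(n)
res = [np.linalg.norm(A @ x - b)]
for _ in range(200):
    x = x - A.T @ (A @ x - b) / C
    res.append(np.linalg.norm(A @ x - b))

# With this C, each step is guaranteed not to increase ||Ax-b||.
assert all(r2 <= r1 + 1e-12 for r1, r2 in zip(res, res[1:]))
print(f"{res[0]:.3f} -> {res[-1]:.3f}")
```

The descent guarantee comes from diag(C) dominating A'*A (diagonal dominance of diag(|A|'|A|*1) - A'*A), so no line search is needed.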

It's also much easier to add box constraints to this algorithm than to others. For example, positivity constraints can be added with the following simple modification. LSQR, apparently, cannot accommodate any constraints.


for ii=1:numIter
    X=X-A.'*(A*X-b)./C;
    X(X<0)=0;
end
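(A NumPy sketch of the projected variant, again on a toy problem of my own: the ground truth is chosen nonnegative, so the clipped iteration should recover it.)

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 150, 60
A = rng.standard_normal((m, n))
x_true = np.abs(rng.standard_normal(n))   # a nonnegative ground truth
b = A @ x_true

absA = np.abs(A)
C = absA.T @ absA.sum(axis=1)

x = np.zeros(n)
for _ in range(5000):
    x = x - A.T @ (A @ x - b) / C
    x[x < 0] = 0.0                        # clip to enforce x >= 0

print(np.max(np.abs(x - x_true)))
```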


A relevant paper is:

@ARTICLE{
DePa,
author = {A R {De Pierro}},
title = {{On} the relation between the {ISRA} and the {EM} algorithm for positron emission tomography},
journal = {IEEE Tr. Med. Im.},
volume = 12,
number = 2,
pages = {328--333},
month = jun,
year = 1993
}