From: et al.
Hi!

I am trying to use the row-reduction algorithm, without pivoting,
together with backward substitution, in order to solve systems of
linear equations.

I understand the whole algorithm, and it works; however, I'd like to
separate some of its parts, so far without any success!

As far as I understand, the row reduction goes as follows, where A is
an n by n matrix (indexed from 0), b is the vector of known terms, and
we are solving Ax = b:


for all rows k = 0 ... n-1
begin
        p = a[k, k]                  (save the pivot: a[k, k] itself
                                      is overwritten at j = k below)
        b[k] = b[k] / p

        for all columns j = k ... n-1
                a[k, j] = a[k, j] / p

        for all rows i = k+1 ... n-1
        begin
                m = a[i, k]          (save the multiplier: a[i, k]
                                      is zeroed at j = k below)
                b[i] = b[i] - m * b[k]

                for all columns j = k ... n-1
                        a[i, j] = a[i, j] - m * a[k, j]
        end
end
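
The loop above can be sketched in Python (a hypothetical
`row_reduce`, assuming A and b are plain nested lists of floats,
modified in place):

```python
def row_reduce(a, b):
    """Reduce the n x n matrix a and the known-term vector b in place,
    so that a becomes upper triangular with a unit diagonal."""
    n = len(a)
    for k in range(n):
        pivot = a[k][k]                  # save before it becomes 1
        b[k] /= pivot
        for j in range(k, n):
            a[k][j] /= pivot
        for i in range(k + 1, n):
            factor = a[i][k]             # save before it becomes 0
            b[i] -= factor * b[k]
            for j in range(k, n):
                a[i][j] -= factor * a[k][j]
```

Note that both the pivot and the multiplier are copied out before
their entries are overwritten by the inner loops.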


This works fine. The backward substitution is then easy:


for all rows k = n-1 ... 0
begin
        x[k] = b[k]

        for all columns j = n-1 ... k+1
                x[k] = x[k] - a[k, j] * x[j]
end


I tried it, and it gives me the right solution.
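
The backward substitution can be sketched the same way in Python (a
hypothetical `back_substitute`, assuming a and b have already been
reduced so that a is unit upper triangular):

```python
def back_substitute(a, b):
    """Solve the reduced system ax = b, where a is unit upper
    triangular, by backward substitution. Returns the solution x."""
    n = len(a)
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = b[k]
        for j in range(n - 1, k, -1):
            x[k] -= a[k][j] * x[j]
    return x
```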

Now my question is: can I row-reduce *without* modifying the known
terms? That would be great, since I am separating the "LU-like
decomposition" phase from the actual "solution" phase. Every algorithm
I find online either does what I am doing right now, or adds partial
pivoting, and right now I'd like to avoid pivoting!

Thanks for any pointer you can give me!

et al.




From: Ray Vickson
On Jul 22, 7:35 am, et al. <etal.nowh...(a)gmail.com> wrote:
> Hi!
>
> I am trying to make use of the row-reduce algorithm, without pivoting,
> and backward substitution in order to solve systems of linear equations.
>
> I understand the whole algorithm, and it works, however, I'd like to
> separate some parts without any success!
>
> As far as I understand, the row reduce goes as follows, being A an n by
> n matrix (starting from index 0), b the known term vector, and solving
> Ax = b:
>
> for all rows k = 0 ... n-1
> begin
>         b[k] = b[k] / a[k, k]
>
>         for all columns j = k ... n-1
>                 a[k, j] = a[k, j] / a[k, k]
>
>         for all rows i = k+1 ... n-1
>         begin
>                 b[i] = b[i] - a[i, k] * b[k]
>
>                 for all columns j = k ... n-1
>                         a[i, j] = a[i, j] - a[i, k] * a[k, j]
>         end
> end
>
> This works fine. The backward substitution is then easy:
>
> for all rows k = n-1 ... 0
> begin
>         x[k] = b[k]
>
>         for all columns j = n-1 ... k+1
>                 x[k] = x[k] - a[k, j] * s[j]
> end
>
> I tried it, and it gives me the right solution.
>
> Now my question is, can I row-reduce *without* modifying the know
> terms?

I'm not sure what you are asking: do you want to leave the original
matrix A unchanged? If so, just use "echelon matrices". In step 1 of
Gaussian elimination, you want to subtract row 1 x a[i,1]/a[1,1] from
row i > 1. The new matrix A' is of the form E1*A, where E1 is the
matrix obtained by doing the above operations on the identity matrix;
that is, E1 is obtained from the n x n identity I by subtracting row 1
x a[i,1]/a[1,1] from row i. You can compute column 2 of A' as a'[2] =
E1*a[2], where a[2] = column 2 of A. Now you want to subtract row 2 x
a'[i,2]/a'[2,2] from row i > 2. (Note that you need to compute a'[2]
in order to identify these steps, but there is no reason to compute
the rest of the matrix A'.) These operations give a new matrix A'',
which can be obtained as A'' = E2*A' = E2*E1*A, where E2 is obtained
by doing the above operation on the identity matrix. Continue in this
way.
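
A practical way to store those E1, E2, ... matrices compactly (my
sketch, not from any particular textbook) is ordinary LU
decomposition: keep the multipliers in the lower triangle of A, then
solve for any right-hand side later without ever touching the
original b. Hypothetical 0-indexed Python:

```python
def lu_nopivot(a):
    """Doolittle LU in place: afterwards the upper triangle of a holds
    U and the strict lower triangle holds the multipliers of L (unit
    diagonal implied). The multipliers encode the E1, E2, ... steps."""
    n = len(a)
    for k in range(n):
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]           # multiplier l[i][k]
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j]

def lu_solve(a, b):
    """Solve LUx = b from the factors stored by lu_nopivot.
    b is copied, so the caller's known terms are left unchanged."""
    n = len(a)
    y = list(b)
    for i in range(n):                   # forward substitution (unit L)
        for k in range(i):
            y[i] -= a[i][k] * y[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # back substitution (U)
        x[i] = y[i]
        for j in range(i + 1, n):
            x[i] -= a[i][j] * x[j]
        x[i] /= a[i][i]
    return x
```

Factor once, then call lu_solve with as many different b vectors as
you like.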

Note, however, that avoiding (partial) pivoting may prove to be either
(i) impossible, or (ii) inadvisable. For instance, if a'[2,2] = 0 we
cannot do the operations specified above; if a'[2,2] is nonzero but
"small", the operations are possible but can lead to numerical
instability because of roundoff errors. In such a case we should
permute the rows of A' (via a permutation matrix P1) so that the
permuted matrix A_p' has a_p'[2,2] "not small"; ideally, we should
choose the largest |a'[i,2]| to go into row 2 after the interchange,
but we need not stick to this rigorously. Anyway, after the row
interchange apply row reduction to A_p', to get A'' as A'' =
E2*P1*E1*A, etc. Of course, while doing the operations on A you also
do them on b, so the final system is of the form Ek*P(k-1)*E(k-1)* ...
*P1*E1*A x = Ek*P(k-1)*E(k-1)* ... *P1*E1*b, and the new left-hand
side Ek*P(k-1)*E(k-1)* ... *P1*E1*A is upper-triangular.
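
The P1, P2, ... interchanges can be recorded as a permutation vector
rather than as full matrices. A hypothetical sketch of LU with
partial pivoting (0-indexed, a modified in place so that the factors
satisfy P*A = L*U):

```python
def lu_partial_pivot(a):
    """LU with partial pivoting: swap rows so each pivot is the
    largest available |a[i][k]|, and return the permutation vector p
    recording the row order (the product of the P matrices)."""
    n = len(a)
    p = list(range(n))
    for k in range(n):
        # pick the row i >= k with the largest |a[i][k]| as pivot row
        m = max(range(k, n), key=lambda i: abs(a[i][k]))
        if m != k:
            a[k], a[m] = a[m], a[k]
            p[k], p[m] = p[m], p[k]
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]           # multiplier l[i][k]
            for j in range(k + 1, n):
                a[i][j] -= a[i][k] * a[k][j]
    return p
```

To solve, permute b by p (y[i] = b[p[i]]) before the forward
substitution; the original b again stays untouched.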

Doing things this way preserves the original matrix A and original RHS
b, which is convenient if you need to use them again later for some
other purposes.

All this is covered very nicely in the book "Linear Programming", by
Vasek Chvatal: see
http://www.amazon.ca/Linear-Programming-Vasek-Chvatal/dp/0716715872 .

R.G. Vickson



> That would be great, since I am separating the "LU-like
> decomposition" phase, from the actual "solution". Every algorithm I
> find online seems either to do what I am doing right now, or it adds
> partial pivoting, and right now I'd like to avoid pivoting!
>
> Thanks for any pointer you can give me!
>
> et al.