From: Tony Harker on
Dear Steve,

I agree with all you say: the result returned by Version 5 was
anomalous (and obscured the very point I was originally trying to make with
the example, taken from a course, about the different characteristics of
explicit, predictor-corrector, and implicit methods of the same order). I'm
happy with what version 7 is doing -- but I was surprised when I saw the
difference between versions 5 and 7.

Tony

]-> -----Original Message-----
]-> From: schochet123 [mailto:schochet123(a)gmail.com]
]-> Sent: 02 February 2010 08:25
]-> To: mathgroup(a)smc.vnet.net
]-> Subject: Re: Numerical Problem
]->
]-> Actually, the behavior obtained from version 7 is in some sense the
]-> "correct" behavior, and the result in version 5.2 is the anomaly. As
]-> you note, the ODE system is stiff, and as Daniel notes, the
]-> approximation scheme is very ill-conditioned. For example, changing
]-> the initial data from {2,-1} to {2,-1}+{0,10^-16} yields a problem
]-> for which the exact solution of the approximation scheme (without
]-> any round-off) blows up rapidly, like the solution obtained in
]-> version 7. The solution you obtained in version 5.2, which I get in
]-> version 6.0, and which Daniel gets on some machines and not others
]-> for all versions, happens because the numerical calculation somehow
]-> manages to keep the numerical approximation exactly a multiple of
]-> the one eigenvector, {-2,1}, for which such blow-up does not occur
]-> in the exact solution of the approximation scheme. This perfect
]-> accuracy in the ratio of the two components of the solution is not
]-> expected and should not be relied upon.
]->
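As a concrete check on this: for a linear system y' == A.y, the epcm step quoted
below reduces to multiplication by I + h A + (h A)^2/2, so each eigencomponent is
scaled by 1 + h lambda + (h lambda)^2/2 per step. A minimal sketch (the helper name
growth is introduced here purely for illustration):

a = {{998, 1998}, {-999, -1999}};
Eigensystem[a]    (* eigenvalues -1000 and -1; eigenvectors proportional to {-1, 1} and {-2, 1} *)

growth[hl_] := 1 + hl + hl^2/2    (* per-step factor applied to an eigencomponent of the scheme *)
growth[0.1 (-1)]                  (* 0.905: the slow mode decays, matching 1.81/2 in the tables below *)
growth[0.1 (-1000)]               (* 4901: anything along {-1, 1}, even of size 10^-16, explodes *)

Both eigenvalues are negative, so the exact ODE solution decays; the blow-up is a
property of the explicit scheme at this step size, under which a 10^-16 component
along {-1, 1} is amplified by 4901 per step while the wanted component shrinks.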
]-> Looking into the internals of how Mathematica implements the
]-> approximation scheme numerically is a bad idea, for several reasons:
]-> 1) Whatever fix one might find may not work on future versions,
]-> other machines, or other problems.
]-> 2) The benefits of optimizations Mathematica has made in organizing
]-> computations may be lost.
]-> 3) It is too much work.
]-> In particular, simply calculating with more digits of accuracy, as
]-> mentioned by RJF, should be expected to merely delay the problem for
]-> a few time steps, not eliminate it.
]->
]-> The correct approach is to use an approximation scheme that is
]-> well-conditioned. Although this could be accomplished by reducing h
]-> drastically, a more efficient method is to switch to an implicit
]-> scheme, or to another type of scheme that is well-conditioned for
]-> stiff problems. For the theory involved, look up stiff problems in
]-> any text on numerical solutions of ODEs. Mathematica knows how to do
]-> this automatically. Indeed, even in version 7,
]->
]-> thex[t_] = x[t] /. NDSolve[{x'[t] == {{998, 1998}, {-999, -1999}}.x[t],
]->     x[0] == {2, -1}}, x, {t, 0, 1}][[1]]
]->
]-> yields an accurate solution.
]->
]-> Steve
]->
]->
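For comparison with the automatic NDSolve call above, here is a minimal sketch of
an implicit scheme of the same (second) order, the trapezoidal rule; the name itrap
is introduced here only for illustration. Its per-step factor
(1 + h lambda/2)/(1 - h lambda/2) has modulus below one for every eigenvalue with
negative real part, so it stays bounded at the same step size h = 0.1:

(* one trapezoidal step for y' == A.y: solve (I - h A/2).y1 == (I + h A/2).y0 *)
itrap[h_, y_List, a_List] :=
  LinearSolve[IdentityMatrix[Length[y]] - h a/2,
              (IdentityMatrix[Length[y]] + h a/2).y]

NestList[itrap[0.1, #, {{998, 1998}, {-999, -1999}}] &, {2., -1.}, 10]

Here the stiff mode is damped by a factor -49/51 per step instead of being
amplified by 4901, while the slow mode is still tracked with second-order accuracy.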
]-> On Jan 29, 2:49 pm, "Tony Harker" <a.har...(a)ucl.ac.uk> wrote:
]-> > I expect I'm missing something in the documentation, but I can't
]-> > see why this simple example of Euler's predictor-corrector applied
]-> > to a stiff pair of equations should behave differently in Version 7
]-> > and Version 5.
]-> > ======================================================================
]-> > $Version
]-> > epcm[h_,y_List,f_List]:=y+0.5 h (f.y+f.(y+h f.y))
]-> > fepcm[h_,t0_,tmax_,y0_List,f_List]:=NestList[epcm[h,#,f]&,y0,10]
]-> > app1=fepcm[.1,0.,1,{2.,-1.},{{998,1998},{-999,-1999}}]
]-> >
]-> > 7.0 for Microsoft Windows (32-bit) (February 18, 2009)
]-> >
]-> > {{2.,-1.},{1.81,-0.905},{1.63805,-0.819025},{1.48243,-0.741217},
]-> >  {1.34027,-0.669464},{-5.34365,5.95073},{-32138.7,32139.3},
]-> >  {-1.57517*10^8,1.57517*10^8},{-7.71992*10^11,7.71992*10^11},
]-> >  {-3.78353*10^15,3.78353*10^15},{-1.85431*10^19,1.85431*10^19}}
]-> > ======================================================================
]-> > $Version
]-> > epcm[h_,y_List,f_List]:=y+0.5 h (f.y+f.(y+h f.y))
]-> > fepcm[h_,t0_,tmax_,y0_List,f_List]:=NestList[epcm[h,#,f]&,y0,10]
]-> > app1=fepcm[.1,0.,1,{2.,-1.},{{998,1998},{-999,-1999}}]
]-> >
]-> > 5.2 for Microsoft Windows (June 20, 2005)
]-> >
]-> > {{2.,-1.},{1.81,-0.905},{1.63805,-0.819025},{1.48244,-0.741218},
]-> >  {1.3416,-0.670802},{1.21415,-0.607076},{1.09881,-0.549404},
]-> >  {0.99442,-0.49721},{0.899951,-0.449975},{0.814455,-0.407228},
]-> >  {0.737082,-0.368541}}
]-> > ======================================================================
]-> > It's clear that the version 5 result is more acceptable, as we can
]-> > do everything in integers and convert to reals at the end:
]-> >
]-> > $Version
]-> > epcm[h_,y_List,f_List]:=y+ h (f.y+f.(y+h f.y))/2
]-> > fepcm[h_,t0_,tmax_,y0_List,f_List]:=NestList[epcm[h,#,f]&,y0,10]
]-> > N[app1=fepcm[1/10,0,1,{2,-1},{{998,1998},{-999,-1999}}]]
]-> >
]-> > 7.0 for Microsoft Windows (32-bit) (February 18, 2009)
]-> >
]-> > {{2.,-1.},{1.81,-0.905},{1.63805,-0.819025},{1.48244,-0.741218},
]-> >  {1.3416,-0.670802},{1.21415,-0.607076},{1.09881,-0.549404},
]-> >  {0.99442,-0.49721},{0.899951,-0.449975},{0.814455,-0.407228},
]-> >  {0.737082,-0.368541}}
]-> > ======================================================================
]-> >
]-> > What I don't understand is why this discrepancy between Versions 5
]-> > and 7 occurs, and whether I can get version 7 to behave in the same
]-> > way as 5.
]-> >
]-> > For something even more exciting, there's
]-> >
]-> > ======================================================================
]-> >
]-> > $Version
]-> > epcm[h_,y_List,f_List]:=y+0.500000000000000000 h (f.y+f.(y+h f.y))
]-> > fepcm[h_,t0_,tmax_,y0_List,f_List]:=NestList[epcm[h,#,f]&,y0,10]
]-> > app1=fepcm[.100000000000000000,0.00000000000000000,
]-> >  1.00000000000000000,{2.00000000000000000,-1.00000000000000000},
]-> >  {{998.000000000000000,1998.000000000000000},
]-> >   {-999.00000000000000000,-1999.00000000000000000}}]
]-> >
]-> > 7.0 for Microsoft Windows (32-bit) (February 18, 2009)
]-> >
]-> > {{2.0000000000000000,-1.0000000000000000},
]-> >  {1.810000000000,-0.905000000000},{1.6380500,-0.8190250},
]-> >  {1.482,-0.741},{0.*10^1,0.*10^1},{0.*10^6,0.*10^6},
]-> >  {0.*10^10,0.*10^10},{0.*10^15,0.*10^15},{0.*10^20,0.*10^20},
]-> >  {0.*10^24,0.*10^24},{0.*10^29,0.*10^29}}
]-> > ======================================================================
]-> >
]-> > Tony Harker
]->
]->
]->
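The 0.*10^n entries in the last run above are arbitrary-precision numbers whose
significant digits have all been consumed, which illustrates the remark that extra
digits only postpone the failure. A minimal sketch of watching this happen in
Mathematica's significance arithmetic (the starting precision 30 is an arbitrary
illustrative choice):

epcm[h_, y_List, f_List] := y + h/2 (f.y + f.(y + h f.y))

prec = 30;   (* any finite starting precision behaves the same way, just later *)
orbit = With[{hp = SetPrecision[1/10, prec],
              ap = SetPrecision[{{998, 1998}, {-999, -1999}}, prec]},
    NestList[epcm[hp, #, ap] &, SetPrecision[{2, -1}, prec], 12]];

Precision /@ orbit   (* drops by a few digits every step until nothing is left *)

Each step loses on the order of Log10[4901] (about 3.7) digits to the amplified
stiff mode, so roughly 18 starting digits are gone within four or five steps,
consistent with the output above.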


From: Daniel Lichtblau on
Tony Harker wrote:
> Dear Steve,
>
> I agree with all you say: the result returned by Version 5 was
> anomalous (and obscured the very point I was originally trying to make with
> the example, taken from a course, about the different characteristics of
> explicit, predictor-corrector, and implicit methods of the same order). I'm
> happy with what version 7 is doing -- but I was surprised when I saw the
> difference between versions 5 and 7.
>
> Tony

To see what an anomaly your example was, try this variation. You will
not get "nice" results on any platform.

epcm[h_, y_List, f_List] := y + 1/2*h*(f.y + f.(y + h*f.y))
app[0] = {3., -1.};
mat = {{997, 2997}, {-999, -2999}};

Do[Print[InputForm[app[j] = epcm[1/10., app[j - 1], mat]]], {j, 1, 10}]

This remains true if you use precision tricks (though raising precision will
maintain reasonable results longer).

The reason, as Steve alluded to (or maybe stated outright), is that we can
represent ratios of 2:1 exactly in floating-point approximations, and this is
what happens to the vector components of your original example when extended
precision downsizes to fit into standard double-precision reals. We cannot do
this when the ratio is 3:1, and that is the case for the component magnitudes
of the eigenvector corresponding to the smaller eigenvalue of the matrix above.

Daniel Lichtblau
Wolfram Research
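One way to test Daniel's observation on a given machine is the sketch below (the
names orbit1 and orbit2 are introduced only for illustration). It checks, step by
step, whether each iterate is still an exact machine-number multiple of the benign
eigenvector: {-2, 1} for the original matrix, {-3, 1} for the variant above.
Whether the first test stays True throughout is machine- and version-dependent,
which is precisely the point; the second is not expected to hold, in line with the
remark that the variant misbehaves on every platform.

epcm[h_, y_List, f_List] := y + 1/2*h*(f.y + f.(y + h*f.y))

orbit1 = NestList[epcm[1/10., #, {{998, 1998}, {-999, -1999}}] &, {2., -1.}, 10];
orbit2 = NestList[epcm[1/10., #, {{997, 2997}, {-999, -2999}}] &, {3., -1.}, 10];

(#[[1]] == -2 #[[2]]) & /@ orbit1   (* a 2:1 ratio can survive machine rounding exactly *)
(#[[1]] == -3 #[[2]]) & /@ orbit2   (* a 3:1 ratio in general cannot *)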


From: Tony Harker on
Dear Daniel,

Yes, that's a nice demonstration.

Tony
