From: Ray Vickson on
On May 4, 6:00 pm, Blue Venom <mandalayray1...(a)gmail.com> wrote:
> y' = sqrt(1 + x^2 + y^2)
> y(0)=0
>
> Assuming there exists a local and unique solution defined on (-d,d),
> prove that the solution y is defined for all x in R and that y(x) >=
> sinh(x) for all x >=0.
>
> Any hint?

Here, finally, is an approach to the proof. It is based on the Picard
method; see, e.g.,
http://math.fullerton.edu/mathews/n2003/picarditeration/PicardIterationProof/Links/PicardIterationProof_lnk_2.html
or http://www-math.mit.edu/~dav/picard.pdf .

Let f(x,y) = sqrt(1+x^2+y^2) and h(x,y) = sqrt(1+y^2). Let y(x) solve
dy/dx = f(x,y), y(0)=0 and z(x) solve dz/dx = h(x,z), z(0)=0. Of
course, z(x) = sinh(x). In Picard's method we would convert the DEs to
integral equations and then solve them iteratively. We have y(x) =
int{f(t,y(t)) dt, t=0..x}, while z(x) = int{ h(t,z(t)) dt, t=0..x} for
x > 0. Start with y_0(x) = z_0(x) = 0 for all x >= 0, and consider the
iterations y_{n+1}(x) = int{f(t,y_n(t)) dt, t=0..x}, z_{n+1}(x) =
int{h(t,z_n(t)) dt, t=0..x}. In the references for the Picard method
there are theorems about the convergence of y_n(x) to y(x) and of
z_n(x) to z(x) as n --> infinity. Such convergence holds at least on a
finite interval 0 <= t <= T0; at t = T0 you can restart the method and
extend the result to T0 + T1, etc. In fact, |df/dy| =
|y|/sqrt(1+x^2+y^2) < 1, so f is globally Lipschitz in y, the iteration
converges on every bounded interval, and the solution y(x) extends to
all of R (the other part of the question).
Anyway, notice that for t > 0 we have h(t,w) < f(t,w). We have z_1(x)
= int{h(t,0) dt, t=0..x} = int{1 dt, t=0..x} = x and y_1(x) =
int{sqrt(1+t^2) dt, t=0..x} > z_1(x) for x > 0. Thus, z_2(x) =
int{h(t,z_1(t)) dt, t=0..x} < int{f(t,z_1(t)) dt, t=0..x} <=
int{f(t,y_1(t)) dt, t=0..x} = y_2(x), since f(t,a) <= f(t,b) whenever
0 <= a <= b. Thus, z_2(x) < y_2(x) for x > 0. Continuing by induction
gives z_n(x) < y_n(x) for x > 0, so in the limit (where strict
inequality may become weak) we get z(x) <= y(x), that is, sinh(x) <=
y(x) for x >= 0.

You can clean up the argument a bit, worrying about convergence, etc.,
but that is the basic idea.
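
For anyone who wants to see this ordering numerically, here is a
minimal sketch of the two Picard iterations (the interval, grid size,
and iteration count are arbitrary choices of mine; it illustrates
rather than proves the claim):

import numpy as np

# Right-hand sides from the post: y' = f(x, y), z' = h(x, z), z = sinh.
f = lambda t, w: np.sqrt(1.0 + t**2 + w**2)
h = lambda t, w: np.sqrt(1.0 + w**2)

T, N = 2.0, 2001                # interval [0, T] and grid size (arbitrary)
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

def picard_step(rhs, w):
    # One Picard iterate w_{n+1}(x) = int{rhs(s, w_n(s)) ds, s=0..x},
    # approximated by a cumulative trapezoidal rule on the grid.
    g = rhs(t, w)
    return np.concatenate(([0.0], np.cumsum(0.5 * dt * (g[1:] + g[:-1]))))

y = np.zeros(N)                 # y_0 = 0
z = np.zeros(N)                 # z_0 = 0
for _ in range(30):             # 30 iterates are plenty on [0, 2]
    y = picard_step(f, y)
    z = picard_step(h, z)

print("max |z_n - sinh|      :", np.abs(z - np.sinh(t)).max())
print("z_n <= y_n everywhere :", bool(np.all(z <= y + 1e-12)))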

R.G. Vickson



From: Torsten Hennig on
> [original question and R.G. Vickson's Picard-iteration argument,
> quoted in full above; snipped]

Why so difficult?
Since the initial conditions for both ODEs
y' = sqrt(1+y^2) and y' = sqrt(1+x^2+y^2) are identical,
sqrt(1+y^2) <= sqrt(1+x^2+y^2)
directly implies
sinh(x) <= (solution of the ODE y' = sqrt(1+x^2+y^2)).
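
(A quick numerical sanity check of this, not a proof; a minimal sketch
assuming numpy/scipy are available, with interval and tolerances being
arbitrary choices:)

import numpy as np
from scipy.integrate import solve_ivp

# Integrate y' = sqrt(1 + x^2 + y^2), y(0) = 0, on [0, 3] and compare
# the result with sinh(x).
sol = solve_ivp(lambda x, y: np.sqrt(1.0 + x**2 + y**2),
                (0.0, 3.0), [0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)

x = np.linspace(0.0, 3.0, 301)
y = sol.sol(x)[0]
print("min of y(x) - sinh(x):", (y - np.sinh(x)).min())   # expected >= 0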

Best wishes
Torsten.
From: Ray Vickson on
On May 5, 11:41 pm, Torsten Hennig <Torsten.Hen...(a)umsicht.fhg.de>
wrote:
> > [original question and R.G. Vickson's Picard-iteration argument,
> > quoted in full above; snipped]
> Why so difficult?
> Since the initial conditions for both ODEs
> y' = sqrt(1+y^2) and y' = sqrt(1+x^2+y^2) are identical,
> sqrt(1+y^2) <= sqrt(1+x^2+y^2)
> directly implies
> sinh(x) <= (solution of the ODE y' = sqrt(1+x^2+y^2)).

Yes, but how do we KNOW this? Of course, it is quite intuitive.
However, it still needs a _proof_. One can either quote a theorem
from some source (which I could not find easily) or _prove_ the
result. The proof I gave is actually quite general and does not depend
on the actual forms of f(x,y) and h(x,y), only on their ordering and
monotonicity.
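
For the record, the sort of theorem one would quote here is the
standard comparison lemma for scalar ODEs. A minimal statement, from
memory (see, e.g., Walter, "Differential and Integral Inequalities",
for precise hypotheses):

% A standard comparison lemma (stated from memory; check a reference).
\begin{lemma}[Comparison]
Let $f(x,y)$ be continuous on $[0,T] \times \mathbb{R}$ and Lipschitz
in $y$. If $y'(x) = f(x, y(x))$ and $z'(x) \le f(x, z(x))$ on $[0,T]$,
with $z(0) \le y(0)$, then $z(x) \le y(x)$ for all $x \in [0,T]$.
\end{lemma}

Here f(x,y) = sqrt(1+x^2+y^2) is Lipschitz in y with constant 1, and
z(x) = sinh(x) satisfies z' = sqrt(1+z^2) <= f(x,z) with z(0) = y(0) =
0, so the lemma gives sinh(x) <= y(x) on [0,T] for every T.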

R.G. Vickson

>
> Best wishes
> Torsten.