From: Rob Johnson on
In article <20100524041147.B566(a)agora.rdrop.com>,
William Elliot <marsh(a)rdrop.remove.com> wrote:
>Assume f:(0,1] -> R is continuous and
>for all sequences {sj} of elements from (0,1] /\ Q
>for which sj -> 0, the sequence f(sj) -> 0.
>
>Show that for all sequences of elements {sj} from (0,1]
>for which sj -> 0, the sequence f(sj) -> 0
>
>In other words, show that f can be continuously extended to 0.
>
>Proposition.
>f in C((0,1],R), for all sequence s into (0,1] /\ Q, (s -> 0 ==> fs -> 0)
> ==> for all sequence s into (0,1], (s -> 0 ==> fs -> 0)

First, we will prove that for q in Q,

lim f(q) = 0
q->0+

Suppose not. Then there must be some e > 0 so that for all d > 0,
there is a q in (0,d) /\ Q so that |f(q)| > e. This means that we
can construct a sequence { q_n } in (0,1] /\ Q so that q_n -> 0+, yet f(q_n)
does not converge to 0, contradicting the hypothesis. -><-

Thus, for any e > 0, we can find a d > 0 so that for all q in
(0,d) /\ Q, we have |f(q)| < e. Since f is continuous and Q is
a dense subset of (0,d), we must have that |f(x)| <= e for all
x in (0,d). Thus,

lim f(x) = 0
x->0+
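
A quick numerical illustration of the proposition (not part of the proof;
the function and the sequences are my own choices): f(x) = x sin(1/x) is
continuous on (0,1] and f(q) -> 0 along rational q -> 0+, so by the
proposition f(s_j) -> 0 along any sequence s_j -> 0+, rational or not.

```python
import math

# f(x) = x*sin(1/x) is continuous on (0,1]; |f(x)| <= x, so f -> 0 at 0+
def f(x):
    return x * math.sin(1.0 / x)

# a rational sequence tending to 0+
rationals = [1.0 / n for n in range(1, 10001)]
# an irrational sequence tending to 0+ (sqrt(2)/n is irrational)
irrationals = [math.sqrt(2) / n for n in range(2, 10001)]

# the tails of both image sequences are small, as the proposition predicts
tail_rat = max(abs(f(q)) for q in rationals[-100:])
tail_irr = max(abs(f(s)) for s in irrationals[-100:])
print(tail_rat < 1e-3, tail_irr < 1e-3)  # True True
```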

Rob Johnson <rob(a)trash.whim.org>
take out the trash before replying
to view any ASCII art, display article in a monospaced font
From: cwldoc on
> From: cwldoc <cwldoc(a)aol.com>
>
> >> Proposition.
> Proposition 1.
>
> >> f in C((0,1],R), for all sequence s into (0,1] /\ Q, (s -> 0 ==> fs -> 0)
> >>   ==> for all sequence s into (0,1], (s -> 0 ==> fs -> 0)
>
> > Let sj be a sequence of elements of (0,1] such that sj -> 0
>
> > Suppose that f(sj) fails to converge to zero.
>
> > Then there exists e > 0 such that for every positive integer, J,
> > there exists a j > J such that |f(sj)| > e
>
> > Thus (letting J = 1), there exists a positive integer, j1,
>
> Do you mean jk instead of j1?
>
> > such that |f(s(jk))| > e for every k.
>
> Are you creating a subsequence s_jk of s by induction?
>
> > By continuity of f, for each k, we can choose a dk > 0 such that
>
> > |s(jk) - x| < dk implies |f(s(jk)) - f(x)| < |f(s(jk))| - e
>
> Do you mean "such that for all x in (0,1],
> (|s(jk) - x| < dk implies |f(s(jk)) - f(x)| < |f(s(jk))| - e)"?
>
> > and such that dk < 2^(-k)
>
> > For each k, choose a rational element, tk,
> > of (0,1] such that |tk - s(jk)| < dk
>
> > Then tk -> 0. Furthermore, |f(tk)| > e, for all k,
> > so f(tk) does not converge to zero.
>
> |f(s_jk) - f(tk)| < |f(s_jk)| - e
>
> > This is a contradiction, since tk is
> > comprised of rational elements of (0,1].
>
> Proposition 2. Let X and Y be metric spaces.
> Assume D is a dense subset of X, U an open subset of X, and f in C(U,Y).
> For all a in cl U - U, assume for all sequences sj of
> elements of U /\ D which converge to a, that the sequence
> f(sj) converges to f(a).
>
> Show for every a in cl U - U that for all sequences sj of
> elements of U which converge to a, the sequence f(sj)
> converges to f(a).
>
> Can your proof be extended to a proof of the above generalization?
> In the part
> |f(s(jk)) - f(x)| < |f(s(jk))| - e
>
> d(f(s_jk), f(x)) can be used for |f(s(jk)) - f(x)|
> and d(f(s_jk), f(a)) - e can be used for |f(s(jk))| - e
>
> Would you concur that your proof of proposition 1
> is a template for a proof of proposition 2?
>
> ----

Since, as it stands, f(a) is not defined for a in cl U - U, how about modifying proposition 2 as follows:

Let X and Y be metric spaces. Let b be some fixed element of Y. Let D be a dense subset of X, and U an open subset of X.

Let A be a subset of X such that A /\ U = empty set.

Let f be in C(U,Y), such that for all sequences sj of elements of U /\ D which converge to an element of A, the sequence
f(sj) converges to b.

Show that for all sequences sj of elements of U which converge to an element of A, the sequence
f(sj) converges to b.

Proof:
Let sj be a sequence of elements of U such that sj converges to an element, a, of A.

Suppose f(sj) fails to converge to b.

Then there must exist e > 0, and a subsequence, sjk, of sj, such that d(f(sjk),b) > e for all k.

For each k, by continuity of f, we may choose a dk > 0 such that for all x in U,
d(sjk,x) < dk implies d(f(sjk),f(x)) < d(f(sjk),b) - e
and dk < 2^(-k)

Since D is a dense subset of X, we may choose, for each k, an element, tk, of D in U /\ B(sjk,dk). Then tk is contained in U /\ D, and since d(tk,sjk) < dk < 2^(-k) and sjk converges to a, the sequence tk converges to a. But, by the triangle inequality, d(f(tk),b) >= d(f(sjk),b) - d(f(sjk),f(tk)) > e for all k, so f(tk) does not converge to b, which is a contradiction.

Thus f(sj) must converge to b.
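
The approximation step (choosing tk in D within dk of s_jk) can be
sketched numerically in the concrete case X = R, D = Q. The sequence and
the denominators below are my own choices, and floats are of course
themselves rational, so this is only a cartoon of the density argument:

```python
from fractions import Fraction
import math

# Given s_k in (0,1] with s_k -> 0, pick rational t_k with |t_k - s_k| < 2^(-k).
# Here D = Q is dense in R; Fraction.limit_denominator supplies the t_k
# (best rational approximation with bounded denominator, error <= 1/(2*N)).
s = [math.sqrt(3) / (k + 2) for k in range(50)]   # "irrational" s_k -> 0

t = []
for k, sk in enumerate(s):
    # denominator bound 2^(k+2) guarantees error <= 2^(-k-3) < 2^(-k)
    tk = Fraction(sk).limit_denominator(2 ** (k + 2))
    t.append(tk)

errors = [abs(float(tk) - sk) for tk, sk in zip(t, s)]
ok = all(err < 2.0 ** (-k) for k, err in enumerate(errors))
print(ok)  # the rational t_k track s_k, so t_k -> 0 as well
```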
From: cwldoc on
> [earlier discussion of propositions 1 and 2 snipped]
>
> Since, as it stands, f(a) is not defined for a in cl U - U, how about
> modifying proposition 2 as follows:
>
> Let X and Y be metric spaces. Let b be some fixed element of Y. Let D be
> a dense subset of X, and U an open subset of X.
>
> Let A be a subset of X such that A /\ U = empty set.
>
> Let f be in C(U,Y), such that for all sequences sj of elements of U /\ D
> which converge to an element of A, the sequence f(sj) converges to b.
>
> Show that for all sequences sj of elements of U which converge to an
> element of A, the sequence f(sj) converges to b.
>
> [proof snipped]

Here is an alternate approach, in the case that A = cl U - U:

Proposition:

Let X and Y be metric spaces. Let b be some fixed element of Y. Let D be a dense subset of X, and U be an open subset of X. Let f be a continuous function from U to Y. Let A = cl U - U

Suppose that for all sequences sj of elements of U /\ D which converge to an element of A, the sequence
f(sj) converges to b.

Then for all sequences sj of elements of U which converge to an element of A, the sequence
f(sj) converges to b.

Proof:

Let g: cl U -> Y be defined by
g(x) = f(x), for x on U,
g(x) = b, for x on A

Then the condition

[For all sequences, sj, of elements of U /\ D, which converge to an element of A, the sequence f(sj) converges to b.]

is equivalent to the continuity of g|D (the restriction of g to D).

And the condition

[For all sequences, sj, of elements of U which converge to an element of A, the sequence f(sj) converges to b.]

is equivalent to the continuity of g.

Thus to prove the proposition, it suffices to show:
g|D continuous => g continuous.

Toward that end, suppose that g|D is continuous.

Since g is continuous on U (because g|U = f), it remains to prove that g is continuous on A:

Let a be any element of A. Let e > 0.

By continuity of g|D, there exists a d1 > 0, such that
for all x in D /\ (cl U),
d(x,a) < d1 => d(g(x),g(a)) < e/2

I claim that for all x in cl U,
d(x,a) < d1 => d(g(x),g(a)) <= e/2 < e,
which proves the continuity of g at a.

Suppose, to the contrary, that there exists an x in cl U such that
d(x,a) < d1 and d(g(x),g(a)) > e/2.

Note that x is not in A, because if it were, then d(g(x),g(a)) = d(b,b) = 0 <= e/2. So x is in U.

By continuity of f, there is a d2 > 0 such that, for all xo in U,
d(x,xo) < d2 => d(g(x),g(xo)) < d(g(x),g(a)) - e/2
and such that d2 < d1 - d(x,a)

Since D is a dense subset of X, there exists an element,
x1, of D in B(x,d2) /\ U.

This leads to a contradiction, because (by the triangle inequality)
d(x1,a) <= d(x1,x) + d(x,a) < d2 + d(x,a) < d1, yet
d(g(x1),g(a)) >= d(g(x),g(a)) - d(g(x),g(x1)) > e/2,
contradicting the choice of d1.
Q.E.D.
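
In the concrete case X = [0,1], U = (0,1], A = {0}, b = 0, the function g
of this proof is just extension by 0. A small sanity check with an f of my
own choosing (not from the thread):

```python
import math

# f is continuous on U = (0,1]; g extends it to cl U = [0,1] with g(0) = b = 0
def f(x):
    return x * math.sin(1.0 / x)

def g(x):
    return 0.0 if x == 0.0 else f(x)

# spot-check continuity of g at 0: |g(x) - g(0)| = |x*sin(1/x)| <= |x| -> 0
xs = [10.0 ** (-k) for k in range(1, 12)]
print(all(abs(g(x) - g(0.0)) <= x for x in xs))  # True
```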
From: William Elliot on
On Wed, 26 May 2010, cwldoc wrote:
> Here is an alternate approach, in the case that A = cl U - U:
>
> Proposition:

> Let X and Y be metric spaces. Let b be some fixed element of Y. Let D be
> a dense subset of X, and U be an open subset of X. Let f be a continuous
> function from U to Y. Let A = cl U - U

> Suppose that for all sequences sj of elements of U /\ D which converge
> to an element of A, the sequence f(sj) converges to b.
>
> Then for all sequences sj of elements of U which converge to an element
> of A, the sequence f(sj) converges to b.
>
> Proof:
>
> Let g: cl U -> Y be defined by
> g(x) = f(x), for x on U,
> g(x) = b, for x on A
>
> Then the condition
>
> [For all sequences, sj, of elements of U /\ D, which converge to an
> element of A, the sequence f(sj) converges to b.]
>
> is equivalent to the continuity of g|D (the restriction of g to D).
>
> And the condition
>
> [For all sequences, sj, of elements of U which converge to an element of
> A, the sequence f(sj) converges to b.]
>
> is equivalent to the continuity of g.

For the continuity of g, it's necessary to show for all sequences
s into A \/ U for which s converges to a in A, that
gs = {g(sj)} -> g(a). The additional sequences are a nuisance
in view of the continuity of f|U and the current proposition.
A problematic case is if s is into A.

> Thus to prove the proposition, it suffices to show:
> g|D continuous => g continuous.
>
> Toward that end, suppose that g|D is continuous.
>
> Since g is continuous on U (because g|U = f), it remains to prove that g
> is continuous on A:
>
That's trivial if A is a singleton such as in the first proposition.

The bulk of the work is toward showing g is continuous on A \/ U.

> [remainder of the epsilon-delta argument snipped]
>
I think we're fairly close to a metric generalization of your
original proof. Here's a sketch of my version.

--
dense D, open U subset X, f:(X,d) -> (Y,d), f in C(U,Y)
for all u in bd U, sequence s into U /\ D, (s -> u ==> fs -> f(u))
==> for all u in bd U, sequence s into U, (s -> u ==> fs -> f(u))

otherwise:
some u in bd U, sequence s into U with s -> u, not fs -> f(u)
not for all r > 0, some n with for all j > n, d(f(sj),f(u)) < r
some r > 0 with for all n, some jn > n with r <= d(f(s_jn),f(u))

let t1 = j1, t_(k+1) = j_tk; t increasing; st subsequence
for all j, r/2 < d(f(st_j),f(u)), 0 < rj = d(f(st_j),f(u)) - r/2

for all j, some dj < 1/j with for all x in U,
d(st_j,x) < dj ==> d(f(st_j),f(x)) < rj
for all j, some aj in B(st_j,dj) /\ U /\ D, d(st_j,aj) < dj
a -> u. lim_j d(aj,u) <= lim_j (d(aj,st_j) + d(st_j,u)) = 0

for all j, d(f(st_j),f(u)) <= d(f(st_j),f(aj)) + d(f(aj),f(u))
< rj + d(f(aj),f(u)), so r/2 < d(f(aj),f(u))
not fa -> f(u), no!

--
Here's the same assuming f uniformly continuous on U. bd U = boundary U.

dense D, open U subset X, f:(X,d) -> (Y,d), f uniformly continuous on U,
for all u in bd U, sequence s into U /\ D, (s -> u ==> fs -> f(u))
==> for all u in bd U, sequence s into U, (s -> u ==> fs -> f(u))

if u in bd U, s sequence into U with s -> u:
for all j, some aj in B(sj,1/j) /\ U /\ D, d(aj,sj) < 1/j
a = (aj)_j -> u. lim_j d(aj,u) <= lim_j (d(aj,sj) + d(sj,u)) = 0
fa -> f(u); given r > 0, some d > 0 with for all x,y in U,
d(x,y) < d ==> d(f(x),f(y)) < r
some n with for all k > n, d(f(ak),f(u)) < r
for all j > max(n, 1/d), d(f(sj),f(u)) <= d(f(sj),f(aj)) + d(f(aj),f(u)) < 2r
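
A numerical cartoon of this uniformly continuous case (the function, the
sequence, and the tolerances are invented for illustration): approximate
each sj by a nearby point aj, and let uniform continuity transfer the
convergence of f(aj) to f(sj).

```python
import math

# g(x) = sqrt(x)*sin(1/x) extends continuously to [0,1] (|g| <= sqrt(x)),
# hence is uniformly continuous on (0,1]
def g(x):
    return math.sqrt(x) * math.sin(1.0 / x)

# s_j -> 0 an arbitrary sequence; a_j are nearby approximants (stand-ins
# for the points of U /\ D in the sketch), kept strictly positive
s = [math.pi / (4 * j) for j in range(1, 2000)]
a = [round(x, 6) or 1e-6 for x in s]

# on the tail, both g(s_j) is small and g(s_j) stays close to g(a_j)
tail = range(len(s) - 100, len(s))
print(max(abs(g(s[j])) for j in tail) < 0.05,
      max(abs(g(s[j]) - g(a[j])) for j in tail) < 0.05)  # True True
```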

--
Here's a general version that I've not been able to prove.

dense D, open U subset X, A = bd U \/ (U /\ D)
f:X -> Y, f in C(U,Y), f in C(A,Y) ==> f in C(cl U,Y)

Even with the lemma of proposition 2 and the restriction
of X and Y to metric spaces, doubt remains for sequences
of elements of bd U, as pointed out above.

I've not tried using sequences for 1st countable spaces
nor nets for spaces in general. I've tried to show that
if V is an open subset of Y, then (f|cl U)^-1(V) is an open
subset of cl U (in the subspace topology) and that for any
subset K of cl U, (f|cl U)(cl_(cl U) K) subset cl (f|cl U)(K)

It occurs to me that by restating the problem, the nuisance of
restricting f to cl U would disappear.

dense D, open U subset X, cl U = X, A = bd U \/ (U /\ D)
f:X -> Y, f in C(U,Y), f in C(A,Y) ==> f in C(X,Y)

or

dense D, open dense U subset X, A = bd U \/ (U /\ D)
f:X -> Y, f in C(U,Y), f in C(A,Y) ==> f in C(X,Y)

What a weird extension theorem. Upon request, I'll write up
verbose versions for various parts of my shorthand thinking.

----
From: Ostap Bender on
On May 24, 4:21 am, William Elliot <ma...(a)rdrop.remove.com> wrote:
> Assume f:(0,1] -> R is continuous and
> for all sequences {sj} of elements from (0,1] /\ Q
> for which sj -> 0, the sequence f(sj) -> 0.
>
> Show that for all sequences of elements {sj} from (0,1]
> for which sj -> 0, the sequence f(sj) -> 0
>
> In other words, show that f can be continuously extended to 0.

PROOF: Suppose there exists a sequence {sj} of elements from (0,1]
for which sj -> 0 and the sequence f(sj) does not tend to 0. WLOG we
can assume that all f(sj) >= 0 (pass to a subsequence on which f(sj)
has constant sign, replacing f by -f if necessary). By definition of
non-convergence, there must exist a b > 0 and an infinite subsequence
W of {f(sj)} s.t. f(sj) > b. For each sj s.t. f(sj) is in W, let qj
be a rational number s.t. 0 < qj < sj and f(qj) > b/2. Such qj exists
because f is left-continuous at sj. The sequence {qj} is in
(0,1] /\ Q and qj -> 0; but f(qj) > b/2 for all j, so {f(qj)} does
not converge to 0. Contradiction.

QED

This function f doesn't even have to be right-continuous. All you need
is for f to be left-continuous.
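
An illustration of that last remark (the function below is my own choice,
not from the thread): this f is left-continuous on all of (0,1] but has a
jump at 1/2, so it is not continuous; yet the conclusion at 0 still holds,
since |f(x)| <= x near 0.

```python
import math

def f(x):
    # left-continuous at 1/2 (f(1/2) equals the limit from the left),
    # but not right-continuous there: the right limit is 1
    if x <= 0.5:
        return x * math.sin(1.0 / x)
    return 1.0

# along any sequence s_j -> 0+ with s_j <= 1/2, |f(s_j)| <= s_j -> 0
s = [math.sqrt(2) / n for n in range(3, 1000)]
print(all(abs(f(x)) <= x for x in s))  # True
```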