From: Yihong on
Let f: R^n -> R be defined as f(x) = max{<a, x>: a \in A}, where A is a compact subset of R^n. Then f is a convex function hence differentiable (I mean total differentiable i.e. Frechet-differentiable) everywhere but a countable number of points.
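
(For concreteness, here is a minimal numerical sketch of this
definition for a finite set A, written in Python. The particular
set A, the test points, and the helper name support_f are
illustrative choices only.)

import numpy as np

def support_f(x, A):
    # f(x) = max{ <a, x> : a in A }; each row of A is one point a
    A = np.asarray(A, dtype=float)
    x = np.asarray(x, dtype=float)
    return float(np.max(A @ x))

# Example in R^2 with the (arbitrary) set A = {(1,0), (0,1), (-1,-1)}:
A = [(1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]
print(support_f((2.0, 3.0), A))    # a = (0,1) attains the max: 3.0
print(support_f((-1.0, -1.0), A))  # a = (-1,-1) attains the max: 2.0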

I wonder if there is any sufficient condition on A that guarantees the differentiability of f everywhere? Thanks!
From: A N Niel on
In article
<365662068.28964.1262830804779.JavaMail.root(a)gallium.mathforum.org>,
Yihong <yihongwu(a)princeton.edu> wrote:

> Let f: R^n -> R be defined as f(x) = max{<a, x>: a \in A}, where A is a
> compact subset of R^n. Then f is a convex function hence differentiable (I
> mean total differentiable i.e. Frechet-differentiable) everywhere but a
> countable number of points.

That theorem is for a function with a real domain, not R^n. In R^n it
can fail to be differentiable on a curve, for example.
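
(A concrete instance, offered as a hedged Python sketch; the set
A and the helper f below are illustrative. In R^2, taking
A = {(1,0), (-1,0)} gives f(x,y) = |x|, which fails to be
differentiable at every point of the y-axis, an uncountable set.)

def f(x, y):
    # f(x,y) = max{ a1*x + a2*y : (a1,a2) in A } with A = {(1,0), (-1,0)}
    return max(a1 * x + a2 * y for (a1, a2) in [(1.0, 0.0), (-1.0, 0.0)])

h = 1e-6
for y in (0.0, 1.0, -3.5):
    right = (f(h, y) - f(0.0, y)) / h    # one-sided slope in x: -> +1
    left = (f(0.0, y) - f(-h, y)) / h    # one-sided slope in x: -> -1
    print(y, right, left)                # they disagree at every (0, y)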

>
> I wonder if there is any sufficient condition on A that guarantees the
> differentiability of f everywhere? Thanks!

A non-differentiable (at a point) example:
n=1, A = {1,-1}, so f(x) = |x|.

Now, before asking for a general condition to get f differentiable
everywhere, find some interesting examples in R^1 where it occurs.
From: Dave L. Renfro on
Yihong <yihongwu(a)princeton.edu> wrote:

>> Let f: R^n -> R be defined as f(x) = max{<a, x>: a \in A},
>> where A is a compact subset of R^n. Then f is a convex
>> function hence differentiable (I mean total differentiable
>> i.e. Frechet-differentiable) everywhere but a countable
>> number of points.

A. N. Niel wrote (in part):

> That theorem is for a function with a real domain, not R^n.
> In R^n it can fail to be differentiable on a curve, for example.

In case it could be of interest to Yihong, below are some
results about convex functions (defined on an open interval)
that I posted in another group on 14 June 2001 and then
reposted in sci.math on 28 December 2002.

****************************************************************
****************************************************************

DEFINITION: Let f be defined on an interval I. We say that f
is convex on I if whenever x1, x2 belong to I, then
the line segment whose endpoints are (x1,f(x1)) and
(x2,f(x2)) lies on or above the graph {(x, f(x)): x in [x1, x2]}.

If "on or above" is strengthened to "strictly above", we get a
geometric condition for concave up. Many of the results given
below continue to hold when f is concave up. Some of these will
be automatic (e.g. when the hypothesis includes "convex", since
concave up implies convex) and some of these will continue to hold
for other reasons. However, I don't really have the time or desire
right now to try and sort out which continue to hold when "convex"
is replaced with "concave up".

Here is another characterization of convex functions.

THEOREM: A function f is convex on an interval I if and only if
the following condition holds:

Whenever x1 < x2 < x3 belong to I, then

[f(x2) - f(x1)] / (x2 - x1) < or = [f(x3) - f(x2)] / (x3 - x2).

In other words, the average rate of change of f on
[x1, x2] does not exceed the average rate of change
of f on [x2, x3].
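
(A quick numerical spot-check of this characterization, offered
only as a hedged Python sketch; the choice of exp as the convex
function, the random triples, and the helper name slope are all
illustrative assumptions.)

import math, random

def slope(f, u, v):
    # average rate of change of f on [u, v]
    return (f(v) - f(u)) / (v - u)

random.seed(0)
for _ in range(1000):
    x1, x2, x3 = sorted(random.uniform(-5.0, 5.0) for _ in range(3))
    # for the convex function exp, chord slopes should be non-decreasing
    assert slope(math.exp, x1, x2) <= slope(math.exp, x2, x3) + 1e-9
print("slope condition held on all sampled triples for exp")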

---------- SOME RESULTS ----------

Convex functions have many applications, both in pure mathematics
and in applied mathematics. Many useful inequalities, including
the arithmetic mean-geometric mean inequality, can be obtained very
easily using convex functions. In recent years there has been a lot
of interest in convex functions defined on Banach spaces, especially
in their differentiability properties. Finally, convex functions play
a crucial role in linear programming and in optimization theory. For
more information, go to <http://www.google.com/> and search using
"differentiability +of convex functions" (quotes included),
"convex functions optimization" (quotes NOT included),
and "convex functions linear programming" (quotes NOT included).

1. If f is convex on an open interval I, then f is continuous at
each point in I.

2. If f is convex on an open interval I, then f is Lipschitz
continuous on each compact subinterval of I. [Lipschitz continuous
means the difference quotients are bounded.] This strengthens #1.

3. If f is convex on an open interval I, then there are at most
countably many points at which f is not differentiable.

Any countable subset of I can be the set of non-differentiability
points for some convex function on I. Curiously, I couldn't find
this in any of the real analysis texts I looked at. However, it is
a special case of Theorem 4.20 on p. 93 of [1].

4. Assume that f is convex on an open interval I. Then, at each point
of I, both the left derivative of f and the right derivative of
f exist. This strengthens #3. [It can be shown that this property
implies the property given in #3, but not conversely.]

5. If f is convex on an open interval I, then the left derivative
of f is a non-decreasing function, the right derivative of f
is a non-decreasing function, and at each point the left
derivative is less than or equal to the right derivative.
(See [7], p. 109.) This strengthens #4.

6. If f is convex on an open interval I, then the second derivative
of f exists at every point of I except for a set of measure zero.
[This follows from #3, #5, and the fact that monotone functions
are differentiable almost everywhere.]

7. If f is convex on an open interval I, and g is either the left
derivative of f or the right derivative of f (it doesn't matter
which one you let g be), then

f(b) - f(a) = integral of g on the interval [a,b]

for all a,b in I. [Monotone functions are Riemann-integrable,
so this is the usual calculus integral; a numerical sketch of
this identity appears after this list.]

8. Suppose f'' exists at each point of an open interval I. Then f is
convex on I if and only if f''(x) is nonnegative for each x in I.

9. Suppose f is continuous. Then f is convex on an open
interval I if and only if

limit as h --> 0 of [ f(x+h) + f(x-h) - 2*f(x) ] / (h^2)

is nonnegative for each x in I. [This strengthens #8, since
the existence of f'' at a point implies the limit above exists
at that point, and the converse fails.]

Riemann introduced and used this "second order symmetric
derivative" in an 1854 memoir on trigonometric series. It
was in this memoir, incidentally, that Riemann introduced what
we now call the Riemann integral. LaTeX, .dvi, .ps, and .pdf
files of Riemann's 1854 memoir are available at
<http://www.maths.tcd.ie/pub/HistMath/People/Riemann/Trig/>.

10. Suppose f' exists at each point of an open interval I. Then f
is convex on I if and only if f' is non-decreasing on I. This
strengthens #8 and neither implies nor is implied by #9.

11. If h is non-decreasing on an open interval I and 'a' belongs
to I, then the function f defined on I by

f(x) = integral of h on the interval [a,x]

is convex on I. [This refines a result that arises by putting
#5 and #8 together.]
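
(The numerical sketch referred to in #7, offered purely as an
illustration; the particular convex function f(x) = |x| + x^2, its
right derivative g, and the crude left Riemann sum are all
illustrative choices.)

def f(x):
    # convex on R, not differentiable at 0
    return abs(x) + x**2

def g(x):
    # right derivative of f: 2x - 1 for x < 0, 2x + 1 for x >= 0
    return 2.0 * x + (1.0 if x >= 0 else -1.0)

a, b, n = -1.0, 2.0, 300000
step = (b - a) / n
# left Riemann sum of g over [a, b]
riemann_sum = sum(g(a + k * step) * step for k in range(n))
print(riemann_sum, f(b) - f(a))    # both approximately 4.0

Using the left derivative instead (which takes the value -1 at 0)
changes g at only one point and so gives the same integral, as #7
asserts.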

---------- SOME REFERENCES ----------

[1] Yoav Benyamini and Joram Lindenstrauss, "Geometric Nonlinear
Functional Analysis", Volume 1, Colloquium Publications #48,
American Mathematical Society, 2000. [chapter 4: "Differentiation
of Convex Functions", pp. 83-98]

[2] Ralph P. Boas, "A Primer of Real Functions", 4th edition
(revised and updated by Harold P. Boas), Carus Mathematical
Monographs 13, Mathematical Association of America, 1996.
[pages 175-186]

[3] Andrew M. Bruckner, "Differentiation of Real Functions",
CRM Monograph Series #5, American Mathematical Society, 1994.
[pages 131-134 (advanced)]

[4] Krishna M. Garg, "Theory of Differentiation", Canadian
Mathematical Society Series of Monographs and Advanced Texts
#24, John Wiley and Sons, 1998. [pages 195-198 (very advanced)]

[5] R. Kannan and Carole King Krueger, "Advanced Analysis on the
Real Line", Springer-Verlag, 1996. [pages 74-76]

[6] A.C.M. van Rooij and W.H. Schikhof, "A Second Course on Real
Functions", Cambridge University Press, 1982. [pages 14-18]

[7] H. L. Royden, "Real Analysis", 2nd edition, Macmillan, 1968.
[pages 108-110]

[8] Brian S. Thomson, "Symmetric Properties of Real Functions",
Pure and Applied Mathematics #183, Marcel Dekker, 1994.
[pages 202-209 (advanced)]

[9] Richard L. Wheeden and Antoni Zygmund, "Measure and Integral",
Pure and Applied Mathematics #43, Marcel Dekker, 1977.
[pages 118-124]

****************************************************************
****************************************************************

Dave L. Renfro
From: Ronald Bruck on
In article <070120100754421701%anniel(a)nym.alias.net.invalid>, A N Niel
<anniel(a)nym.alias.net.invalid> wrote:

> In article
> <365662068.28964.1262830804779.JavaMail.root(a)gallium.mathforum.org>,
> Yihong <yihongwu(a)princeton.edu> wrote:
>
> > Let f: R^n -> R be defined as f(x) = max{<a, x>: a \in A}, where A is a
> > compact subset of R^n. Then f is a convex function hence differentiable (I
> > mean total differentiable i.e. Frechet-differentiable) everywhere but a
> > countable number of points.
>
> That theorem is for a function with a real domain, not R^n. In R^n it
> can fail to be differentiable on a curve, for example.

Not even that. The best you can say in R^d is that f is differentiable
a.e. Sets of measure 0 aren't necessarily countable. And, IIRC, every
set of measure 0 is the set of non-differentiability of some convex
function.

-- Ron Bruck
From: Dave L. Renfro on
Ronald Bruck wrote:

> Not even that. The best you can say in R^d is that
> f is differentiable a.e. Sets of measure 0 aren't
> necessarily countable. And, IIRC, every set of
> measure 0 is the set of non-differentiability of
> some convex function.

This isn't true (and probably isn't what you intended to say),
since the set of non-differentiability of an arbitrary function
is G_delta_sigma (and without knowing this, it would still
be enough to know the non-differentiability set of a continuous
function is a Borel set). What might be true (and what you
probably meant) is that given any set E of measure 0, there
exists a convex function f whose set of non-differentiability
is a superset of E (i.e. f fails to have a derivative at each
point of E).

Dave L. Renfro